Recent blog entries

22 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: conclusion

There's been a lot of content in this series about the Juno Nova mid-cycle meetup, so thanks to those who followed along with me. I've also received a lot of positive feedback about the posts, so I think the exercise was worthwhile, and I will try to be more organized for the next mid-cycle (and therefore get these posts out earlier). To recap quickly, here's what was covered in the series:

The first post in the series covered social issues: things like how we organized the mid-cycle meetup, how we should address core reviewer burnout, and the current state of play of the Juno release. Bug management has been an ongoing issue for Nova for a while, so we talked about that as well. We are making progress on this issue, but more needs to be done, and it's going to take help from everyone to get there. There was also discussion of proposals on how to handle the review workload in the Kilo release, although nothing has been finalized yet.

The second post covered the current state of play for containers in Nova, as well as our future direction. Unexpectedly, this was by far the most-read post in the series, if Google Analytics is to be believed. There is clear interest in support for containers in Nova, and I expect this to be a hot topic at the Paris summit as well. We're also working on merging the Ironic driver into Nova. This is progressing well, and we hope to have it fully merged by the end of the Juno release cycle.

At a superficial level the post about DB2 support in Nova is a simple tale of IBM's desire to have people use their database. However, to the skilled observer it's deeper than that -- it's a tale of love and loss, as well as a discussion of how to safely move our schema forward without causing undue pain for our large deployments. We also covered the state of cells support in Nova, with the main issue being that we really need cells to be feature complete. Hopefully people are working on a plan for this now. Another internal refactoring is the current scheduler work, which is important because it positions us for the future.

We also discussed the next gen Nova API, and talked through the proposed upgrade path for the transition from nova-network to neutron.

For those who are curious, there are 8,259 words (not that I am counting or anything) in this post series including this summary post. I estimate it took me about four working days to write (ED: and about two days for his trained team of technical writers to edit into mostly coherent English). I would love your feedback on whether you found the series useful, as it's a pretty big investment in time.

Tags for this post: openstack juno nova mid-cycle summary
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots

Syndicated 2014-08-21 23:47:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

22 Aug 2014 etbe   » (Master)

Men Commenting on Women’s Issues

A lecture at LCA 2011 which included some inappropriate slides was followed by long discussions on mailing lists. In February 2011 I wrote a blog post debunking some of the bogus arguments in two lists [1]. One of the noteworthy incidents in the mailing list discussion concerned Ted Ts’o (an influential member of the Linux community) debating the definition of rape. My main point on that issue in Feb 2011 was that it’s insensitive to needlessly debate the statistics.

Recently Valerie Aurora wrote about another aspect of this on The Ada Initiative blog [2] and on her personal blog. Some of her significant points are that conference harassment doesn’t end when the conference ends (it can continue on mailing lists etc), that good people shouldn’t do nothing when bad things happen, and that free speech doesn’t mean freedom from consequences or the freedom to use private resources (such as conference mailing lists) without restriction.

Craig Sanders wrote a very misguided post about the Ted Ts’o situation [3]. One of the many things wrong with his post is his statement “I’m particularly disgusted by the men who intervene way too early – without an explicit invitation or request for help or a clear need such as an immediate threat of violence – in womens’ issues“.

I believe that as a general rule when any group of people are involved in causing a problem they should be involved in fixing it. So when we have problems that are broadly based around men treating women badly the prime responsibility should be upon men to fix them. It seems very clear that no matter what scope is chosen for fixing the problems (whether it be lobbying for new legislation, sociological research, blogging, or directly discussing issues with people to change their attitudes) women are doing considerably more than half the work. I believe that this is an indication that overall men are failing.

Asking for Help

I don’t believe that members of minority groups should have to ask for help. Asking isn’t easy; having someone spontaneously offer help because it’s the right thing to do can be a lot easier to accept psychologically than having to beg for help. There is a book named “Women Don’t Ask” which has a page on the geek feminism Wiki [4]. I think the fact that so many women relate to a book named “Women Don’t Ask” is an indication that we shouldn’t expect women to ask directly, particularly in times of stress. The Wiki page notes a criticism of the book that some specific requests are framed as “complaining”, so I think we should consider a “complaint” from a woman as a direct request to do something.

The geek feminism blog has an article titled “How To Exclude Women Without Really Trying” which covers many aspects of one incident [5]. Near the end of the article is a direct call for men to be involved in dealing with such problems. The geek feminism Wiki has a page on “Allies” which includes “Even a blog post helps” [6]. It seems clear from public web sites run by women that women really want men to be involved.

Finally when I get blog comments and private email from women who thank me for my posts I take it as an implied request to do more of the same.

One thing that we really don’t want is to have men wait and do nothing until there is an immediate threat of violence. There are two massive problems with that plan: one is that being saved from a violent situation isn’t a fun experience; the other is that an immediate threat of violence is most likely to happen when there is no-one around to intervene.

Men Don’t Listen to Women

Rebecca Solnit wrote an article about being ignored by men titled “Men Explain Things to Me” [7]. When discussing women’s issues the term “Mansplaining” is often used for that sort of thing; the geek feminism Wiki has some background [8]. It seems obvious that the men who have the greatest need to be taught some things related to women’s issues are the ones who are least likely to listen to women. This implies that other men have to teach them.

Craig says that women need “space to discover and practice their own strength and their own voices“. I think that the best way to achieve that goal is to listen when women speak. Of course that doesn’t preclude speaking as well, just listen first, listen carefully, and listen more than you speak.

Craig claims that when men like me and Matthew Garrett comment on such issues we are making “women’s spaces more comfortable, more palatable, for men“. From all the discussion on this it seems quite obvious that what would make things more comfortable for men would be for the issue to never be discussed at all. It seems to me that two of the ways of making such discussions uncomfortable for most men are to discuss sexual assault and to discuss what should be done when you have a friend who treats women in a way that you don’t like. Matthew has covered both of those so it seems that he’s doing a good job of making men uncomfortable – I think that this is a good thing, a discussion that is “comfortable and palatable” for the people in power is not going to be any good for the people who aren’t in power.

The Voting Aspect

It seems to me that when certain issues are discussed we have a social process that is some form of vote. If one person complains then they are portrayed as crazy. When other people agree with the complaint then their comments are marginalised to try to preserve the narrative of one crazy person. It seems that in the case of the discussion about Rape Apology and LCA2011 most men who comment regard it as one person (either Valerie Aurora or Matthew Garrett) causing a dispute. There is even some commentary which references my blog post about Rape Apology [9] but somehow manages to ignore me when it comes to counting more than one person agreeing with Valerie. For reference, David Zanetti was the first person to use the term “apologist for rapists” in connection with the LCA 2011 discussion [10]. So we have a count of at least three men already.

These same patterns always happen, so making a comment in support makes a difference. It doesn’t have to be insightful, long, or well written; merely “I agree” and a link to a web page will help. Note that a blog post is much better than a comment in this regard: comments are much like conversation, while a blog post is a stronger commitment to a position.

I don’t believe that the majority is necessarily correct. But an opinion which is supported by too small a minority isn’t going to be considered much by most people.

The Cost of Commenting

The Internet is a hostile environment; when you comment on a contentious issue there will be people who demonstrate their disagreement in uncivilised and even criminal ways. S. E. Smith wrote an informative post for Tiger Beatdown about the terrorism that feminist bloggers face [11]. I believe that men face fewer threats than women when they write about such things, and that the threats are less credible. I don’t believe that any of the men who have threatened me have the ability to carry out their threats, but I expect that many women who receive such threats will consider them to be credible.

The difference in the frequency and nature of the terrorism (and there is no other word for what S. E. Smith describes) experienced by men and women gives a vastly different cost to commenting. So when men fail to address issues related to the behavior of other men that isn’t helping women in any way. It’s imposing a significant cost on women for covering issues which could be addressed by men for minimal cost.

It’s interesting to note that there are men who consider themselves to be brave because they write things which will cause women to criticise them or even accuse them of misogyny. I think that the women who write about such issues even though they will receive threats of significant violence are the brave ones.

Not Being Patronising

Craig raises the issue of not being patronising, which is of course very important. I think that the first thing to do to avoid being perceived as patronising in a blog post is to cite adequate references. I’ve spent a lot of time reading what women have written about such issues and cited the articles that seem most useful in describing the issues. I’m sure that some women will disagree with my choice of references and some will disagree with some of my conclusions, but I think that most women will appreciate that I read what women write (it seems that most men don’t).

It seems to me that a significant part of feminism is about women not having men tell them what to do. So when men offer advice on how to go about feminist advocacy it’s likely to be taken badly. It’s not just that women don’t want advice from men, but that advice from men is usually wrong. There are patterns in communication which mean that the effective strategies for women communicating with men are different from the effective strategies for men communicating with men (see my previous section on men not listening to women). Also, there’s a common trend of men offering simplistic advice on how to solve problems; one thing to keep in mind is that any problem which affects many people and is easy to solve has probably been solved a long time ago.

Often when social issues are discussed there is some background in the life experience of the people involved. For example Rookie Mag has an article about the street harassment women face which includes many disturbing anecdotes (some of which concern primary school students) [12]. Obviously anyone who has lived through that sort of thing (which means most women) will instinctively understand some issues related to threatening sexual behavior that I can’t easily understand even when I spend some time considering the matter. So there will be things which don’t immediately appear to be serious problems to me but which are interpreted very differently by women. The non-patronising approach to such things is to accept the concerns women express as legitimate, to try to understand them, and not to argue about it. For example the issue that Valerie recently raised wasn’t something that seemed significant when I first read the email in question, but I carefully considered it when I saw her posts explaining the issue and what she wrote makes sense to me.

I don’t think it’s possible for a man to make a useful comment on any issue related to the treatment of women without consulting multiple women first. I suggest a pre-requisite for any man who wants to write any sort of long article about the treatment of women is to have conversations with multiple women who have relevant knowledge. I’ve had some long discussions with more than a few women who are involved with the FOSS community. This has given me a reasonable understanding of some of the issues (I won’t claim to be any sort of expert). I think that if you just go and imagine things about a group of people who have a significantly different life-experience then you will be wrong in many ways and often offensively wrong. Just reading isn’t enough, you need to have conversations with multiple people so that they can point out the things you don’t understand.

This isn’t any sort of comprehensive list of ways to avoid being patronising, but it’s a few things which seem like common mistakes.

Anne Onne wrote a detailed post advising men who want to comment on feminist blogs etc [13], most of it applies to any situation where men comment on women’s issues.

Related posts:

  1. A Lack of Understanding of Nuclear Issues Ben Fowler writes about the issues related to nuclear power...

Syndicated 2014-08-22 01:54:26 from etbe - Russell Coker

22 Aug 2014 marnanel   » (Journeyer)

Gentle Readers: to cut a cabbage leaf

Gentle Readers
a newsletter made for sharing
volume 1, number 19
21st August 2014: to cut a cabbage leaf
What I’ve been up to

I've been looking into PhD possibilities. But more of that later.

And we visited the John Rylands Library for the first time, a beautiful place in Victorian Gothic made as a memorial to a local industrialist. (The law students among you may know him as a party to Rylands v Fletcher.) It has an impressive collection of books and manuscripts, including the oldest known fragment of the New Testament, part of John's gospel copied only a few decades after the book was written.

As if that weren't enough, the building is quite breathtakingly beautiful. Here's part of the reading room:

http://thomasthurman.org/pics/rylands-1

The library also contains a dragon named Grumbold. Regular readers who remember Not Ordinarily Borrowable, a story of mine largely about dragons and libraries, may judge of my surprise to discover it coming true.

A poem of mine

STORYTELLING, PART II (T80)

When Merlin looked upon this land,
he knew by magic arts
the anger in the acts of men,
the hatred in their hearts:
he saw despair and deadly things,
and knew our hope must be
the magic when the kettle sings
to make a pot of tea.

When Galahad applied to sit
in splendour at the Table,
he swore an oath to fight for good
as far as he was able.
But Arthur put the kettle on,
and bade him sit and see
the goodness that is brought anon
by making pots of tea.

When Arthur someday shall return
in glory, with his knights,
he'll rout our foes and bless the poor
and put the land to rights.
And shall we drink his health in ale?
Not so! It seems to me
he'll meet us in the final tale
and share a pot of tea.

A picture

http://gentlereaders.uk/pics/caught-the-sun

I was out fishing all day,
and I seem to have caught the sun

Something wonderful

Suppose I asked you to name the world's great heroes? (For example, as you may recall, some talk of Alexander.) Well, in the Middle Ages, a fair amount of thought went into the list. Who was an example of virtue and valour; whose chivalry was worth emulating?

One such list is known in English as the Nine Worthies. It was drawn up in the early 1300s, and remained a popular theme in art for centuries after. Here they are in 1460, looking for all the world like a medieval pack of Top Trumps:

http://gentlereaders.uk/pics/nine-worthies

Even though some of these men had lived (or were supposed to have lived) millennia earlier, they are all drawn wearing armour of the time, and bearing their own coat of arms, as if they lived in that very moment. This is because they are deliberately idealised -- after all, as a careful perusal of the Old Testament will show, not all of them were in fact models of chivalry.

They are divided into three groups of three: three Jewish heroes, three Christian heroes, and three pagan heroes -- that is, pagan in the old sense of not following an Abrahamic religion.

The Jewish heroes are: Joshua the son of Nun, who led the invasion of Canaan; David the son of Jesse, who became king and wrote psalms; and Judas Maccabeus, who led the revolt against the Syrians now commemorated by Hanukkah. (Don't confuse Judas Maccabeus with Judas Iscariot.)

The pagan heroes are: Hector of Troy, a great warrior of the Trojan War; Julius Caesar, the first emperor of Rome; and Alexander the Great.

The Christian heroes are: Arthur, the hero of the Matter of Britain; Charles the Great, also called Charlemagne, the first emperor of the Holy Roman Empire; and Godfrey of Bouillon, who became the first crusader king of Jerusalem but disclaimed the title.

I am particularly interested by the heraldry. How did they make up new and unique coats of arms for people who had been dead for three thousand years? David has a harp because he composed psalms (and not because he was king of Ireland). Julius has an eagle rather like the one on the Roman standard; Charles has the same, appropriately for someone who was also trying to become Emperor of Rome, but combined with the lily pattern known as "France Ancient". Others of them are baffling to me: what is Joshua bearing, for example? I did find a reference to the arms they made up for Alexander in a book, but frustratingly I ran out of time to research this.

I am glad to report that there were also nine female Worthies to balance out the nine men. Unfortunately none of the writers seem to agree about which nine women they were.

Something from someone else

When a certain Charles Macklin claimed he could repeat any sentence he heard, no matter how complex, Samuel Foote allegedly composed this sentence impromptu:

THE GREAT PANJANDRUM
by Samuel Foote

So she went into the garden
to cut a cabbage-leaf
to make an apple-pie;
and at the same time
a great she-bear, coming down the street,
pops its head into the shop.
What! no soap?
So he died,
and she very imprudently married the Barber:
and there were present
the Picninnies,
and the Joblillies,
and the Garyulies,
and the great Panjandrum himself,
with the little round button at top;
and they all fell to playing the game of catch-as-catch-can,
till the gunpowder ran out at the heels of their boots

Colophon
Gentle Readers is published on Mondays and Thursdays, and I want you to share it. The archives are at http://gentlereaders.uk/ , and so is a form to get on the mailing list. If you have anything to say or reply, or you want to be added or removed from the mailing list, I’m at thomas@thurman.org.uk and I’d love to hear from you. The newsletter is reader-supported; please pledge something if you can afford to, and please don't if you can't. Love and peace to you all.

This entry was originally posted at http://marnanel.dreamwidth.org/309388.html. Please comment there using OpenID.

Syndicated 2014-08-22 00:30:41 from Monument

22 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: the next generation Nova API

This is the final post in my series covering the highlights from the Juno Nova mid-cycle meetup. In this post I will cover our next generation API, which used to be called the v3 API but is largely now referred to as the v2.1 API. Getting to this point has been one of the more painful processes I've ever seen in Nova's development history, and I think we've learnt some important things along the way about how large distributed projects operate. My hope is that we remember these lessons next time we hit something as contentious as our API re-write has been.

Now on to the API itself. It started out as an attempt to make our current API more maintainable and less confusing to our users. We deliberately decided that we would not focus on adding features, but instead attempt to reduce as much technical debt as possible. This development effort went on for about a year before we realized we'd made a mistake: we had assumed that our users would find it trivial to move to a new API, and that they'd do so even without compelling new features. That turned out to be entirely incorrect.

I want to make it clear that this wasn't a mistake on the part of the v3 API team. They implemented what the technical leadership of Nova at the time asked for, and were very surprised when we discovered our mistake. We've now spent over a release cycle trying to recover from that mistake as gracefully as possible, but the upside is that the API we will be delivering is significantly more future proof than what we have in the current v2 API.

At the Atlanta Juno summit, it was agreed that the v3 API would never ship in its current form, and that we would instead provide a v2.1 API. This API would be 99% compatible with the current v2 API, the main incompatibility being what we call 'input validation': if you pass a malformed parameter to the API, we will now tell you so instead of silently ignoring it. The other thing we are going to add in the v2.1 API is a system of 'micro-versions', which allows a client to specify what version of the API it understands, and the server to gracefully degrade to older versions if required.

This micro-version system is important, because the next step is to start adding the v3 cleanups and fixes into the v2.1 API, but as a series of micro-versions. That way we can drag the majority of our users with us into a better future, without abandoning users of older API versions. I should note at this point that the mechanics for deciding the minimum micro-version that a given release of Nova will support are largely undefined at the moment. My instinct is that we will tie it to stable release versions in some way; if your client dates back to a release of Nova that we no longer support, then we might expect you to upgrade. However, that hasn't been debated yet, so don't take my thoughts on that as rigid truth.
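
As a rough illustration of how this might look on the wire (a hypothetical sketch only; the exact header name and numbering scheme had not been finalized when this was written), a client could declare the micro-version it speaks on each request:

    # Hypothetical example: ask for a specific micro-version of the API.
    curl http://nova.example.com:8774/v2.1/servers \
        -H "X-Auth-Token: $TOKEN" \
        -H "X-OpenStack-Nova-API-Version: 2.3"

A server which doesn't implement the requested micro-version could then degrade to the newest version both sides understand.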

Frustratingly, the intent of the v2.1 API has been agreed and unchanged since the Atlanta summit, yet we're late in the Juno release and most of the work isn't done yet. This is because we got bogged down in the mechanics of how micro-versions will work, and how the translation for older API versions will work inside the Nova code later on. We finally unblocked this at the mid-cycle meetup, which means this work can finally progress again.

The main concern that we needed to resolve at the mid-cycle was the belief that if the v2.1 API was implemented as a series of translations on top of the v3 code, then the translation layer would be quite thick and complicated. This raises issues of maintainability, as well as the amount of code we need to understand. The API team has now agreed to produce an API implementation that is just the v2.1 functionality, and will then layer things on top of that. This is actually invisible to users of the API, but it leaves us with an implementation where changes after v2.1 are additive, which should be easier to maintain.

One of the other changes in the original v3 code is that we stopped proxying functionality for Neutron, Cinder and Glance. With the decision to implement a v2.1 API instead, we will need to rebuild that proxying implementation. To unblock v2.1, and based on advice from the HP and Rackspace public cloud teams, we have decided to delay implementing these proxies. So, the first version of the v2.1 API we ship will not have proxies, but later versions will add them in. The current v2 API implementation will not be removed until all the proxies have been added to v2.1. This is prompted by the belief that many advanced API users don't use the Nova API proxies, and therefore could move to v2.1 without them being implemented.

Finally, I want to thank the Nova API team, especially Chris Yeoh and Kenichi Oomichi for their patience with us while we have worked through these complicated issues. It's much appreciated, and I find them a consistent pleasure to work with.

That brings us to the end of my summary of the Nova Juno mid-cycle meetup. I'll write up a quick summary post that ties all of the posts together, but apart from that this series is now finished. Thanks for following along.

Tags for this post: openstack juno nova mid-cycle summary api v3 v2.1
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots

Syndicated 2014-08-21 16:52:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

21 Aug 2014 mikal   » (Journeyer)

Don't Tell Mum I Work On The Rigs




ISBN: 1741146984
LibraryThing
I read this book while on a flight a few weeks ago. It's surprisingly readable and relatively short -- you can knock it over in a single long haul flight. The book covers the memoirs of an oil rig worker, from childhood right through to middle age. That's probably the biggest weakness of the book: it just kind of stops when the writer reaches the present day. I felt there wasn't really a conclusion, which was disappointing.
An interesting, fun read however.

Tags for this post: book paul_carter oil rig memoir
Related posts: Extreme Machines: Eirik Raude; New Orleans and sea level; Kern County oil wells on I-5; What is the point that people's morals evaporate?


Syndicated 2014-08-21 13:45:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

21 Aug 2014 Stevey   » (Master)

Updating Debian Administration

Recently I've been getting annoyed with the Debian Administration website; too often it would be slower than it should be considering the resources behind it.

As a brief recap I have six nodes:

  • 1 x MySQL Database - The only MySQL database I personally manage these days.
  • 4 x Web Nodes.
  • 1 x Misc server.

The misc server is designed to display events. There is a node.js listener which receives UDP messages and stores them in a rotating buffer. The messages might contain things like "User bob logged in", "Slaughter ran", etc. It's a neat hack which gives a good feeling of what is going on cluster-wide.

I need to rationalize that code - but there's a very simple predecessor posted on github for the curious.
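
For illustration, feeding an event into a listener like that can be as simple as sending a single UDP datagram; a sketch with a made-up hostname and port:

    # Emit a one-line event to the (hypothetical) misc server over UDP.
    echo "User bob logged in" | nc -u -w1 misc.example.com 4001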

Anyway, enough diversions: the database is tuned, and "small". The misc server is almost entirely irrelevant, non-public, and not explicitly advertised.

So what do the web nodes run? Well they run a lot. Potentially.

Each web node has four services configured:

  • Apache 2.x - All nodes.
  • uCarp - All nodes.
  • Pound - Master node.
  • Varnish - Master node.

Apache runs the main site, listening on *:8080.

One of the nodes will be special and will claim a virtual IP provided via ucarp. The virtual IP is actually the end-point visitors hit, meaning we have:

Master host, running:

  • Apache.
  • Pound.
  • Varnish.

Other hosts, running:

  • Apache.

Pound is configured to listen on the virtual IP and perform SSL termination. That means that incoming requests get proxied from "vip:443 -> vip:80". Varnish listens on "vip:80" and proxies to the back-end apache instances.

The end result should be high availability. In the typical case all four servers are alive, and all is well.

If one server dies, and it is not the master, then it will simply be dropped as a valid back-end. If a single server dies and it is the master then a new one will appear, thanks to the magic of ucarp, and the remaining three will be used as expected.
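
To give a flavour of the failover piece, here is a minimal sketch of the sort of ucarp invocation each node might run; the interface, addresses, secret and scripts are illustrative, not my actual configuration:

    # Take part in electing a master for the shared virtual IP; if the
    # current master dies, ucarp promotes another node automatically.
    ucarp -i eth0 -s 10.0.0.11 -v 1 -p sharedsecret -a 10.0.0.100 \
          --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh

The up/down scripts typically just add or remove the virtual IP on the interface.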

I'm sure there is a pathological case when all four hosts die, and at that point the site will be down, but that's something that should be atypical.

Yes, I am prone to over-engineering. The site doesn't have any availability requirements that justify this setup, but it is good to experiment and learn things.

So, with this setup in mind, with incoming requests (on average) being divided at random onto one of four hosts, why is the damn thing so slow?

We'll come back to that in the next post.

(Good news though; I fixed it ;)

Syndicated 2014-08-21 08:50:10 from Steve Kemp's Blog

20 Aug 2014 berend   » (Journeyer)

Another update on the Linux NFS server problems I had: massively high i/o by process jbd2/xvda1-8.

Problems came back. So I switched from paravirtualisation to HVM. And initially that appeared to work.

But now, any time we launch another php5-fpm server, jbd2/xvda1-8 immediately jumps to its max. Tearing my hair out.

20 Aug 2014 bagder   » (Master)

The “right” keyboard layout

I’ve never considered myself very picky about the particular keyboard I use for my machines. Sure, I work full-time and spare time in front of the same computer, and thus I easily spend 2500-3000 hours a year in front of it, but I haven’t thought much about the keyboard itself. I wish I had some actual stats on how many key-presses I do on it on an average day or year.

Then, one of these hot summer days, I left the roof window above my work place open a little bit too much when a very intense rain storm hit our neighborhood while I was away for a brief moment. To put it shortly, the huge amounts of water that poured in luckily destroyed only one piece of electronics for me: my trusty old keyboard -- the keyboard I had just randomly picked from some old computer without any consideration a bunch of years ago.

So the old one was dead; I just picked another keyboard I had lying around.

But man, very soft rubber-style keys are very annoying to work with. Then I picked another with a weird layout and a control key that required a little too much pressure to be comfortable. So, my race for a good enough keyboard had begun. Obviously I couldn’t just pick a random cheap new one and be happy with it.

Nordic key layout

That’s what they call it. It is even a Swedish layout, which among a few other details means it features å, ä and ö keys at a rather prominent place. See illustration. Those letters are used fairly frequently in our language. We have a few peculiarities in the Swedish layout that are downright impractical for programming, like how the {[]} symbols all require AltGr to be pressed, while slash, asterisk and underscore require Shift, etc. Still, I’ve learned to program on such a layout, so I’m quite used to those odd choices by now…

kb-nordic

Cursor keys

I want the cursor keys to be of “standard size”, have the correct location and relative positions. Like below. Also, the page up and page down keys should not be located close to the cursor keys (like many laptop keyboards do).

keyboard with marked cursorkeys

Page up and down

The page up and page down keys should instead be located in the group of six keys above the cursor keys. The group should have a little gap between it and the three keys (print screen, scroll lock and pause/break) above them so that finding the upper row is easy and quick without looking.

page up and down keys

Backspace

I’m not really a good keyboard typist. I make a lot of mistakes and need to use the backspace key quite a lot when doing so. Thus I’m a huge fan of the slightly enlarged backspace key layout, so that I can find and hit that key easily. Also, the return key is a fairly important one, so I like the enlarged and strangely shaped version of that as well. Pretty standard.

kb-backspace

Further details

The Escape key should have a little gap below it so that I can find it easily without looking.

The Caps Lock key is completely useless, as locking caps is not something a normal person does, but it can be reprogrammed for other purposes. I’ve still refrained from doing so, mostly to not get accustomed to “weird” setups that make it (even) harder for me to move between different keyboards at different places. Just recently I’ve configured it to work as ctrl – let’s see how that works out.
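
For reference, one common way to do such a remap on an X11 system (just one of several mechanisms, and not necessarily the one I used) is a single xkb option:

    # Make Caps Lock act as an additional Ctrl key under X11.
    setxkbmap -option ctrl:nocaps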

The F-keys are pretty useless. I use F5 sometimes to refresh web pages but as ctrl-r works just as well I don’t see a strong need for them in my life.

Numpad – a completely useless piece of the keyboard that I would love to get rid of – I never use any of those keys. Never. Unfortunately I haven’t found any otherwise decent keyboards without the numpad.

Func KB-460

The Func KB-460 is the keyboard I ended up with this time in my search. It has some fun extra cruft such as two USB ports and a red backlight (that can be made to pulse). The backlight gave me extra points from my kids.

Func KB-460 keyboard

It is “mechanical”, which obviously is some sort of thing among keyboards that has followers and is supposed to be very good. I remain optimistic about this particular model, even if there are a few minor things about it I haven’t yet gotten used to. I hope I’ll just get used to them.

How it could look

Based on my preferences and what keys I think I use, I figure an ideal keyboard layout for me could very well look like this:

my keyboard layout

Keyfreq

I have decided to go further and “scientifically” measure how I use my keyboard, which keys I use the most and similar data and metrics. Turns out the most common keylog program on Linux doesn’t log enough details, so I forked it and created keyfreq for this purpose. I’ll report details about this separately – soon.

Syndicated 2014-08-20 09:26:36 from daniel.haxx.se

20 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: nova-network to Neutron migration

This will be my second last post about the Juno Nova mid-cycle meetup, which covers the state of play for work on the nova-network to Neutron upgrade.

First off, some background information. Neutron (formerly Quantum) was developed over a long period of time to replace nova-network, and was added to the OpenStack Folsom release. The development of new features for nova-network was frozen in the Nova code base, so that users would transition to Neutron. Unfortunately the transition period took longer than expected. We ended up having to unfreeze development of nova-network, in order to fix reliability problems that were affecting our CI gating and the reliability of deployments for existing nova-network users. Also, at least two OpenStack companies were carrying significant feature patches for nova-network, which we wanted to merge into the main code base.

You can see the announcement at http://lists.openstack.org/pipermail/openstack-dev/2014-January/025824.html. The main enhancements post-freeze were a conversion to use our new objects infrastructure (and therefore conductor), as well as features that were being developed by Nebula. I can't find any contributions from the other OpenStack company in the code base at this time, so I assume they haven't been proposed.

The nova-network to Neutron migration path has come to the attention of the OpenStack Technical Committee, who have asked for a more formal plan to address Neutron feature gaps and deprecate nova-network. That plan is tracked at https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage. As you can see, there are still some things to be merged which are targeted for juno-3. At the time of writing this includes grenade testing; Neutron being the devstack default; a replacement for nova-network multi-host; a migration plan; and some documentation. They are all making good progress, but until these action items are completed, Nova can't start the process of deprecating nova-network.

The discussion at the Nova mid-cycle meetup was around the migration planning item in the plan. There is a Nova specification that outlines one possible plan for live upgrading instances (i.e., no instance downtime) at https://review.openstack.org/#/c/101921/, but this will probably now be replaced with a simpler migration path involving cold migrations. This is prompted by not being able to find a user that absolutely has to have a live upgrade. There was some confusion, because of a belief that the TC was requiring a live upgrade plan. But as Russell Bryant says in the meetup etherpad:

"Note that the TC has made no such statement on migration expectations other than a migration path must exist, both projects must agree on the plan, and that plan must be submitted to the TC as a part of the project's graduation review (or project gap review in this case). I wouldn't expect the TC to make much of a fuss about the plan if both Nova and Neutron teams are in agreement."


The current plan is to go forward with a cold upgrade path, unless a user comes forward with an absolute hard requirement for a live upgrade, and a plan to fund developers to work on it.

At this point, it looks like we are on track to get all of the functionality we need from Neutron in the Juno release. If that happens, we will start the nova-network deprecation timer in Kilo, with my expectation being that nova-network would be removed in the "M" release. There is also an option to change the default networking implementation to Neutron before the deprecation of nova-network is complete, which will mean that new deployments are defaulting to the long term supported option.

In the next (and probably final) post in this series, I'll talk about the API formerly known as Nova API v3.

Tags for this post: openstack juno nova mid-cycle summary nova-network neutron migration
Related posts: Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots; Juno nova mid-cycle meetup summary: containers

Syndicated 2014-08-19 20:37:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

19 Aug 2014 joey   » (Master)

using a debian package as the remote for a local config repo

Today I did something interesting with the Debian packaging for propellor, which seems like it could be a useful technique for other Debian packages as well.

Propellor is configured by a directory, which is maintained as a local git repository. In propellor's case, it's ~/.propellor/. This contains a lot of Haskell files, in fact the entire source code of propellor! That's really unusual, but I think this can be generalized to any package whose configuration is maintained in its own git repository on the user's system. From now on, I'll refer to this as the config repo.

The config repo is set up the first time a user runs propellor. But, until now, I didn't provide an easy way to update the config repo when the propellor package was updated. Nothing would break, but the old version would be used until the user updated it themselves somehow (probably by pulling from a git repository over the network, bypassing apt's signature validation).

So, what I wanted was a way to update the config repo, merging in any changes from the new version of the Debian package, while preserving the user's local modifications. Ideally, the user could just run git merge upstream/master, where the upstream repo was included in the Debian package.

But, that can't work! The Debian package can't reasonably include the full git repository of propellor with all its history. So, any git repository included in the Debian binary package would need to be a synthetic one, that only contains probably one commit that is not connected to anything else. Which means that if the config repo was cloned from that repo in version 1.0, then when version 1.1 came around, git would see no common parent when merging 1.1 into the config repo, and the merge would fail horribly.

To solve this, let's assume that the config repo's master branch has a parent commit that can be identified, somehow, as coming from a past version of the Debian package. It doesn't matter which version, although the last one merged with will be best. (The easy way to do this is to set refs/heads/upstream/master to point to it when creating the config repo.)

Once we have that parent commit, we have three things:

  1. The current content of the config repo.
  2. The content from some old version of the Debian package.
  3. The new content of the Debian package.

Now git can be used to merge #3 onto #2, with -Xtheirs, so the result is a git commit with parents of #3 and #2, and content of #3. (This can be done using a temporary clone of the config repo to avoid touching its contents.)
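
Sketched out in git commands, with $OLD and $NEW standing in for the old and new package-shipped commits (a rough illustration of the idea, not the exact implementation):

    # Work in a temporary clone so the user's config repo isn't touched.
    git clone ~/.propellor /tmp/propellor-merge
    cd /tmp/propellor-merge
    git checkout -b merged $OLD    # start from the old package content (#2)
    git merge -X theirs $NEW       # merge the new content (#3), preferring
                                   # its side wherever the two conflict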

Such a git commit can be merged into the config repo, without any conflicts other than those the user might have caused with their own edits.

So, propellor will tell the user when updates are available, and they can simply run git merge upstream/master to get them. The resulting history looks like this:

* Merge remote-tracking branch 'upstream/master'
|\  
| * merging upstream version
| |\  
| | * upstream version
* | user change
|/  
* upstream version

So, generalizing this, if a package has a lot of config files, and creates a git repository containing them when the user uses it (or automatically when it's installed), this method can be used to provide an easily mergable branch that tracks the files as distributed with the package.

It would perhaps not be hard to get from here to a full git-backed version of ucf. Note that the Debian binary package doesn't have to ship a git repository; it can just as easily ship the current version of the config files somewhere in /usr, and check them into a new empty repository as part of the generation of the upstream/master branch.

Syndicated 2014-08-19 22:55:20 from see shy jo

19 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: slots

If I had to guess what would be a controversial topic from the mid-cycle meetup, it would have to be this slots proposal. I was actually in a Technical Committee meeting when this proposal was first made, but I'm told there were plenty of people in the room keen to give this idea a try. Since the mid-cycle Joe Gordon has written up a more formal proposal, which can be found at https://review.openstack.org/#/c/112733.

If you look at the last few Nova releases, core reviewers have been drowning under code reviews, so we need to control the review workload. What is currently happening is that everyone throws up their thing into Gerrit, and then each core tries to identify the important things and review them. There is a list of prioritized blueprints in Launchpad, but it is not used much as a way of determining what to review. The result of this is that there are hundreds of reviews outstanding for Nova (500 when I wrote this post). Many of these will get a review, but it is hard for authors to get two cores to pay attention to a review long enough for it to be approved and merged.

If we could rate limit the number of proposed reviews in Gerrit, then cores would be able to focus their attention on the smaller number of outstanding reviews, and land more code. Because each review would merge faster, we believe this rate limiting would help us land more code rather than less, as our workload would be better managed. You could argue that this will mean we just say 'no' more often, but that's not the intent; it's more about bringing focus to what we're reviewing, so that we can get patches through the process completely. There's nothing more frustrating to a code author than having one +2 on their code and then hitting some merge freeze deadline.

The proposal is therefore to designate a number of blueprints that can be under review at any one time. The initial proposal was for ten, and the term 'slot' was coined to describe the available review capacity. If your blueprint was not allocated a slot, then it would either not be proposed in Gerrit yet, or if it was it would have a procedural -2 on it (much like code reviews associated with unapproved specifications do now).

The number of slots is arbitrary at this point. Ten is our best guess of how much we can dilute cores' focus without losing efficiency. We would tweak the number as we gained experience if we went ahead with this proposal. Remember, too, that a slot isn't always a single code review. If the VMware refactor was in a slot for example, we might find that there were also ten code reviews associated with that single slot.

How do you determine what occupies a review slot? The proposal is to groom the list of approved specifications more carefully. We would collaboratively produce a ranked list of blueprints in the order of their importance to Nova and OpenStack overall. As slots become available, the next highest ranked blueprint with code ready for review would be moved into one of the review slots. A blueprint would be considered 'ready for review' once the specification is merged, and the code is complete and ready for intensive code review.

What happens if code is in a slot and something goes wrong? Imagine if a proposer goes on vacation and stops responding to review comments. If that happened we would bump the code out of the slot, but would put it back on the backlog in the location dictated by its priority. In other words there is no penalty for being bumped, you just need to wait for a slot to reappear when you're available again.

We also talked about whether we were requiring specifications for changes which are too simple. If something is relatively uncontroversial and simple (a better tag for internationalization for example), but not a bug, it falls through the cracks of our process at the moment and ends up needing to have a specification written. There was talk of finding another way to track this work. I'm not sure I agree with this part, because a trivial specification is a relatively cheap thing to do. However, it's something I'm happy to talk about.

We also know that Nova needs to spend more time paying down its accrued technical debt, which you can see in the huge amount of bugs we have outstanding at the moment. There is no shortage of people willing to write code for Nova, but there is a shortage of people fixing bugs and working on strategic things instead of new features. If we could reserve slots for technical debt, then it would help us to get people to work on those aspects, because they wouldn't spend time on a less interesting problem and then discover they can't even get their code reviewed. We even talked about having an alternating focus for Nova releases; we could have a release focused on paying down technical debt and stability, and then the next release focused on new features. The Linux kernel does something quite similar to this and it seems to work well for them.

Using slots would allow us to land more valuable code faster. Of course, it also means that some patches will get dropped on the floor, but if the system is working properly, those features will be ones that aren't important to OpenStack. Considering that right now we're not landing many features at all, this would be an improvement.

This proposal is obviously complicated, and everyone will have an opinion. We haven't really thought through all the mechanics fully, yet, and it's certainly not a done deal at this point. The ranking process seems to be the most contentious point. We could encourage the community to help us rank things by priority, but it's not clear how that process would work. Regardless, I feel like we need to be more systematic about what code we're trying to land. It's embarrassing how little has landed in Juno for Nova, and we need to be working on that. I would like to continue discussing this as a community to make sure that we end up with something that works well and that everyone is happy with.

This series is nearly done, but in the next post I'll cover the current status of the nova-network to neutron upgrade path.

Tags for this post: openstack juno nova mid-cycle summary review slots blueprint priority project management
Related posts: Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: containers; Juno nova mid-cycle meetup summary: cells

Syndicated 2014-08-19 00:34:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

18 Aug 2014 ctrlsoft   » (Journeyer)

Using Propellor for configuration management

For a while, I've been wanting to set up configuration management for my home network. With half a dozen servers, a VPS and a workstation it is not big, but large enough to make it annoying to manually log into each machine for network-wide changes.

Most of the servers I have are low-end ARM machines, each responsible for a couple of tasks. Most of my machines run Debian or something derived from Debian. Oh, and I'm a member of the declarative school of configuration management.

Propellor

Propellor caught my eye earlier this year. Unlike some other configuration management tools, it doesn't come with its own custom language but it is written in Haskell, which I am already familiar with. It's also fairly simple, declarative, and seems to do most of the handful of things that I need.

Propellor is essentially a Haskell application that you customize for your site. It works very similarly to e.g. xmonad, where you write a bit of Haskell configuration code which uses the upstream library code. When you run the application, it takes your code and builds a binary from it and the upstream libraries.

Each host on which Propellor is used keeps a clone of the site-local Propellor git repository in /usr/local/propellor. Every time propellor runs (either because of a manual "spin", or from a cronjob it can set up for you), it fetches updates from the main site-local git repository, compiles the Haskell application and runs it.

Setup

Propellor was surprisingly easy to set up. Running propellor creates a clone of the upstream repository under ~/.propellor with a README file and some example configuration. I copied config-simple.hs to config.hs, updated it to reflect one of my hosts and within a few minutes I had a basic working propellor setup.

You can use ./propellor <host> to trigger a run on a remote host.
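
Condensed into commands, the setup described above looks roughly like this (the host name is an example, and this assumes propellor itself is already installed):

    propellor                       # first run: clones the upstream repo to ~/.propellor
    cd ~/.propellor
    cp config-simple.hs config.hs   # then edit config.hs to describe your hosts
    ./propellor myhost.example.com  # build and run against a remote host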

At the moment I have propellor working for some basic things - having certain Debian packages installed, a specific network configuration, mail setup, basic Kerberos configuration and certain SSH options set. This took surprisingly little time to set up, and it's been great being able to take full advantage of Haskell.

Propellor comes with convenience functions for dealing with some commonly used packages, such as Apt, SSH and Postfix. For a lot of the other packages, you'll have to roll your own for now. I've written some extra code to make Propellor deal with Kerberos keytabs and Dovecot, which I hope to submit upstream.

I don't have a lot of experience with other Free Software configuration management tools such as Puppet and Chef, but for my use case Propellor works very well.

The main disadvantage of propellor for me so far is that it needs to build itself on each machine it runs on. This is fine for my workstation and high-end servers, but it is somewhat more problematic on e.g. my Raspberry Pis. Compilation takes a while, and the Haskell compiler and libraries it needs amount to 500MB worth of disk space on the tiny root partition.

In order to work with Propellor, some Haskell knowledge is required. The Haskell in the configuration file is reasonably easy to understand if you keep it simple, but once the compiler spits out error messages then I suspect you'll have a hard time without any Haskell knowledge.

Propellor relies on having a central repository with the configuration that it can pull from as root. Unlike Joey, I am wary of publishing the configuration of my home network and I don't have a highly available local git server setup.

Syndicated 2014-08-18 21:15:00 from Stationary Traveller

18 Aug 2014 wingo   » (Master)

on gnu and on hackers

Greetings, gentle hackfolk. 'Tis a lovely waning light as I write this here in Munich, Munich the green, Munich full of dogs and bikes, Munich the summer-fresh.

Last weekend was the GNU hackers meeting up in Garching, a village a few metro stops north of town. Garching is full of quiet backs and fruit trees and small gardens bursting with blooms and beans, as if an eddy of Christopher Alexander whirled out and settled into this unlikely place. My French suburb could learn a thing or ten. We walked back from the hack each day, ate stolen apples and corn, and schemed the nights away.

The program of GHM this year was great. It started off with a bang, as GNUnet hackers Julian Kirsch and Christian Grothoff broke the story that the Five-Eyes countries (US, UK, Canada, Australia, NZ) regularly port-scan the entire internet, looking for vulnerabilities. They then proceed to exploit those vulnerabilities, in regular hack-a-thons, trying to own as many boxes in as many countries as they can. They then use them as launchpads for attacks and for exfiltration of information from other networks.

The presentation that broke this news also proposed a workaround based on port-knocking, Knock. Knock embeds the hash of a pre-shared key with some other information into the 32-bit initial sequence number of a TCP connection. Unlike previous incarnations of port-knocking, Knock also authenticates the first n payload bytes, so that the connection isn't vulnerable to hijacking (e.g. via GCHQ "quantum injection", where well-placed evil routers race the true destination server to provide the first response packet of a connection). Leaking the pwn-the-internet documents with Laura Poitras at the same time as the Knock unveiling was a pretty slick move!

I was also really impressed by Christian's presentation on the GNU name system. GNS is a replacement for DNS whose naming structure mirrors our social naming structure. For example, www.alice.gnu would be my friend Alice, and www.alice.bob.gnu would be Alice's friend Bob. With some integration, it can work on normal desktops and mobile devices. There are lots more details, so check gnunet.org/gns for more information.

Of course, a new naming system does need some operating system support. In this regard Ludovic Courtès' update on Guix was particularly impressive. Guix is a Nix-like system whose goal is reproducible, user-controlled GNU/Linux systems. A couple years ago I didn't think much of it, but now it's actually booting on raw hardware, not just under virtualization, and things seem to be rolling forth as if on rails. Guix manages to be technically innovative at the same time as being GNU-centered, so it can play a unique role in propagating GNU work like GNS.

and yet.

But now, as the dark clouds race above and the light truly goes, we arrive to the point I really wanted to discuss. GNU has a terrible problem with gender balance, and with diversity in general. Of about 70 attendees at this recent GHM, only two were women. We talk the talk about empowering users and working for freedom but, to a first approximation, it's really just a bunch of dudes that think the exact same things.

There are many reasons for this, of course. Some people like to focus on what's called the "pipeline problem" -- that there aren't as many women coming out of computer science programs as men. While true, the proportion of women CS graduates is much higher than the proportion of women at GHM events, so something must be happening in between. And indeed, the attrition rates of women in the tech industry are higher than those of men -- often because we men make it a needlessly unpleasant place for women to be. Sometimes it's even dangerous. The incidence of sexual harassment and assault in tech, especially at events, is something terrible. Scroll down in that linked page to June, July, and August 2014, and ask yourself whether that's OK. (Hint: hell no.)

And so you would think that people who consider themselves to be working for some abstract liberatory principle, as GNU is, would be happy to take a stand against this kind of asshaberdashery. There you would be wrong. Voilà a timeline of an incident.

timeline

March 2014
Someone at the FSF asks a GHM organizer to add an anti-harassment policy to GHM. The organizer does so and puts one on the web page, copying the text from Libreplanet's web site. The policy posted is:

Offensive or overly explicit sexual language or imagery is inappropriate during the event, including presentations.

Participants violating these rules may be sanctioned or expelled from the meeting at the discretion of the organizers.

Harassment includes offensive comments related to gender, sexual orientation, disability, appearance, body size, race, religion, sexual images in public spaces, deliberate intimidation, stalking, harassing photography or recording, persistent disruption of talks or other events, repeated unsolicited physical contact, or sexual attention.

Monday, 11 August 2014
The first mention of the policy is made on the mailing list, in a mail with details about the event that also includes the line:

An anti-harrasment policy applies at GHM: http://gnu.org/ghm/policy.html

Monday, 11 August 2014
A speaker writes the list to say:

Since I do not desire to be denounced, prosecuted and finally sanctioned or expelled from the event (especially considering the physical pain and inconvenience of attending due to my very recent accident) I withdraw my intention to lecture "Introducing GNU Posh" at the GHM, as it is not compliant with the policy described in the page above.

Please remove the talk from the official schedule. Thanks.

PS: for those interested, I may perform the talk off-event in case we find a suitable place, we will see..

The resulting thread goes totes clownshoes and rages on up until the event itself.
Friday, 15 August 2014
Sheepish looks between people who flamed each other over the internet. Hallway-track discussion starts up quickly though; disagreeing people (myself included) are not yet rational enough to have a conversation.
Saturday, 16 August 2014
In the morning and lunch break, people start to really discuss the issues (spontaneously). It turns out that the original mail that sparked the thread was based, to an extent, on a misunderstanding: that "offensive or overly explicit sexual language or imagery" was parsed (by a few non-native English speakers) as "offensive language or ...", which people thought was too broad. Once this misunderstanding was removed, there were still people that thought that any policy at all was unneeded, and others that were concerned that someone could say something without intending offense, but then be kicked out of the event. Arguments back and forth. Some people wonder why others can be hurt by "just words". Some discussion of rape culture as continuum between physical violence and cultural tropes. One of the presentations after lunch is by a GNU hacker. He starts his talk by stating his hope that he won't be seen as "offensive or part of rape culture or something". His microphone wasn't on, so once he gets it on he repeats the joke. I stomp out, slam the door, and tweet a few angry things. Later in the evening the presenter and I discuss the issue. He apologizes to me.
Sunday, 17 August 2014
A closed meeting for GNU maintainers to round up the state of GNU and GHM. No women present. After dealing with a number of banalities, we finally broach the topic of the harassment policy. More opposition to the policy Sunday than Saturday lunch. Eventually a proposal is made to replace "offensive" with "disparaging", and people start to consent to that. We run out of time and have to leave; resolution unclear.
Monday, 18 August 2014
GHM organizer updates the policy to remove the words "offensive or" from the beginning of the harassment policy.

I think anyone involved would agree on this timeline.

thoughts

The problems seen over the last week with this anti-harassment policy are entirely to do with the men. It was a man who decided that he should withdraw his presentation because he found it offensive that he could be perceived as offensive. It was men who willfully misread the policy, comparing it to such examples as "I should have the right to offend Microsoft supporters", "if I say the wrong word I will go to jail", and who raised the ignorant, vacuous spectre of "political correctness" to argue that they should be able to say what they want, in a GNU conference, no matter who they hurt, no matter what the effects. That they are able to argue this position from a status-quo perspective is the definition of privilege.

Now, there is ignorance, and there is malice. Both must be opposed, but the former may find a cure. Although I didn't begin my contribution to the discussion in the smoothest way, linking to an amusing article on the alchemy of intent that is probably misunderstood, it ended up that one of the main points was about intent. I know Ralph (say) and Ralph is a great person and so how could it be that anything Ralph would say could be a slur? You know he wouldn't mean it like that!

To that, we of course have to say that as GNU grows, not everyone knows that Ralph is a great person. In the end, what would it mean for someone to be anti-racist yet say racist things all the time? You would have to call them racist, right? Or if you just said something one time, but refused to own up to your error, and instead persisted in repeating a really racist slur -- you would be racist, right? But I know you... But the thing that you said...

But then to be honest I wonder sometimes. If someone repeats a joke trivializing rape culture, after making sure that the microphone is picking up his words -- I mean, that's a misogynist action, right? Put aside the question of whether the person is, in essence, misogynist or not. They are doing misogynist things. How do I know that this person isn't going to do it again, private apology or not?

And how do I know that this community isn't going to permit it again? That remark was made to a room of 40 dudes or so. Not one woman was present. Although there was some discussion afterwards, if people left because of the phrase, it was only two or three. How can we then say that GNU is not a misogynist community -- is not a community that tolerates misogyny?

Given all of this, what do you expect? Do you expect to grow GNU into a larger organization in the future, rich and strong and diverse? If that's not your goal, you are not my colleague. And if it is your goal, why do you put up with this kind of behavior?

The discussion on intent and offense seems to have had its effect in the removal of "offensive or" from the anti-harassment policy language. I think it's terrible, though -- if you don't trust someone who says they were offended by sexual language or imagery, why would you trust them when they report sexual harassment or assault? I can only imagine this leading to some sort of argument where the person who has had the courage to report such an incident finds himself or herself in a public witness box, justifying that the incident was offensive. "I'm sorry my words offended you, but that was not my intent, and anyway the words were not offensive." Lolnope.

There were so many other wrong things about this -- a suggestion that we the GNU cabal (lamentably, a bunch of dudes) should form a committee to make the wording less threatening to us; that we're just friends anyway; that illegal things are illegal anyway... it's as if the Code of Conduct FAQ that Ashe Dryden assembled were a Bingo card and we all lost.

Finally I don't think I trusted the organizers enough with this policy. Both organizers expressed skepticism about the policy in such terms that if I personally hadn't won the privilege lottery (white male "western" hetero already-established GNU maintainer) I wouldn't feel comfortable bringing up a concern to them.

In the future I will not be attending any conferences without strong, consciously applied codes of conduct, and I enjoin you to do the same.

conclusion


Propagandhi, "Refusing to Be a Man", Less Talk, More Rock (1996)

There is no conclusion yet -- working for the better world we all know is possible is a process, as people and as a community.

To outsiders, to outsiders everywhere, please keep up the vocal criticisms. Thank you for telling your story.

To insiders, to insiders everywhere, this is your problem. The problem is you. Own it.

Syndicated 2014-08-18 21:07:17 from wingolog

18 Aug 2014 marnanel   » (Journeyer)

file upload thing

A thought: I keep running into people who need to schlep large files around, bigger than can be sent by email. (One example is someone I know who runs a dictation service for the blind.) So they have to fit things like yousendit or dropbox into their workflow, and often they don't fit as well as they might.

But there doesn't seem to be a free alternative to run on your own server. Today I realised that this could be done fairly easily as an extension to a bug tracker like Bugzilla -- take out most of the fields on the "create a bug" page, optionally add anonymous uploads and quotas, and you're pretty much there. This would be useful enough for me that I might well have a go.

Update: A friend suggests OwnCloud.

This entry was originally posted at http://marnanel.dreamwidth.org/309205.html. Please comment there using OpenID.

Syndicated 2014-08-18 20:11:56 (Updated 2014-08-18 21:21:15) from Monument

18 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: scheduler

This post is in a series covering the discussions at the Juno Nova mid-cycle meetup. This post will cover the current state of play of our scheduler refactoring efforts. The scheduler refactor has been running for a fair while now, dating back to at least the Hong Kong summit (so about 1.5 release cycles ago).

The original intent of the scheduler sub-team's effort was to pull the scheduling code out of Nova so that it could be rapidly iterated on its own, with the eventual goal being to support a single scheduler across the various OpenStack services. For example, the scheduler that makes placement decisions about your instances could also be making decisions about the placement of your storage resources and could therefore ensure that they are co-located as much as possible.

During this process we realized that a big bang replacement is actually much harder than we thought, and the plan has morphed into being a multi-phase effort. The first step is to make the interface for the scheduler more clearly defined inside the Nova code base. For example, in previous releases, it was the scheduler that launched instances: the API would ask the scheduler to find available hypervisor nodes, and then the scheduler would instruct those nodes to boot the instances. We need to refactor this so that the scheduler picks a set of nodes, but then the API is the one which actually does the instance launch. That way, when the scheduler does move out it's not trusted to perform actions that change hypervisor state, and the Nova code does that for it. This refactoring work is under way, along with work to isolate the SQL database accesses inside the scheduler.
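
In rough pseudocode, the split looks like this (method names here are illustrative, not the actual Nova interfaces):

# Before: the scheduler picks a host and also triggers the boot,
# mutating hypervisor state itself.
def schedule_run_instance(context, request_spec):
    host = pick_best_host(request_spec)
    compute_rpcapi.run_instance(context, host, request_spec)

# After: the scheduler only answers the placement question...
def select_destinations(context, request_spec):
    return pick_best_hosts(request_spec)

# ...and the caller is the one that acts on the answer.
def build_instances(context, request_spec):
    for host in select_destinations(context, request_spec):
        compute_rpcapi.build_and_run_instance(context, host, request_spec)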

I would like to set expectations that this work is what will land in Juno. It has little visible impact for users, but positions us to better solve these problems in Kilo.

We discussed the need to ensure that any new scheduler is at least as fast and accurate as the current one. Jay Pipes has volunteered to work with the scheduler sub-team to build a testing framework to validate this work. Jay also has some concerns about the resource tracker work that is being done at the moment that he is going to discuss with the scheduler sub-team. Since the mid-cycle meetup there has been a thread on the openstack-dev mailing list about similar resource tracker concerns (here), which might be of interest to people interested in scheduler work.

We also need to test our assumption at some point that other OpenStack services such as Neutron and Cinder would be even willing to share a scheduler service if a central one was implemented. We believe that Neutron is interested, but we shouldn't be surprising our fellow OpenStack projects by just appearing with a complete solution. There is a plan to propose a cross-project session at the Paris summit to cover this work.

In the next post in this series we'll discuss possibly the most controversial part of the mid-cycle meetup: the proposal for "slots" for landing blueprints during Kilo.

Tags for this post: openstack juno nova mid-cycle summary scheduler
Related posts: Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: containers; Juno nova mid-cycle meetup summary: cells; Michael's surprisingly unreliable predictions for the Havana Nova release

Comment

Syndicated 2014-08-17 20:06:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

17 Aug 2014 dmarti   » (Master)

Original bug, not original sin

Ethan Zuckerman calls advertising The Internet's Original Sin. But sin is overstating it. Advertising has an economic and social role, just as bacteria have an important role in your body. Many kinds of bacteria can live on and around you just fine, and only become a crisis when your immune system is compromised.

The bad news is that the Internet's immune system is compromised. Quinn Norton summed it up: Everything is Broken. The same half-assed approach to security that lets random trolls yell curse words on your baby monitor is also letting a small but vocal part of the ad business claim an unsustainable share of Internet-built wealth at the expense of original content.

But email spam didn't kill email, and surveillance marketing won't kill the Web. Privacy tech is catching up. AdNews has a good piece on the progress of ad blocking, but I'm wondering about how accurate any measurement of ad blocking can be in the presence of massive fraud. Fraudulent traffic is a big part of the picture, and nobody has an incentive to run an ad blocker on that. The results from the combination of fraud and use of privacy tools are unpredictable. Paywalls are the obvious next step, but there are ways for sites to work with privacy tools, not against them.

What Ethan calls pay-for-performance is the smaller, and less valuable, part of advertising. Online ads are stuck in that niche not so much because of original sin, but because of an original bug. When the browsers of Dot-Com Boom 1.0 came out in a rush with support for privacy antifeatures such as third-party tracking, the Web excluded itself from lucrative branding or signaling advertising. The Web became a direct-response medium like email spam or direct mail. Bob Hoffman said, The web is a much better yellow pages and a much worse television. But that's not inherent in the medium. The Web is able to carry better and more signalful ads as the privacy level goes up. That's a matter of fixing the privacy bugs that allow for tracking, not a sin to expiate.

Recent news, from Kate Tummarello at The Hill: Tech giants at odds over Obama privacy bill. Microsoft is coming in on one side, and a group of mostly surveillance marketing firms calling itself the united voice of the Internet economy is on the other. There's no one original sin here, but there's plenty of opportunity in fixing bugs.

Bonus links

Jeff Jarvis: Absolution? Hell, no

Jason Dorrier: Burger Robot Poised to Disrupt Fast Food Industry

BOB HOFFMAN: Confusing Gadgetry With Behavior

Syndicated 2014-08-17 12:21:35 from Don Marti

17 Aug 2014 rlougher   » (Master)

What No Comments?

Thanks to everybody who commented on the JamVM 2.0.0 release, and apologies it's taken so long to approve them - I was expecting to get an email when I had an unmoderated comment but I haven't received any.

To answer the query regarding Nashorn: yes, JamVM 2.0.0 can run Nashorn.  It was one of the things I tested the JSR 292 implementation against.  However, I can't say I ran any particularly large scripts with it (it's not something I have a lot of experience with).  I'd be pleased to hear any experiences (good or bad) you have.

So now 2.0.0 is out of the way I hope to do much more frequent releases.  I've just started to look at OpenJDK 9.  I was slightly dismayed to discover it wouldn't even start up (java -version), but it turned out to be not a lot of work to fix (2 evenings).  Next is the jtreg tests...

Syndicated 2014-08-17 00:49:00 (Updated 2014-08-17 00:49:57) from Robert Lougher

16 Aug 2014 hypatia   » (Journeyer)

July 2014

I took about a week to get over my jetlag from the USA, but it was really rather mild. I would just get on with my day, only as soon as the sun set, the day would be over. The unpleasantness was mostly that this meant that for about a week, I worked and slept and did nothing else.

I met my mother, aunt and sister in Hornsby — where Andrew and I lived for 5 years and where V was born — the Friday afternoon after I got back, which was odd. Of course, most things are exactly the same, but there was also no point to it. We didn’t have friends up there even at the time and there was nowhere to go where people would remember me unless the food sellers in the (very very busy) local shopping centre remember a very tall past customer.

The playground where V played the most was exactly the same, but he’d forgotten it, and it was also verging on being a little young for him. Hornsby Shire Council is not the City of Sydney in terms of devotion to adventure playgrounds. I drove past the hospital to show V where he was born, and realised I’m not sure I’d remember which birth suite it was (even if they were inclined to allow random people to traipse through the delivery ward, which of course they would not be). Hornsby Hospital, which was a state-wide scandal in terms of maintenance, has had some money spent on it in the last few years. The building which I believe contained the old maternity ward my mother was born in has been knocked down and replaced with something in blocky primary colours, looking much like the new Royal North Shore hospital. It shouldn’t surprise me there are trends in hospital design, but it does.

V, of course, was politely puzzled by the idea that he had ever lived in this place or been in that hospital and so on. He also didn’t recognise his old daycare centre.

The block of flats we lived in — we still own the flat — looks exactly the same as when I last saw it more than two years ago, but there’s a speed bump in the street, and a new Thai restaurant where the sad failed grocer was. (Sad both in that any failed small business is sad, and also that they appeared to have sunk a lot of thought and effort into the fitout, trying to set up a slightly fancy deli that was patronised mostly by me and Andrew. They left on what would have been about the first annual review of their lease.) I entirely forgot to check if the Blockbuster franchise was still there; I would assume not.

I had intended to spend most of a day up there remembering things, in the end I drove around and left after 45 minutes.

It was probably that trip that inspired me into a very brief foray into the Sydney property market the following weekend, to wit, inspecting two properties. One we arrived at only to be told by a bored agent it had been bought before its first public inspection. “Yeah, sorry. That’s how it goes!” The other involved real estate agents cornering us to let us know how very motivated (very, very motivated) the seller was, and wanting to have a big discussion about what we were looking for in the market and what we thought about the market and why we thought that and whether they’d be of any help re-aligning our thoughts for us and what kind of finance we might have access to or could be assisted with, and etc, and were not easily put off by “we live just up the street and are having a sticky-beak, and also, this apartment is down two internal flights of stairs and we have a baby in a stroller, so no.” I suppose it could be worse, we could have actually bought the place. But it was surprisingly difficult to get away from them even as entirely unmotivated buyers.

And that was Andrew’s cue to nick off to the USA himself. It was a long and lonely trip at my end, probably much as mine had been for him. As our work trips become increasingly totalising — he was expected to have all three meals a day with work colleagues he needs to know better, I took a baby with me — we’ve dropped off our communications. I spoke to him a couple of times while I was away (and mostly in order to speak to a very bored and slightly bewildered V, at that). While he was away I think we had a couple of abortive attempts at video chat and that was about it. Not much fun having a chat that consists mostly of “… no, I still can’t hear you, oh, I just saw you wave, nope, now you’ve frozen, can you hear me? CAN YOU HEAR ME?” It got even worse when he got to London and didn’t have a local SIM and was impossible to reach at all.

Andrew works in an office, but I don’t, so when he travels I can go for days without having face-to-face interactions with other adults that aren’t transactional. (“Have I paid for V’s dance class this week? No? Here’s the fee!”) So I took V and A to my parents for three nights in the middle of Andrew’s trip. Packing alone for a trip is always really annoying and boring, but the drive that I was dreading (about 4 hours each way) ended up being surprisingly painless. V remains a good and surprisingly non-whingy car traveller and A sleeps even better in cars than he used to. The first morning we were there they had their snowfall of the year; unfortunately we hadn’t brought gloves with us but had a bit of fun anyway, with my parents hauling V around on a tarpaulin “sled”.

Once I was back, I warned Val that I was feeling slightly ill and was having an inexplicably grumpy and sad day. (The amount of emotional work and intimacy required in a small business can be high, but I do like being able to rearrange my day around being grumpy every so often.) It got much more explicable when I realised I was having cellulitis symptoms in my left ankle (an infection of soft tissue under the skin).

I had cellulitis in September 2012 with a slightly unusual and very aggressive presentation: I got a high fever first, about 24 hours before there was any redness or swelling and so on. By the time the redness was even really properly visible, I had been running a 40°C fever for several days, could barely walk due to the painful swelling of lymph nodes, was dehydrated, and was admitted to hospital for 6 days of IV antibiotics (and three days of rehydration, because I refused to take anything by mouth). When I was in there, the infectious diseases registrar asked if she should draw the boundaries of the redness on my leg to check if it was spreading, and the specialist said mildly “I don’t think there’s much point to that.” He was quite right: within a couple more days, the redness had spread all over my left thigh, and I ended up losing two layers of skin from most of my inner thigh, very much (as the specialist pointed out) as if I’d badly burned it. The day before I was discharged, he stopped by my bed alone and remarked that it was cases like this that “remind us that even in the age of antibiotics, these things can be very aggressive, and sometimes even fatal.”

… Thanks.

So, naturally, I panicked that I was having cellulitis symptoms again, only this time with two children in my care and Andrew in London (so, timezone-flipped) and close to unreachable other than by email. It wasn’t, in the end, justified: this time I got the redness and swelling but no fever or systemic illness, and a couple of courses of antibiotics cleared it up without me losing any skin, although I did walk with a cane for a couple of days due to lymph node pain. It was no worse than having twisted an ankle a bit in the end. It was tough on the extended family, as I set up Illness Level Red in case of needing to be hospitalised, unnecessarily in the event. (Andrew and I agreed that he’d arrange to leave London early as soon as I started running a fever, so he ended up leaving as planned.)

As a concrete thing, Andrew and I are going to have to work a bit harder at communicating, and being accessible, while each of us travels. I used to talk about emotionally putting our marriage on ice for the duration — which is already much easier for the person who is travelling than the person left behind — but that’s not possible for parenting, especially if the at-home parent gets taken out of action.

Once Andrew was back, all was well with the world. For the week and a half it took him to incubate the influenza he presumably picked up while travelling, anyway… stay tuned.

Syndicated 2014-08-16 21:29:10 from puzzling.org

16 Aug 2014 Skud   » (Master)

Dinner, aka, too impatient to wait for a real loaf to rise

16 Aug 2014 Skud   » (Master)

Testing Instagram/ifttt/wordpress/DW integration

15 Aug 2014 caolan   » (Master)

dialog conversion status, 4 to go

Converting LibreOffice dialogs to .ui format, 4 left

I should go on vacation more often. On my return I find that Palenik Mihály and Szymon Kłos, two of our GSOC2014 students, have now converted all but 4 of LibreOffice’s classic fixed widget size and position .src format elements to the GtkBuilder .ui format.

Here's the list of the last four: one (monster) whose conversion is in progress, one that should ideally be removed in favour of a duplicate dialog, and two that have no known route to display them. Hacking the code temporarily to force those two to appear is probably no biggy.


Current conversion stats are:
820 .ui files currently exist
There are 3 unconverted dialogs
There are 1 unconverted tabpages
An estimated additional 4 .ui are required
We are 99% of the way through.

What's next, well *cough*, the above are all the dialogs and tabpages in the classic .src format. There are actually a host of ErrorBox, InfoBox and QueryBox that exist in the .src format as well.

These take just two pieces of information, a string to display and some bits that set what buttons to show, e.g. cancel, close, ok + cancel, etc. We want to remove them in favour of the Gtk-alike MessageDialog, but we don't want to actually convert them to .ui format, because they are so simple it makes more sense to just reduce them to strings like this sample commit demonstrates. This might even be possible to at least somewhat automate.
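
As a sketch of what one such reduction looks like (the identifiers below are invented; the linked sample commit is the real reference):

// Before: a QueryBox defined as a fixed .src resource and invoked as:
//     QueryBox aBox(pParent, ResId(RID_QUERY_DELETE, rResMgr));
//     short nRet = aBox.Execute();
//
// After: the resource shrinks to a plain string, and the dialog is a
// Gtk-alike MessageDialog built directly in code:
MessageDialog aBox(pParent, ResId(STR_QUERY_DELETE, rResMgr).toString(),
                   VCL_MESSAGE_QUESTION, VCL_BUTTONS_YES_NO);
short nRet = aBox.Execute();    // RET_YES or RET_NO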

I've now updated count-todo-dialogs to display the count of those *Box elements that exist in src file format, but I'll elide the count of them until the last 4 true dialogs+tabpages are gone.

Syndicated 2014-08-15 15:54:00 (Updated 2014-08-15 15:54:42) from Caolán McNamara

15 Aug 2014 Stevey   » (Master)

A tale of two products

This is a random post inspired by recent purchases. Some things we buy are practical, others are a little arbitrary.

I tend to avoid buying things for the sake of it, and have explicitly started decluttering our house over the past few years. That said sometimes things just seem sufficiently "cool" that they get bought without too much thought.

This entry is about two things.

A couple of years ago my bathroom was ripped apart and refitted. Gone was the old and nasty room, and in its place was a glorious space. There was only one downside to the new bathroom - you turn on the light and the fan comes on too.

When your wife works funny shifts at the hospital you can find that the (quiet) fan sounds very loud in the middle of the night and wakes you up.

So I figured we could buy a couple of LED lights and scatter them around the place - when it is dark the movement sensors turn on the lights.

These things are amazing. We have one sat on a shelf, one velcroed to the bottom of the sink, and one on the floor, just hidden underneath the toilet.

Due to the shiny-white walls of the room they're all you need in the dark.

By contrast my second purchase was a mistake - the Logitech Harmony 650 Universal Remote Control should be great. It clearly has the features I want - it is able to power:

  • Our TV.
  • Our Sky-box.
  • Our DVD player.

The problem is solely due to the horrific software. You program the device via an application/website which works only under Windows.

I had to resort to installing Windows in a virtual machine to make it run:

# Get the Bus/Device number for the USB device. Strip only the *leading*
# zeros: "tr -d 0" would also delete interior zeros (e.g. "102" -> "12").
bus=$(lsusb | grep -i Harmony | awk '{print $2}' | sed 's/^0*//')
id=$(lsusb | grep -i Harmony | awk '{print $4}' | tr -d : | sed 's/^0*//')

# pass the device through to kvm
kvm -localtime ..  -usb -device usb-host,hostbus=$bus,hostaddr=$id ..

That allows the device to be passed through to windows, though you'll later have to jump onto the Qemu console to re-add the device as the software disconnects and reconnects it at random times, and the bus changes. Sigh.

I guess I can pretend it works, and it has cut down on the number of remotes sat on our table, but the overwhelmingly negative setup and configuration process has really soured me on it.

There is a linux application which will take a configuration file and squirt it onto the device, when attached via a USB cable. This software, which I found during research prior to buying it, is useful but not as much as I'd expected. Why? Well the software lets you upload the config file, but to get a config file you must fully complete the setup on Windows. It is impossible to configure/use this device solely using GNU/Linux.

(Apparently there is MacOS software too, I don't use macs. *shrugs*)

In conclusion - Motion-activated LED lights, more useful than expected, but Harmony causes Discord.

Syndicated 2014-08-15 12:14:46 from Steve Kemp's Blog

18 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: bug management

Welcome to the next exciting installment of the Nova Juno mid-cycle meetup summary. In the previous chapter, our hero battled a partially complete cells implementation, by using his +2 smile of good intentions. In this next exciting chapter, watch him battle our seemingly never ending pile of bugs! Sorry, now that I'm on to my sixth post in this series I feel like it's time to get more adventurous in the introductions.

For at least the last cycle, and probably longer, Nova has been struggling with the number of bugs filed in Launchpad. I don't think the problem is that Nova has terrible code; it is instead that we have a lot of users filing bugs, and the team working on triaging and closing bugs is small. The complexity of the deployment options with Nova makes this problem worse, and that complexity increases as we allow new drivers for things like different storage engines to land in the code base.

The increasing number of permutations possible with Nova configurations is a problem for our CI systems as well, as we don't cover all of these options and this sometimes leads us to discover that they don't work as expected in the field. CI is a tangent from the main intent of this post though, so I will reserve further discussion of our CI system until a later post.

Tracy Jones and Joe Gordon have been doing good work in this cycle trying to get a grip on the state of the bugs filed against Nova. For example, a very large number of bugs (hundreds) were for problems we'd fixed, but where the bug bot had failed to close the bug when the fix merged. Many other bugs were waiting for feedback from users, but had been waiting for longer than six months. In both those cases the response was to close the bug, with the understanding that the user can always reopen it if they come back to talk to us again. Doing "quick hit" things like this has reduced our open bug count to about one thousand bugs. You can see a dashboard that Tracy has produced that shows the state of our bugs at http://54.201.139.117/nova-bugs.html. I believe that Joe has been moving towards moving this onto OpenStack hosted infrastructure, but this hasn't happened yet.

At the mid-cycle meetup, the goal of the conversation was to try and find other ways to get our bug queue further under control. Some of the suggestions were largely mechanical, like tightening up our definitions of the confirmed (we agree this is a bug) and triaged (and we know how to fix it) bug states. Others were things like auto-abandoning bugs which are marked incomplete for more than 60 days without a reply from the person who filed the bug, or unassigning bugs when the review that proposed a fix is abandoned in Gerrit.
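
To make the auto-abandon idea concrete, a janitor script could look something like this sketch. This is not an official Nova tool, and the attribute names should be checked against the Launchpad API documentation before trusting it.

# Hypothetical bug janitor sketch using launchpadlib.
from datetime import datetime, timedelta

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with('nova-bug-janitor', 'production')
nova = lp.projects['nova']
cutoff = datetime.utcnow() - timedelta(days=60)

for task in nova.searchTasks(status='Incomplete'):
    bug = task.bug
    # Expire bugs that have had no new comments in the last 60 days.
    if bug.date_last_message.replace(tzinfo=None) < cutoff:
        task.status = 'Expired'
        task.lp_save()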

Unfortunately, we have more ideas for how to automate dealing with bugs than we have people writing automation. If there's someone out there who wants to have a big impact on Nova, but isn't sure where to start, helping us out with this automation would be a super helpful way to get involved. Let Tracy or me know if you're interested.

We also talked about having more targeted bug days. This was prompted by our last bug day being largely unsuccessful. Instead we're proposing that the next bug day have a really well defined theme, such as moving things from the "undecided" to the "confirmed" state, or similar. I believe the current plan is to run a bug day like this after J-3 when we're winding down from feature development and starting to focus on stabilization.

Finally, I would encourage people fixing bugs in Nova to do a quick search for duplicate bugs when they are closing a bug. I wouldn't be at all surprised to discover that there are many bugs where you can close duplicates at the same time with minimal effort.

In the next post I'll cover our discussions of the state of the current scheduler work in Nova.

Tags for this post: openstack juno nova mid-cycle summary bugs
Related posts: Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Michael's surprisingly unreliable predictions for the Havana Nova release; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: containers

Comment

Syndicated 2014-08-17 19:38:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

15 Aug 2014 benad   » (Apprentice)

Client-Side JavaScript Modules

I dislike the JavaScript programming language. I despise Node.js for server-side programming, and people with far more experience than me with both also agree, for example Ted Dziuba in 2011 and more recently Eric Jiang. And while I can easily avoid using Node as a server-side solution, the same cannot be said about avoiding JavaScript altogether.

I recently discovered Atom, a text editor based on a custom build of WebKit, essentially running on HTML, CSS and JavaScript. Though it is far from the fastest text editor out there, it feels like a spiritual successor to my favourite editor, jEdit, but based on modern web technologies rather than Java. The net effect is that Atom seems like the fastest growing text editor, and with its deep integration with Git (it was made by Github), it makes it a breeze to change code.

I noticed a few interesting things that were used to make JavaScript more tolerable in Atom. First, it supports CoffeeScript. Second, it uses Node-like modules.

CoffeeScript was a huge discovery for me. It is essentially a programming language that compiles into JavaScript, and makes JavaScript development more bearable. Its syntax reminds me a bit of the syntax difference between Java and Groovy. There's also the very interesting JavaScript to CoffeeScript converter called js2coffee. I used js2coffee on one of my JavaScript module, and the result was far more readable and manageable.

The problem with CoffeeScript is that you need to integrate its compilation to JavaScript somewhere. It just so happens that its compiler is a command-line JavaScript tool made for Node. A JavaScript equivalent to Makefiles (actually, more like Maven) is called Grunt, and from it you can call the CoffeeScript compiler directly, on top of UglifyJS to make the generated output smaller. All of these tools exist under node_modules/.bin when installed locally using npm, the Node Package Manager.

Also, by writing my module as a Node module (actually, CommonJS), I could also use some dependency management and still deploy it for a web browser's environment using Browserify. I could even go further and integrate it with Jasmine for unit tests, and run them in a GUI-less full-stack browser like PhantomJS, but that's going too far for now, and you're better off reading the Browserify article by Bastian Krol for more information.

It remains that Browserify is kind of a hack that isn't ideal for running JavaScript modules in a browser, as it has to include browser-equivalent functionality that is unique to Node and isn't optimized for high-latency asynchronous loading. A better solution for browser-side JavaScript modules is RequireJS, using the module format AMD. While not all Node modules have an AMD equivalent, the major ones are easily accessible with bower. Interestingly, you can create a module that can run as AMD, under Node and natively in a browser using the templates called UMD (as in "Universal Module Definition"). Also, RequireJS can support Node modules (that don't use Node-specific functionality) and any other JavaScript library made for browsers, so that you can gain asynchronous loading.
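
For reference, a UMD wrapper looks roughly like this (the module and dependency names below are placeholders):

// Works under AMD (RequireJS), CommonJS (Node/Browserify), and as a
// plain browser global.
(function (root, factory) {
    if (typeof define === 'function' && define.amd) {
        define(['dep'], factory);                  // AMD
    } else if (typeof module === 'object' && module.exports) {
        module.exports = factory(require('dep')); // CommonJS
    } else {
        root.myModule = factory(root.dep);        // browser global
    }
}(this, function (dep) {
    'use strict';
    return {
        greet: function () { return 'hello from myModule'; }
    };
}));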

It should be noted that bower, grunt and many command-line JavaScript tools are made for Node and installed locally using npm. So, even if "Node as a JavaScript web server" fails (and it should), using Node as an environment for local JavaScript command-line tools works quite well and could have a great future.

After all is said and done, I now have something that is kind of like Maven, but for JavaScript using Grunt, RequireJS, bower and Jasmine, to download, compile (CoffeeScript), inject and optimize JavaScript modules for deployment. Or you can use something like CodeKit if you prefer a nice GUI. Either way, JavaScript development, for client-side software like Atom, command-line scripts or for the browser, is finally starting to feel reasonable.

Syndicated 2014-08-15 03:14:41 from Benad's Blog

15 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: cells

This is the next post summarizing the Juno Nova mid-cycle meetup. This post covers the cells functionality used by some deployments to scale Nova.

For those unfamiliar with cells, it's a way of combining smaller Nova installations into a thing which feels like a single large Nova install. So for example, Rackspace deploys Nova in cells of hundreds of machines, and these cells form a Nova availability zone which might contain thousands of machines. The cells in one of these deployments form a tree: users talk to the top level of the tree, which might only contain API services. That cell then routes requests to child cells which can actually perform the operation requested.
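
For the curious, the cells v1 code selects this layout through nova.conf, along these lines (option names quoted from memory, so verify them against the documentation for your release):

# In the top level ("API") cell:
[cells]
enable = True
name = api
cell_type = api

# In each child ("compute") cell:
[cells]
enable = True
name = cell1
cell_type = compute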

There are a few reasons why Rackspace does this. Firstly, it keeps the MySQL databases smaller, which can improve the performance of database operations and backups. Additionally, cells can contain different types of hardware, which are then partitioned logically. For example, OnMetal (Rackspace's Ironic-based baremetal product) instances come from a cell which contains OnMetal machines and only publishes OnMetal flavors to the parent cell.

Cells was originally written by Rackspace to meet its deployment needs, but is now used by other sites as well. However, I think it would be a stretch to say that cells is commonly used, and it is certainly not the deployment default. In fact, most deployments don't run any of the cells code, so you can't really call them even a "single cell install". One of the reasons cells isn't more widely deployed is that it doesn't implement the entire Nova API, which means some features are missing. As a simple example, you can't live-migrate an instance between two child cells.

At the meetup, the first thing we discussed regarding cells was a general desire to see cells finished and become the default deployment method for Nova. Perhaps most people end up running a single cell, but in that case at least the cells code paths are well used. The first step to get there is improving the Tempest coverage for cells. There was a recent openstack-dev mailing list thread on this topic, which was discussed at the meetup. There was commitment from several Nova developers to work on this, and notably not all of them are from Rackspace.

It's important that we improve the Tempest coverage for cells, because it positions us for the next step in the process, which is bringing feature parity to cells compared with a non-cells deployment. There is some level of frustration that the work on cells hasn't really progressed in Juno, and that it is currently incomplete. At the meetup, we made a commitment to bringing a well-researched plan to the Kilo summit for implementing feature parity for a single cell deployment compared with a current default deployment. We also made a commitment to make cells the default deployment model when this work is complete. If this doesn't happen in time for Kilo, then we will be forced to seriously consider removing cells from Nova. A half-done cells deployment has so far stopped other development teams from trying to solve the problems that cells addresses, so we either need to finish cells, or get out of the way so that someone else can have a go. I am confident that the cells team will take this feedback on board and come to the summit with a good plan. Once we have a plan we can ask the whole community to rally around and help finish this effort, which I think will benefit all of us.

In the next blog post I will cover something we've been struggling with for the last few releases: how we get our bug count down to a reasonable level.

Tags for this post: openstack juno nova mid-cycle summary cells
Related posts: Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: containers; Michael's surprisingly unreliable predictions for the Havana Nova release; Juno Nova PTL Candidacy

Comment

Syndicated 2014-08-14 21:20:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

15 Aug 2014 mikal   » (Journeyer)

More bowls and pens

The pens are quite hard to make by the way -- the wood is only a millimeter or so thick, so it tends to split very easily.


Tags for this post: wood turning 20140805-woodturning photo

Comment

Syndicated 2014-08-14 19:35:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

14 Aug 2014 danstowell   » (Journeyer)

Jabberwocky, ATP, and London

Wow. The Jabberwocky festival, organised by the people who did many amazing All Tomorrow's Parties festivals, collapsed three days before it was due to happen, this weekend. The 405 has a great article about the whole sorry mess.

We've been to loads of ATPs and I was thinking about going to Jabberwocky. Really tempted by the great lineup and handily in London (where I live). But the venue? The Excel Centre? A convention-centre box? I couldn't picture it being fun. The promoters tried to insist that it was a great idea for a venue, but it seems I was probably like a lot of people thinking "nah". (Look at the reasons they give, crap reasons. No-one ever complained at ATP about the bar queues or the wifi coverage. The only thing I complained about was that the go-karting track was shut!) I've seen a lot of those bands before, too, it's classic ATP roster, so if the place isn't a place I want to go to then there's just not enough draw.

That 405 article mentions an early "leak" of plans that they were aiming to hold it in the Olympic Park. Now that would have been a place to hold it. Apparently the Olympic Park claimed ignorance, saying they never received a booking, but that sounds like PR-speak pinpointing that they were in initial discussions but didn't take it further. I would imagine that the Olympic Park demanded a much higher price than Excel since they have quite a lot of prestige and political muscle - or maybe it was just an issue of technical requirements or the like. But the Jabberwocky organisers clearly decided that they'd got the other things in place (lineup etc) so they'd press ahead with London in some other mega-venue, and hoped that the magic they once weaved on Pontins or Butlins would happen in the Excel.

This weekend there will be lots of great Jabberwocky fall-out gigs across London. That's totally weird. And I'm sorry I won't be in London to catch any of them! But it's very very weird because it's going to be about 75% of the festival, but converted from a monolithic one into one of those urban multi-venue festivals. The sickening thing about that is that even though the organisers clearly cocked some stuff up royally, I still feel terrible for them having to go bust and get no benefit from the neat little urban fallout festival they've accidentally organised. Now if ATP had decided to run it that way, I would very likely have signed up for it, and dragged my mates down to London!

Syndicated 2014-08-14 04:50:27 from Dan Stowell

15 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: DB2 support

This post is one part of a series discussing the OpenStack Nova Juno mid-cycle meetup. It's a bit shorter than most of the others, because the next thing on my list to talk about is DB2, and that's relatively contained.

IBM is interested in adding DB2 support as a SQL database for Nova. Theoretically, this is a relatively simple thing to do because we use SQLAlchemy to abstract away the specifics of the SQL engine. However, in reality, the abstraction is leaky. The obvious example in this case is that DB2 has different rules for foreign keys than other SQL engines we've used. So, in order to be able to make this change, we need to tighten up our schema for the database.

The change that was discussed is the requirement that the UUID column on the instances table be not null. This seems like a relatively obvious thing to require, given that UUID is the official way to identify an instance, and has been for a really long time. However, there are a few things which make this complicated: we need to understand the state of databases that might have been through a long chain of upgrades from previous Nova releases, and we need to ensure that the schema alterations don't cause significant performance problems for existing large deployments.
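
The migration itself is small. In the sqlalchemy-migrate style Nova uses, it is roughly the following sketch (this is not the actual Nova migration, and the index name is illustrative):

# Sketch: make instances.uuid non-nullable and add a unique index.
from sqlalchemy import Index, MetaData, Table

def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    instances = Table('instances', meta, autoload=True)
    # Rewriting the column definition is the expensive part on a big table.
    instances.c.uuid.alter(nullable=False)
    # Creating the supporting unique index is comparatively cheap.
    Index('uniq_instances0uuid', instances.c.uuid, unique=True).create()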

As an aside, people sometimes complain that Nova development is too slow these days, and they're probably right, because things like this slow us down. A relatively simple change to our database schema requires a whole bunch of performance testing and negotiation with operators to ensure that it's not going to be a problem for people. It's good that we do these things, but sometimes it's hard to explain to people why forward progress is slow in these situations.

Matt Riedemann from IBM has been doing a good job of handling this change. He's written a tool that operators can run before the change lands in Juno that checks if they have instance rows with null UUIDs. Additionally, the upgrade process has been well planned, and is documented in the specification available on the fancy pants new specs website.

We had a long discussion about this change at the meetup, and how it would impact on large deployments. Both Rackspace and HP were asked if they could run performance tests to see if the schema change would be a problem for them. Unfortunately HP's testing hardware was tied up with another project, so we only got numbers from Rackspace. For them, the schema change took 42 minutes for a large database. Almost all of that was altering the column to be non-nullable; creating the new index was only 29 seconds of runtime. However, the Rackspace database is large because they don't currently purge deleted rows; if they can get that done before running this schema upgrade then the impact will be much smaller.

So the recommendation here for operators is that it is best practice to purge deleted rows from your databases before an upgrade, especially when schema migrations need to occur at the same time. There are some other takeaways for operators as well: if we know that operators have a large deployment, then we can ask if an upgrade will be a problem. This is why being active on the openstack-operators mailing list is important. Additionally, if operators are willing to donate a dataset to Turbo-Hipster for DB CI testing, then we can use that in our automation to try and make sure these upgrades don't cause you pain in the future.

In the next post in this series I'll talk about the future of cells, and the work that needs to be done there to make it a first class citizen.

Tags for this post: openstack juno nova mid-cycle summary sql database sqlalchemy db2
Related posts: Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: containers; Michael's surprisingly unreliable predictions for the Havana Nova release; Exploring a single database migration; Time to document my PDF testing database

Comment

Syndicated 2014-08-14 19:20:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

14 Aug 2014 mikal   » (Journeyer)

Review priorities as we approach juno-3

I just sent this email out to openstack-dev, but I am posting it here in case it makes it more discoverable to people drowning in email:

To: openstack-dev
Subject: [nova] Review priorities as we approach juno-3

Hi.

We're rapidly approaching j-3, so I want to remind people of the
current reviews that are high priority. The definition of high
priority I am using here is blueprints that are marked high priority
in launchpad that have outstanding code for review -- I am sure there
are other reviews that are important as well, but I want us to try to
land more blueprints than we have so far. These are listed in the
order they appear in launchpad.

== Compute Manager uses Objects (Juno Work) ==

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/compute-manager-objects-juno,n,z

This is ongoing work, but if you're after some quick code review
points they're very easy to review and help push the project forward
in an important manner.

== Move Virt Drivers to use Objects (Juno Work) ==

I couldn't actually find any code out for review for this one apart
from https://review.openstack.org/#/c/94477/, is there more out there?

== Add a virt driver for Ironic ==

This one is in progress, but we need to keep going at it or we won't
get it merged in time.

* https://review.openstack.org/#/c/111223/ was approved, but a rebase
ate it. Should be quick to re-approve.
* https://review.openstack.org/#/c/111423/
* https://review.openstack.org/#/c/111425/
* ...there are more reviews in this series, but I'd be super happy to
see even a few reviewed

== Create Scheduler Python Library ==

* https://review.openstack.org/#/c/82778/
* https://review.openstack.org/#/c/104556/

(There are a few abandoned patches in this series, I think those two
are the active ones but please correct me if I am wrong).

== VMware: spawn refactor ==

* https://review.openstack.org/#/c/104145/
* https://review.openstack.org/#/c/104147/ (Dan Smith's -2 on this one
seems procedural to me)
* https://review.openstack.org/#/c/105738/
* ...another chain with many more patches to review

Thanks,
Michael


The actual email thread is at http://lists.openstack.org/pipermail/openstack-dev/2014-August/043098.html.

Tags for this post: openstack juno review nova ptl
Related posts: Juno Nova PTL Candidacy; Thoughts from the PTL; Havana Nova PTL elections; Expectations of core reviewers; Juno nova mid-cycle meetup summary: social issues; More reviews

Comment

Syndicated 2014-08-14 13:01:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

13 Aug 2014 bagder   » (Master)

I’m with Firefox OS!

Tablet

I have received a Firefox OS tablet as part of a development program. My plan is to use this device to try out stuff I work on and see how it behaves on Firefox OS “for real” instead of just in emulators or on other systems. While Firefox OS is a product of my employer Mozilla, I personally don’t work particularly much with Firefox OS specifically. I work on networking in general for Firefox, and large chunks of the networking stack is used in both the ordinary Firefox browser like on desktops as well as in Firefox OS. I hope to polish and improve networking on Firefox OS too over time.

Firefox OS tablet

Phone

The primary development device for Firefox OS is right now apparently the Flame phone, and I have one of these too now in my possession. I took a few photos when I unpacked it and crammed them into the same image, click it for higher res:

Flame - Firefox OS phone

A brief explanation of Firefox OS

Firefox OS is an Android kernel (including drivers etc) and a bionic libc – simply the libc that Android uses. Linux-wise and slightly simplified, it runs a single application full-screen: Firefox, which then can run individual Firefox-apps that appear as apps on the phone. This means that the underlying fundamentals are shared with Android, while the layers over that are Firefox and then a world of HTML and javascript. Thus most of the network stack used for Firefox – that I work with – the HTTP, FTP, DNS, cookies and so forth is shared between Firefox for desktop and Firefox for Android and Firefox OS.

Firefox OS is made to use a small footprint to allow cheaper smartphones than Android itself can. Hence it is targeted to developing nations and continents.

Both my devices came with Firefox OS version 1.3 pre-installed.

The phone

The specs: Qualcomm Snapdragon 1.2GHz dual-core processor, 4.5-inch 854×480 pixel screen, five-megapixel rear camera with auto-focus and flash, two-megapixel front-facing camera. Dual-SIM 3G, 8GB of onboard memory with a microSD slot, and a 1800 mAh capacity battery.

The Flame phone should be snappy enough although at times it seems to take a moment too long to populate a newly shown screen with icons etc. The screen surface is somehow not as smooth as my Nexus devices (we have the 4,5,7,10 nexuses in the house), leaving me with a constant feeling the screen isn’t cleaned.

Its dual-sim support is something that seems ideal for traveling etc to be able to use my home sim for incoming calls but use a local sim for data and outgoing calls… I’ve never had a phone featuring that before. I’ve purchased a prepaid SIM-card to use with this phone as my secondary device.

Some Good

I like the feel of the tablet. It feels like a solid and sturdy 10″ tablet, just like it should. I think the design language of Firefox OS for a newbie such as myself is pleasing and good-looking. The quad-core 1GHz thing is certainly fast enough CPU-wise to eat most of what you can throw at it.

These are really good devices to do web browsing on as the browser is a highly capable and fast browser.

Mapping: while of course there’s Google maps app, using the openstreetmap map is great on the device and Google maps in the browser is also a perfectly decent way to view maps. Using openstreetmap also of course has the added bonus that it feels great to see your own edits in your own neck of the woods!

I really appreciate that Mozilla pushes for new, more and better standardized APIs to enable all of this to get done in web applications. To me, this is one of the major benefits with Firefox OS. It benefits all of us who use the web.

Some Bad

Firefox OS feels highly US-centric (which greatly surprised me, seeing the primary markets for Firefox OS are certainly not in the US). As a Swede, I of course want my calendar to show Monday as the first day of the week. No can do. I want my digital clock to show me the time using 24 hour format (the am/pm scheme only confuses me). No can do. Tiny teeny details in the grand scheme of things, yes, but annoying. Possibly I’m just stupid and didn’t find how to switch these settings, but I did look for them on both my devices.

The actual Firefox OS system feels like a scaled-down Android where all apps are simpler and less fancy than Android. There’s a Facebook “app” for it that shows Facebook looking much crappier than it usually does in a browser or in the Android app – although on the phone it looked much better than on the tablet for some reason that I don’t understand.

I managed to get the device to sync my contacts from Google (even with my Google 2-factor auth activated), but trying to sync my Facebook contacts just gave me a very strange error window in spite of repeated attempts – although, again, that worked on my phone!

I really miss a proper back button! Without it, we end up in this handicapped iPhone-like world where each app has to provide a back button in its own UI, or I have to hit the home button – which doesn’t just go back one step.

The tablet supports a gesture, pull up from the bottom of the screen, to get to the home screen, while the phone doesn’t support that but instead has a dedicated home button which, if pressed for a long time, shows cards with all currently running apps. I’m not even sure how to do that latter operation on the tablet as it doesn’t have a home button.

The Gmail web interface and experience is not very good on either of the devices.

Building Firefox OS

I’ve only just started this venture and dipped my toes in that water. All code is there in the open and you build it all with open tools. I might get back on this topic later if I get the urge to ventilate something from it… :-) I didn’t find any proper device-specific setup for the tablet, but maybe I just don’t know its proper code word and I’ve only given it a quick glance so far. I’ll do my first builds and installs for the phone. Any day now!

More

My seven-year-old son immediately found at least one game on my dev phone that he really likes (he actually found the market and downloaded it all by himself the first time he tried the device), and now he wants to borrow this from time to time to play that game – in competition with the Android phones and tablets we have here already. A pretty good sign, I’d say.

Firefox OS is already a complete and competent phone operating system and app ecosystem. If you’re not coming from Android or iPhone, it is a step up from everything else. If you do come from Android or iPhone, I think you have to accept that this is meant for the lower-end spectrum of smartphones.

I think the smartphone world can use more competition, and Firefox OS brings exactly that.

Firefox OS boot screen

Syndicated 2014-08-13 21:53:52 from daniel.haxx.se

13 Aug 2014 mbrubeck   » (Journeyer)

Let's build a browser engine! Part 3: CSS

This is the third in a series of articles on building a toy browser rendering engine. Want to build your own? Start at the beginning to learn more.

This article introduces code for reading Cascading Style Sheets (CSS). As usual, I won’t try to cover everything in the spec. Instead, I tried to implement just enough to illustrate some concepts and produce input for later stages in the rendering pipeline.

Anatomy of a Stylesheet

Here’s an example of CSS source code:

h1, h2, h3 { margin: auto; color: #cc0000; }
div.note { margin-bottom: 20px; padding: 10px; }
#answer { display: none; }

  

Now I’ll walk through some of the CSS code from my toy browser engine, robinson.

A CSS stylesheet is a series of rules. (In the example stylesheet above, each line contains one rule.)

struct Stylesheet {
    rules: Vec<Rule>,
}

  

A rule includes one or more selectors separated by commas, followed by a list of declarations enclosed in braces.

struct Rule {
    selectors: Vec<Selector>,
    declarations: Vec<Declaration>,
}

  

A selector can be a simple selector, or it can be a chain of selectors joined by combinators. Robinson supports only simple selectors for now.

Note: Confusingly, the newer Selectors Level 3 standard uses the same terms to mean slightly different things. In this article I’ll mostly refer to CSS2.1. Although outdated, it’s a useful starting point because it’s smaller and more self-contained than CSS3 (which is split into myriad specs that reference both each other and CSS2.1).

In robinson, a simple selector can include a tag name, an ID prefixed by '#', any number of class names prefixed by '.', or some combination of the above. If the tag name is empty or '*' then it is a “universal selector” that can match any tag.

There are many other types of selector (especially in CSS3), but this will do for now.

enum Selector {
    Simple(SimpleSelector),
}

struct SimpleSelector {
    tag_name: Option<String>,
    id: Option<String>,
    class: Vec<String>,
}

  

A declaration is just a name/value pair, separated by a colon and ending with a semicolon. For example, "margin: auto;" is a declaration.

struct Declaration {
    name: String,
    value: Value,
}

  

My toy engine supports only a handful of CSS’s many value types.

enum Value {
    Keyword(String),
    Color(u8, u8, u8, u8), // RGBA
    Length(f32, Unit),
    // insert more values here
}

enum Unit { Px, /* insert more units here */ }

  

All other CSS syntax is unsupported, including at-rules, comments, and any selectors/values/units not mentioned above.

Parsing

CSS has a regular grammar, making it easier to parse correctly than its quirky cousin HTML. When a standards-compliant CSS parser encounters a parse error, it discards the unrecognized part of the stylesheet but still processes the remaining portions. This is useful because it allows stylesheets to include new syntax but still produce well-defined output in older browsers.
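For example (an illustration only, not behavior robinson implements): given the rule below, a standards-compliant parser discards just the misspelled declaration and keeps the valid one.

h1, h2 { pading: 10px; margin: auto; }  /* the unknown 'pading' is dropped; 'margin' still applies */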

Robinson uses a very simplistic (and totally not standards-compliant) parser, built the same way as the HTML parser from Part 2. Rather than go through the whole thing line-by-line again, I’ll just paste in a few snippets. For example, here is the code for parsing a single selector:

    /// Parse one simple selector, e.g.: `type#id.class1.class2.class3`
    fn parse_simple_selector(&mut self) -> SimpleSelector {
        let mut result = SimpleSelector { tag_name: None, id: None, class: Vec::new() };
        while !self.eof() {
            match self.next_char() {
                '#' => {
                    self.consume_char();
                    result.id = Some(self.parse_identifier());
                }
                '.' => {
                    self.consume_char();
                    result.class.push(self.parse_identifier());
                }
                '*' => {
                    // universal selector
                    self.consume_char();
                }
                c if valid_identifier_char(c) => {
                    result.tag_name = Some(self.parse_identifier());
                }
                _ => break
            }
        }
        result
    }

  

Note the lack of error checking. Some malformed input like ### or *foo* will parse successfully and produce weird results. A real CSS parser would discard these invalid selectors.

Specificity

Specificity is one of the ways a rendering engine decides which style overrides the other in a conflict. If a stylesheet contains two rules that match an element, the rule with the matching selector of higher specificity can override values from the one with lower specificity.

The specificity of a selector is based on its components. An ID selector is more specific than a class selector, which is more specific than a tag selector. Within each of these “levels,” more selectors beats fewer.

pub type Specificity = (uint, uint, uint);

impl Selector {
    pub fn specificity(&self) -> Specificity {
        // http://www.w3.org/TR/selectors/#specificity
        let Simple(ref simple) = *self;
        let a = simple.id.iter().len();
        let b = simple.class.len();
        let c = simple.tag_name.iter().len();
        (a, b, c)
    }
}

  

[If we supported chained selectors, we could calculate the specificity of a chain just by adding up the specificities of its parts.]
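As a rough sketch of that idea, assuming a hypothetical chain represented as a slice of SimpleSelector values (robinson defines no such type today):

fn chain_specificity(parts: &[SimpleSelector]) -> Specificity {
    // Sum the (id, class, tag) counts of every simple selector in the chain.
    let mut result = (0u, 0u, 0u);
    for part in parts.iter() {
        let (a, b, c) = result;
        result = (a + part.id.iter().len(),
                  b + part.class.len(),
                  c + part.tag_name.iter().len());
    }
    result
}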

The selectors for each rule are stored in a sorted vector, most-specific first. This will be important in matching, which I’ll cover in the next article.

    /// Parse a rule set: `<selectors> { <declarations> }`.
    fn parse_rule(&mut self) -> Rule {
        Rule {
            selectors: self.parse_selectors(),
            declarations: self.parse_declarations()
        }
    }

    /// Parse a comma-separated list of selectors.
    fn parse_selectors(&mut self) -> Vec<Selector> {
        let mut selectors = Vec::new();
        loop {
            selectors.push(Simple(self.parse_simple_selector()));
            self.consume_whitespace();
            match self.next_char() {
                ',' => { self.consume_char(); }
                '{' => break, // start of declarations
                c   => fail!("Unexpected character {} in selector list", c)
            }
        }
        // Return selectors with highest specificity first, for use in matching.
        selectors.sort_by(|a,b| b.specificity().cmp(&a.specificity()));
        selectors
    }

  

The rest of the CSS parser is fairly straightforward. You can read the whole thing on GitHub. And if you didn’t already do it for Part 2, this would be a great time to try out a parser generator. My hand-rolled parser gets the job done for simple example files, but it has a lot of hacky bits and will fail badly if you violate its assumptions. Eventually I hope to replace it with a “real” parser built on something like rust-peg.

Exercises

As before, you should decide which of these exercises you want to do, and skip the rest:

  1. Implement your own simplified CSS parser and specificity calculation.

  2. Extend robinson’s CSS parser to support more values, or one or more selector combinators.

  3. Extend the CSS parser to discard any declaration that contains a parse error, and follow the error handling rules to resume parsing after the end of the declaration.

  4. Make the HTML parser pass the contents of any <style> nodes to the CSS parser, and return a Document object that includes a list of Stylesheets in addition to the DOM tree.

Shortcuts

Just like in Part 2, you can skip parsing by hard-coding CSS data structures directly into your program, or by writing them in an alternate format like JSON that you already have a parser for.
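For example, a hand-built structure for the rule h1 { color: #cc0000; } might look roughly like this, using the types defined earlier in this article (a sketch to adapt, not code from robinson):

let stylesheet = Stylesheet {
    rules: vec!(Rule {
        selectors: vec!(Simple(SimpleSelector {
            tag_name: Some("h1".to_string()),
            id: None,
            class: Vec::new(),
        })),
        declarations: vec!(Declaration {
            name: "color".to_string(),
            value: Color(204, 0, 0, 255), // #cc0000, fully opaque
        }),
    }),
};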

To be continued…

The next article will introduce the style module. This is where everything starts to come together, with selector matching to apply CSS styles to DOM nodes.

The pace of this series might slow down soon, since I’ll be busy later this month and I haven’t even written the code for some upcoming articles. I’ll keep them coming as fast as I can!

Syndicated 2014-08-13 19:30:00 from Matt Brubeck

13 Aug 2014 pipeman   » (Journeyer)

Configuring smart card login on OS X 10.9

Earlier I documented how to use a Finnish government-issued ID card (FINeID) for SSH authentication. As my vacation ended and I had to dig the smart card reader out to SSH to a machine, I remembered that I never quite figured out how to get login authentication to work with the same card. It took a bit of detective work, but it turns out the basic steps are not that complicated. I will only cover the most basic set-up, where you pair one specific smart card with a local account on your computer using the card's public key. More sophisticated setups are possible for larger organisations.

First, check my previous post and follow the instructions on how to set up OpenSC, and verify using pkcs15-tool -k that your card reader and card are working properly.

Then, in case you have Apple IDs associated with your user account, you need to work around a bug in authorizationhost: in System Preferences, go to Users & Groups and select the user you're setting up for smart card login. Remove all associated Apple ID accounts by clicking on the "Change…" button next to "Apple ID:" and deleting any entries from the list (if any). Failure to do so may make it impossible to unlock the screen and unlock System Preferences panes. You can also do this manually with Directory Utility by removing all entries except the one containing the username from the user's RecordName property in the Users directory.


Once that is done, run the following to enable smart card support for logins:

sudo security authorizationdb smartcard enable

Make sure the card is inserted, and list the public key hashes using the OS X built-in command sc_auth:
sc_auth hash

It should output a list similar to this, but with slightly more random hashes:

01DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF todentamis- ja salausavain
02DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF allekirjoitusavain
03DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF com.apple.systemdefault
04DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF com.apple.kerberos.kdc
05DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF com.apple.systemdefault
06DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF com.apple.kerberos.kdc
07DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF Imported Private Key


Again, it's the todentamis- ja salausavain (the authentication and encryption key) we're interested in. Now use sc_auth to associate that public key with a user account:

sudo sc_auth accept -u USERNAME -h 01DEADBEEF00DEADBEEF00DEADBEEF00DEADBEEF


This should be it - when the smart card is initialised, the corresponding user will automatically be selected in the login screen, and instead of prompting for a password it will prompt you for the card's PIN. Note that the card PIN typically defaults to a 4-digit number, but it can be changed to (in the case of a FINeID card) any 4-8 character alphanumeric string using e.g. pkcs15-tool --change-pin. For other cards you can inspect the PIN code constraints using pkcs15-tool --list-pins.

When logging in using a smart card rather than a password, OS X will not be able to unlock your login keychain, as it by default is encrypted using your login password. You can choose to either manually unlock the keychain or change the keychain to use your smart card for unlocking rather than a password. If you do that, it means that your keychain is effectively encrypted with your smart card, so if you lose your smart card, you will lose access to your login keychain. It seems that Keychain migration uses your smartcard PIN as your new keychain password, so beware that you may actually lower the keychain encryption key entropy if your smartcard PIN is simpler than your regular password.

If you have FileVault full disk encryption enabled (and you should), OS X will automatically log you in using the password supplied at the FileVault login screen. If you have followed the instructions above, your account will still have a valid password, so you will be automatically logged in, but the system will not be able to unlock your keychain with that password. (It's possible to disable password login entirely by deleting the "ShadowHash" entry in the AuthenticationAuthority record of your user account using Directory Utility - note that this will also effectively disable sudo for that user.) To prevent automatic login with FileVault, you can run:


sudo defaults write /Library/Preferences/com.apple.loginwindow DisableFDEAutoLogin -bool YES


More information in HT5989.

If you know French, this blog post contains some more details on configuring smart card authentication on Mavericks.

13 Aug 2014 hypatia   » (Journeyer)

USA, June 2014

Before I left for the US in June, Val asked me what other people were saying to me about my plan to go on an intercontinental business trip and bring a baby, and I said that I gathered that people thought both that it was a terrible idea and that it was fairly typical of me to attempt it.

It was touch and go committing to it. Just when I started to get excited about it, A went through a non-sleeping patch over Easter that nearly saw me walk away from the whole thing. So after that I mostly dealt with it by ignoring it as much as possible until the time was nearly upon me, much as I deal with the entire idea of long haul travel generally.

In fact the trip over started quite promisingly, sitting in Air New Zealand’s nearly deserted business lounge looking out onto the tarmac and feeling a kind of peace and happiness I very rarely feel. (So rarely that I can remember most other cases of it. The afternoon after I finished my final high school exams. Flying back from Honolulu last year finishing up my PhD revisions. I usually need to be alone, and finishing something very big, neither of which was true in this case.)

I like Air New Zealand’s schedule to the States compared to Qantas’s. To fly Qantas to the Bay Area, you fly to LA, which takes about 15 hours, and get off the plane at some point between midnight and about 3am Sydney time, ie, just when your body was finally about to fall asleep. Instead of sleeping, you must navigate LAX. I’ve had nightmares that are more fun than that, even though LAX has usually been rather kind to me if anything. However kind, last year I arrived in San Francisco without a moment of sleep (and pregnant, and ill). On Air NZ, the long flight is the second flight: Auckland to San Francisco, so it more nearly corresponds with my sleeping time.

The question was always whether the baby would sleep at all during the flight, and actually she did surprisingly well considering how ill-designed her location was. They had her staring straight up into a light! Nightie night! I did OK too, although the trip’s high point in the lounge was quickly followed by its low point when I subluxated my shoulder in the middle of the night shutting a window shade (yes really, I attempted it from a terrible angle, but yikes) while located something like 2000km from the nearest hospital (and 10km in the air). But I only had to spend a couple of moments imagining the horror of finding some doctor on the plane to attempt to reset it before it reset itself. The whole thing gave me a new appreciation of fear of flying, as the plane bumped along held up by thin, cold air with me stuck inside it with a busted shoulder. I don’t experience fear of flying, but I increasingly think I probably ought to.

The border official at San Francisco looked a bit skeptical that I was bringing the baby on a business trip, but duly admitted me for business and her as a tourist. And then it was déjà vu all the way out through the ceiling-height metal arrival doors and into and through the waiting groups. I’ve flown into San Francisco internationally only once in the past — my first big overseas trip in 2004 — and so I quite vividly remembered the entire experience. Luckily this time I didn’t have to head out to BART and try and work out SF’s bus system without any sleep (in 2004 I had never been in the northern hemisphere before and didn’t know that I would constantly confuse north and south, thus catching a bus for half an hour in the wrong direction). This time, too, I had a baby with me. Quite a change. I went outside and Suki met me with her car and we loaded A into the car seat and we were away.

It’s always summer in SF when I go there, and for once it really felt like it. Our first night, I went to a long dinner at Amelia’s house. Everyone was pleasingly impressed with my ability to stay awake, but I was playing on easy mode: it was only about 2pm in Sydney. The next day I had lunch at Sanraku at the Metreon because somehow my SF experiences seem to always involve the Metreon, visited Double Union, had coffee with K nearby and dinner with James at Mission Beach Cafe. All with A strapped to my front. (Actually, not strictly true, I put her on the floor at Double Union!) Too many appointments; I should never visit SF just for two nights, it needs to be a week or not at all.

The idea of getting back on a plane the next day was abhorrent, but I just gritted my teeth and did it. In any case, it was only to Portland. I am too used to thinking of Australia as a uniquely large country and therefore had been surprised that we weren’t driving to Portland. Aren’t all foreign cities an hour’s drive apart at most? No. Portland is about 9 hours, it seems, from SF, so much like Sydney and Melbourne or Brisbane. I was also disappointed that it was still about another 5 hours north to Canada, or I would have gone for a day trip.

I was in Portland for eight nights. It was good to settle into a routine there. A adapted really well to the new time and slept much better than she had been doing in Sydney, or has done since. I think it was due to the solstice, which occurred while we were there. Sleeping through the night is much more likely when someone lops four or more hours off the night for you. She sleeps from 6pm here, but in Portland she was staying up past 9.

I hadn’t remembered about Powell’s until Chally reminded me before I left, and in any event I didn’t really appreciate what Powell’s is. It’s a bookstore. A bookstore that occupies a couple of city blocks. It is a good thing that my 16 year old self never got anywhere near it or I might still be living in there. Sadly, it is not quite as magical with a grumpy 8kg human heater strapped to my chest, so I only mounted a couple of special purpose expeditions in, after books I’d been meaning to get for a while. A shame, considering I was only staying a couple of blocks away.

The trip was mostly work. I hope some time I can justify spending some time in the USA that isn’t work-related. (Right now, because V hates it when I travel, I don’t really feel good about travelling for leisure without him.) We arrived in Portland on Thursday, had the AdaCamp reception Friday, the Camp itself Saturday and Sunday, Open Source Bridge Tuesday to Thursday, and then I left Portland on Friday for Sydney.

I decided to keep things simple while I was there by not having A eat any food, and by not taking any bottles or pumping supplies, which did mean I was at her beck and call during AdaCamp (which she spent with a child carer) and otherwise I always had her with me. But she was in an exceptionally good mood for essentially the entire trip. Val pointed out that she has a particular trick for interacting with people, which is that she blankly stares at people before smiling at them, giving the impression that she chose to smile especially for them. She made lots and lots of friends. She seems quite outgoing, like her brother. I was sad she couldn’t stay at Open Source Bridge forever, but she couldn’t, what with it only going for a week. (And honestly, I had trouble with just that. I was very tired by that point.)

I liked Portland, but I didn’t feel I got to grips with it. Perhaps the closest was the bus ride out to Selena’s place and back in, looking at the big wooden houses and the massive bright green leafy trees. It’s not a very large city: suburbs full of detached houses can be found within 15 minutes bus ride of downtown. I’m sure they were all ludicrously expensive, but all the same, it had something of a distinct feel to it, so I felt I knew the city a little bit. Another moment of note was that on the bus back, which was exceptionally crowded, the bus driver insisted that someone give me a seat (because A was strapped to me) and didn’t move the bus until they did so. It didn’t at all remind me of SF’s Muni, nor Sydney Buses for that matter.

Val told me that this is the deceptive time of year in Portland, the time when it seems very very liveable. I can believe it, on the 45th parallel. Summertime is long dusks and companionship. Winter is… I’m not sure. I’ve never lived that far from the equator.

A’s one bad time of the trip was on the flight from Portland to SF. She screamed continuously for much of the flight. The man across the aisle from me stuffed his fingers in his ears. I think they may have even messed with the oxygen levels, because everyone around me went to sleep and I had tears pouring down my face from yawning. A did sleep, but it took a while. The wait in SF airport was also no fun — other than a very interesting exhibit of lace in the museum area — most things were closed, and I stabbed my finger hard on a safety pin (not safe enough, it seems). But A was a perfect angel from SF to Auckland; the crew came by to coo over the soundless baby several times. And at Sydney V was very excited to see us and begin the whole fortnight he was to have… before Andrew’s work trip to the US.

Syndicated 2014-08-13 11:24:23 from puzzling.org

14 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: ironic

Welcome to the third in my set of posts covering discussion topics at the nova juno mid-cycle meetup. The series might never end, to be honest.

This post will cover the progress of the ironic nova driver. This driver is interesting as an example of a large contribution to the nova code base for a couple of reasons -- it's an official OpenStack project instead of a vendor driver, which means we should already have well aligned goals. The driver has been written entirely using our development process, so it's already been reviewed to OpenStack standards, instead of being a large code dump from a separate development process. Finally, it's forced us to think through what merging a non-trivial code contribution should look like, and I think that formula will be useful for later similar efforts, the Docker driver for example.

One of the sticking points with getting the ironic driver landed is exactly how upgrades for baremetal driver users will work. The nova team has been unwilling to just remove the baremetal driver, as we know that it has been deployed by at least a few OpenStack users -- the largest deployment I am aware of is over 1,000 machines. Now, this is unfortunate because the baremetal driver was always intended to be experimental. I think what we've learnt from this is that any driver which merges into the nova code base has to be supported for a reasonable period of time -- nova isn't the right place for experiments. Now that we have the stackforge driver model I don't think that's too terrible, because people can iterate quickly in stackforge, and when they have something stable and supportable they can merge it into nova. This gives us the best of both worlds, while providing a strong signal to deployers about what the nova team is willing to support for long periods of time.

The solution we came up with for upgrades from baremetal to ironic is that the deployer will upgrade to juno, and then run a script which converts their baremetal nodes to ironic nodes. This script is "off line" in the sense that we do not expect new baremetal nodes to be launchable during this process, nor after it is completed. All further launches would be via the ironic driver.

These nodes that are upgraded to ironic will exist in a degraded state. We are not requiring ironic to support the full set of functionality on these nodes, just the bare minimum that baremetal did, which is listing instances, rebooting them, and deleting them. Launch is excluded for the reasons described above.

We have also asked the ironic team to help us provide a baremetal API extension which knows how to talk to ironic, but this was identified as a need fairly late in the cycle and I expect it to be a request for a feature freeze exception when the time comes.

The current plan is to remove the baremetal driver in the Kilo release.

Previously in this post I alluded to the review mechanism we're using for the ironic driver. What does that actually look like? Well, what we've done is ask the ironic team to propose the driver as a series of smallish (500 line) changes. These changes are broken up by functionality, for example the code to boot an instance might be in one of these changes. However, because of the complexity of splitting existing code up, we're not requiring a tempest pass on each step in the chain of reviews. We're instead only requiring this for the final member in the chain. This means that we're not compromising our CI requirements, while maximizing the readability of what would otherwise be a very large review. To stop the reviews from merging before we're comfortable with them, there's a marker review at the beginning of the chain which is currently -2'ed. When all the code is ready to go, I remove the -2 and approve that first review and they should all merge together.

In the next post I'll cover the state of adding DB2 support to nova.

Tags for this post: openstack juno nova mid-cycle summary ironic
Related posts: Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: containers; Michael's surprisingly unreliable predictions for the Havana Nova release; Juno Nova PTL Candidacy; Thoughts from the PTL; Merged in Havana: fixed ip listing for single hosts

Comment

Syndicated 2014-08-14 01:49:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)


13 Aug 2014 bradfitz   » (Master)

hi

Posting from the iPhone app.

Maybe I'm unblocked now.

Syndicated 2014-08-13 01:57:19 from Brad Fitzpatrick

14 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: containers

This is the second in my set of posts discussing the outcomes from the OpenStack nova juno mid-cycle meetup. I want to focus in this post on things related to container technologies.

Nova has had container support for a while in the form of libvirt LXC. While it can be argued that this support isn't feature complete and needs more testing, it's certainly been around for a while. There is renewed interest in testing libvirt LXC in the gate, and a team at Rackspace appears to be working on this as I write this. We have already seen patches from this team as they fix issues they find on the way. There are no plans to remove libvirt LXC from nova at this time.

The plan going forward for LXC tempest testing is to add it as an experimental job, so that people reviewing libvirt changes can request the CI system to test LXC by using "check experimental". This hasn't been implemented yet, but will be advertised when it is ready. Once we've seen good stable results from this experimental check we will talk about promoting it to be a full blown check job in our CI system.

We have also had prototype support for Docker for some time, and by all reports Eric Windisch has been doing good work at getting this driver into a good place since it moved to stackforge. We haven't started talking about specifics for when this driver will return to the nova code base, but I think at this stage we're talking about Kilo at the earliest. The driver has CI now (although it's still working through stability issues to my understanding) and progresses well. I expect there to be a session at the Kilo summit in the nova track on the current state of this driver, and we'll decide whether to merge it back into nova then.

There was also representation from the containers sub-team at the meetup, and they spent most of their time in a breakout room coming up with a concrete proposal for what container support should look like going forward. The plan looks a bit like this:

Nova will continue to support "lowest common denominator containers": by this I mean that things like the libvirt LXC and Docker drivers will be allowed to exist, and will expose the parts of containers that can be made to look like virtual machines. That is, a caller to the nova API should not need to know if they are interacting with a virtual machine or a container; it should be as opaque to them as possible. There is some ongoing discussion about the minimum functionality we should expect from a hypervisor driver, so we can expect this minimum level of functionality to move over time.

The containers sub-team will also write a separate service which exposes a more full featured container experience. This service will work by taking a nova instance UUID, and interacting with an agent within that instance to create containers and manage them. This is interesting because it is the first time that a compute project will have an in-operating-system agent, although other projects have had these for a while. There was also talk about the service being able to start an instance if the user didn't already have one, or being able to declare an existing instance to be "full" and then create a new one for the next incremental container. These are interesting design issues, and I'd like to see them explored more in a specification.

This plan met with general approval within the room at the meetup, with the suggestion being that it move forward as a stackforge project as part of the compute program. I don't think much code has been implemented yet, but I hope to see something come of these plans soon. The first step here is to create some specifications for the containers service, which we will presumably create in the nova-specs repository for want of a better place.

Thanks for reading my second post in this series. In the next post I will cover progress with the Ironic nova driver.

Tags for this post: openstack juno nova mid-cycle summary containers docker lxc
Related posts: Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: social issues; Michael's surprisingly unreliable predictions for the Havana Nova release; Juno Nova PTL Candidacy; Thoughts from the PTL; Merged in Havana: fixed ip listing for single hosts

Comment

Syndicated 2014-08-14 01:17:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

12 Aug 2014 proclus   » (Master)

uuidgen

I really like uuidgen. Highly recommended.
This is what it makes: f474ea5a-17eb-408a-abe3-d6680a4f6fba


Regards,
proclus
http://www.gnu-darwin.org/

Syndicated 2014-08-12 21:27:00 (Updated 2014-08-12 21:23:03) from proclus

12 Aug 2014 proclus   » (Master)

GDAG

GNU-Darwin Action Group is operated by The GNU-Darwin Distribution.
All suggestions welcome. Free is a verb!

Regards,
proclus
http://www.gnu-darwin.org/

Syndicated 2014-08-12 21:23:00 (Updated 2014-08-12 21:19:52) from proclus

12 Aug 2014 proclus   » (Master)

GNU-Darwin Action Group notifications

For Action Group notifications, stay tuned to this channel.
Status updates are available from the following sources.

https://twitter.com/gnudarwin
https://hotpump.net/gnudarwin
https://identi.ca/proclus

Regards,
Michael L. Love (proclus)

Syndicated 2014-08-12 19:55:00 (Updated 2014-08-12 19:51:07) from proclus

12 Aug 2014 Rich   » (Master)

O Captain, my Captain

O Captain, my Captain is the poem that got me started reading Walt Whitman - one of many works mentioned in Dead Poets Society that introduced me to particular authors. Not exactly Whitman's most cheerful work.

Mom used to tell stories of her grandma Nace (my great-grandmother) throwing apples at crazy old Walt Whitman as he went for his daily walk near his home in Camden; the kids of the town thought that he was a crazy old man. But he was a man who took his personal tragedies - mostly having to do with his brothers - and turned them into beauty, poetry, and a lifetime of service to the wounded of the Civil War.

And now, when so many people are quoting "O Captain, My Captain" in reference to Robin Williams, I have to wonder if they've read past the first line. It is a deeply tragic poem about the death of President Lincoln, in which he is imagined as a ship's captain who doesn't quite make it into harbor after his great victories. Chillingly apropos of yesterday's tragic end to the brilliant career of Robin Williams.

Exult O shores, and ring O bells!
But I with mournful tread,
Walk the deck my Captain lies,
Fallen cold and dead.

Syndicated 2014-08-12 12:45:30 from Notes In The Margin

12 Aug 2014 Skud   » (Master)

The Pathway to Inclusion

Lately I’ve been working on how to make groups, events, and projects more inclusive. This goes beyond diversity — having a demographic mix of participants — and gets to the heart of how and why people get involved, or don’t get involved, with things.

As I see it, there are six steps everyone needs to pass through, to get from never having heard of a thing to being deeply involved in it.

pathway to inclusion - see below for transcript and more details

These six steps happen in chronological order, starting from someone who knows nothing about your thing.

Awareness

“I’ve heard of this thing.” Perhaps I’ve seen mention of it on social media, or heard a friend talking about it. This is the first step to becoming involved: I have to be aware of your thing to move on to the following stages.

Understanding

“I understand what this is about.” The next step is for me to understand what your thing is, and what it might be like for me to be involved. Here’s where you get to be descriptive. Anything from your thing’s name, to the information on the website, to the language and visuals you use in your promotional materials can help me understand.

Identification

“I can see myself doing this.” Once I understand what your thing is, I’ll make a decision about whether or not it’s for me. If you want to be inclusive, your job here is to make sure that I can imagine myself as part of your group/event/project, by showing how I could use or benefit from what it offers, or by showing me other people like me who are already involved.

Access

“I can physically, logistically, and financially do this.” Here we’re looking at where and when your thing occurs, how much it costs, how much advance notice is given, physical accessibility (for people with disabilities or other such needs), childcare, transportation, how I would actually sign up for the thing, and how all of these interact with my own needs, schedule, finances, and so on.

Belonging

“I feel like I fit in here.” Assuming I get to this stage and join your thing, will I feel like I belong and am part of it? This is distinct from “identification” because identification is about imagining the future, while belonging is about my experience of the present. Are the organisers and other participants welcoming? Is the space safe? Are activities and facilities designed to support all participants? Am I feeling comfortable and having a good time?

Ownership

“I care enough to take responsibility for this.” If I belong, and have been involved for a while, I may begin to take ownership or responsibility. For instance, I might volunteer my time or skills, serve on the leadership team, or offer to run an activity. People in ownership roles are well placed to make sure that others make it through the inclusion pathway, to belonging and ownership.


If you’re interested in participating in an inclusivity workshop or would like to hire me to help your group, project, or event be more inclusive, get in touch.

Syndicated 2014-08-12 00:42:32 from Infotropism

11 Aug 2014 marnanel   » (Journeyer)

"miracle in the alcohol aisle" etc

...whether it's funny is orthogonal to the problem with Takei's "miracle in the alcohol aisle" joke-- that it reinforces a false idea people believe uncritically, which causes them to hurt vulnerable people.

Here's a parallel: I have a large number of books of jokes going back well over a hundred years. Some of them contain, for example, Jewish jokes (iirc there's a section called "Told Against Our Friend The Jew", which shows they already knew about the problem and printed them anyway). Some of these jokes may well be pants-wettingly hilarious for all I care; antisemitism still flourishes, and I'm still not using that material. It's the punching up vs punching down distinction.

[a comment I have left in more than one discussion today; I am starting to think of it as "Told Against Our Friend The Jew" for short.]

This entry was originally posted at http://marnanel.dreamwidth.org/308748.html. Please comment there using OpenID.

Syndicated 2014-08-11 17:56:19 (Updated 2014-08-11 18:17:13) from Monument

11 Aug 2014 mbrubeck   » (Journeyer)

Let's build a browser engine! Part 2: HTML

This is the second in a series of articles on building a toy browser rendering engine.

This article is about parsing HTML source code to produce a tree of DOM nodes. Parsing is a fascinating topic, but I don’t have the time or expertise to give it the introduction it deserves. You can get a detailed introduction to parsing from any good course or book on compilers. Or get a hands-on start by going through the documentation for a parser generator that works with your chosen programming language.

HTML has its own unique parsing algorithm. Unlike parsers for most programming languages and file formats, the HTML parsing algorithm does not reject invalid input. Instead it includes specific error-handling instructions, so web browsers can agree on how to display every web page, even ones that don’t conform to the syntax rules. Web browsers have to do this to be usable: Since non-conforming HTML has been supported since the early days of the web, it is now used in a huge portion of existing web pages.

A Simple HTML Dialect

I didn’t even try to implement the standard HTML parsing algorithm. Instead I wrote a basic parser for a tiny subset of HTML syntax. My parser can handle simple pages like this:

<html>
    <body>
        <h1>Title</h1>
        <div id="main" class="test">
            <p>Hello <em>world</em>!</p>
        </div>
    </body>
</html>

  

The following syntax is allowed:

  • Balanced tags: <p>...</p>
  • Attributes with quoted values: id="main"
  • Text nodes: <em>world</em>

Everything else is unsupported, including:

  • Namespaces: <html:body>
  • Self-closing tags: <br/> or <br> with no closing tag
  • Character encoding detection.
  • Escaped characters (like &amp;) and CDATA blocks.
  • Comments, processing instructions, and doctype declarations.
  • Error handling (e.g. unbalanced or improperly nested tags).

At each stage of this project I’m writing more or less the minimum code needed to support the later stages. But if you want to learn more about parsing theory and tools, you can be much more ambitious in your own project!

Example Code

Next, let’s walk through my toy HTML parser, keeping in mind that this is just one way to do it (and probably not the best way). Its structure is based loosely on the tokenizer module from Servo’s cssparser library. It has no real error handling; in most cases, it just aborts when faced with unexpected syntax. The code is in Rust, but I hope it’s fairly readable to anyone who’s used similar-looking languages like Java, C++, or C#. It makes use of the DOM data structures from part 1.

The parser stores its input string and a current position within the string. The position is the index of the next character we haven’t processed yet.

struct Parser {
    pos: uint,
    input: String,
}

  

We can use this to implement some simple methods for peeking at the next characters in the input:

impl Parser {
    /// Read the next character without consuming it.
    fn next_char(&self) -> char {
        self.input.as_slice().char_at(self.pos)
    }

    /// Do the next characters start with the given string?
    fn starts_with(&self, s: &str) -> bool {
        self.input.as_slice().slice_from(self.pos).starts_with(s)
    }

    /// Return true if all input is consumed.
    fn eof(&self) -> bool {
        self.pos >= self.input.len()
    }

    // ...
}

  

Rust strings are stored as UTF-8 byte arrays. To go to the next character, we can’t just advance by one byte. Instead we use char_range_at which correctly handles multi-byte characters. (If our string used fixed-width characters, we could just increment pos.)

    /// Return the current character, and advance to the next character.
    fn consume_char(&mut self) -> char {
        let range = self.input.as_slice().char_range_at(self.pos);
        self.pos = range.next;
        range.ch
    }

  

Often we will want to consume a string of consecutive characters. The consume_while method consumes characters that meet a given condition, and returns them as a string:

    /// Consume characters until `test` returns false.
    fn consume_while(&mut self, test: |char| -> bool) -> String {
        let mut result = String::new();
        while !self.eof() && test(self.next_char()) {
            result.push_char(self.consume_char());
        }
        result
    }

  

We can use this to ignore a sequence of space characters, or to consume a string of alphanumeric characters:

    /// Consume and discard zero or more whitespace characters.
    fn consume_whitespace(&mut self) {
        self.consume_while(|c| c.is_whitespace());
    }

    /// Parse a tag or attribute name.
    fn parse_tag_name(&mut self) -> String {
        self.consume_while(|c| match c {
            'a'..'z' | 'A'..'Z' | '0'..'9' => true,
            _ => false
        })
    }

  

Now we’re ready to start parsing HTML. To parse a single node, we look at its first character to see if it is an element or a text node. In our simplified version of HTML, a text node can contain any character except <.

    /// Parse a single node.
    fn parse_node(&mut self) -> dom::Node {
        match self.next_char() {
            '<' => self.parse_element(),
            _   => self.parse_text()
        }
    }

    /// Parse a text node.
    fn parse_text(&mut self) -> dom::Node {
        dom::text(self.consume_while(|c| c != '<'))
    }

  

An element is more complicated. It includes opening and closing tags, and between them any number of child nodes:

    /// Parse a single element, including its open tag, contents, and closing tag.
    fn parse_element(&mut self) -> dom::Node {
        // Opening tag.
        assert!(self.consume_char() == '<');
        let tag_name = self.parse_tag_name();
        let attrs = self.parse_attributes();
        assert!(self.consume_char() == '>');

        // Contents.
        let children = self.parse_nodes();

        // Closing tag.
        assert!(self.consume_char() == '<');
        assert!(self.consume_char() == '/');
        assert!(self.parse_tag_name() == tag_name);
        assert!(self.consume_char() == '>');

        dom::elem(tag_name, attrs, children)
    }

  

Parsing attributes is pretty easy in our simplified syntax. Until we reach the end of the opening tag (>) we repeatedly look for a name followed by = and then a string enclosed in quotes.

    /// Parse a single name="value" pair.
    fn parse_attr(&mut self) -> (String, String) {
        let name = self.parse_tag_name();
        assert!(self.consume_char() == '=');
        let value = self.parse_attr_value();
        (name, value)
    }

    /// Parse a quoted value.
    fn parse_attr_value(&mut self) -> String {
        let open_quote = self.consume_char();
        assert!(open_quote == '"' || open_quote == '\'');
        let value = self.consume_while(|c| c != open_quote);
        assert!(self.consume_char() == open_quote);
        value
    }

    /// Parse a list of name="value" pairs, separated by whitespace.
    fn parse_attributes(&mut self) -> dom::AttrMap {
        let mut attributes = HashMap::new();
        loop {
            self.consume_whitespace();
            if self.next_char() == '>' {
                break;
            }
            let (name, value) = self.parse_attr();
            attributes.insert(name, value);
        }
        attributes
    }

  

To parse the child nodes, we recursively call parse_node in a loop until we reach the closing tag:

    /// Parse a sequence of sibling nodes.
    fn parse_nodes(&mut self) -> Vec<dom::Node> {
        let mut nodes = vec!();
        loop {
            self.consume_whitespace();
            if self.eof() || self.starts_with("</") {
                break;
            }
            nodes.push(self.parse_node());
        }
        nodes
    }

  

Finally, we can put this all together to parse an entire HTML document into a DOM tree. This function will create a root node for the document if it doesn’t include one explicitly; this is similar to what a real HTML parser does.

/// Parse an HTML document and return the root element.
pub fn parse(source: String) -> dom::Node {
    let mut nodes = Parser { pos: 0u, input: source }.parse_nodes();

    // If the document contains a root element, just return it. Otherwise, create one.
    if nodes.len() == 1 {
        nodes.swap_remove(0).unwrap()
    } else {
        dom::elem("html".to_string(), HashMap::new(), nodes)
    }
}

  

That’s it! The entire code for the robinson HTML parser. The whole thing weighs in at just over 100 lines of code (not counting blank lines and comments). If you use a good library or parser generator, you can probably build a similar toy parser in even less space.
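Trying it out is then a one-liner. A small usage sketch, assuming the parser and DOM code live in modules named html and dom, as they do in robinson:

let source = "<html><body>Hello, world!</body></html>".to_string();
let root: dom::Node = html::parse(source);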

Exercises

Here are a few alternate ways to try this out yourself. As before, you can choose one or more of them and ignore the others.

  1. Build a parser (either “by hand” or with a library or parser generator) that takes a subset of HTML as input and produces a tree of DOM nodes.

  2. Modify robinson’s HTML parser to add some missing features, like comments. Or replace it with a better parser, perhaps built with a library or generator.

  3. Create an invalid HTML file that causes your parser (or mine) to fail. Modify the parser to recover from the error and produce a DOM tree for your test file.

Shortcuts

If you want to skip parsing completely, you can build a DOM tree programmatically instead, by adding some code like this to your program (in pseudo-code; adjust it to match the DOM code you wrote in Part 1):

// <html><body>Hello, world!</body></html>
let root = element("html");
let body = element("body");
root.children.push(body);
body.children.push(text("Hello, world!"));
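In Rust, using the dom::elem and dom::text helpers with the signatures they have elsewhere in this article, that sketch could look like this:

let root = dom::elem("html".to_string(), HashMap::new(), vec!(
    dom::elem("body".to_string(), HashMap::new(), vec!(
        dom::text("Hello, world!".to_string()),
    )),
));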

  

Or you can find an existing HTML parser and incorporate it into your program.

The next article in this series will cover CSS data structures and parsing.

Syndicated 2014-08-11 15:00:00 from Matt Brubeck

14 Aug 2014 mikal   » (Journeyer)

Juno nova mid-cycle meetup summary: social issues

Summarizing three days of the Nova Juno mid-cycle meetup is a pretty hard thing to do - I'm going to give it a go, but just in case I miss things, there is an etherpad with notes from the meetup at https://etherpad.openstack.org/p/juno-nova-mid-cycle-meetup. I'm also going to do it in the form of a series of posts, so as to not hold up any content at all in the wait for perfection. This post covers the mechanics of each day at the meetup, reviewer burnout, and the Juno release.

First off, some words about the mechanics of the meetup. The meetup was held in Beaverton, Oregon at an Intel campus. Many thanks to Intel for hosting the event -- it is much appreciated. We discussed possible locations and attendance for future mid-cycle meetups, and the consensus is that these events should "always" be in the US because that's where the vast majority of our developers are. We will consider other host countries when the mix of Nova developers changes. Additionally, we talked about the expectations of attendance at these events. The Icehouse mid-cycle was an experiment, but now that we've run two of these I think they're clearly useful events. I want to be clear that we expect nova-drivers members to attend these events if at all possible, and strongly prefer to have all nova-cores at the event.

I understand that sometimes life gets in the way, but that's the general expectation. To assist with this, I am going to work on advertising these events much earlier than we have in the past to give time for people to get travel approval. If any core needs me to go to the Foundation and ask for travel assistance, please let me know.

I think that co-locating the event with the Ironic and Containers teams helped us a lot this cycle too. We can't co-locate with every other team working on OpenStack, but I'd like to see us pick a couple of teams -- who we might be blocking -- each cycle and invite them to co-locate with us. It's easy at this point for Nova to become a blocker for other projects, and we need to be careful not to get in the way unless we absolutely need to.

The process for each of the three days: we met at Intel at 9am, and started each day by trying to cherry pick the most important topics from our grab bag of items at the top of the etherpad. I feel this worked really well for us.

Reviewer burnout

We started off talking about core reviewer burnout, and what we expect from core. We've previously been clear that we expect a minimum level of reviews from cores, but we are increasingly concerned about keeping cores "on the same page". The consensus is that, at least, cores should be expected to attend summits. There is a strong preference for cores making it to the mid-cycle if at all possible. It was agreed that I will approach the OpenStack Foundation and request funding for cores who are experiencing budget constraints if needed. I was asked to communicate these thoughts on the openstack-dev mailing list. This openstack-dev mailing list thread is me completing that action item.

The conversation also covered whether it was reasonable to make trivial updates to a patch that was close to being acceptable. For example, consider a patch which is ready to merge apart from its commit message needing a trivial tweak. It was agreed that it is reasonable for the second core reviewer to fix the commit message, upload a new version of the patch, and then approve that for merge. It is a good idea to leave a note in the review history about this when these cases occur.

We expect cores to use their judgement about what is a trivial change.

I have an action item to remind cores that this is acceptable behavior. I'm going to hold off on sending that email for a little bit because there are a couple of big conversations happening about Nova on openstack-dev. I don't want to drown people in email all at once.

Juno release

We also took at look at the Juno release, with j-3 rapidly approaching. One outcome was to try to find a way to focus reviewers on landing code that is a project priority. At the moment we signal priority with the priority field in the launchpad blueprint, which can be seen in action for j-3 here. However, high priority code often slips away because we currently let reviewers review whatever seems important to them.

There was talk about picking project-sponsored "themes" for each release -- with the obvious examples being "stability" and "features". One problem here is that we haven't had a lot of luck convincing developers and reviewers to actually work on things we've specified as project goals for a release. The focus needs to move past specific features important to reviewers. Contributors and reviewers need to spend time fixing bugs and reviewing priority code. The harsh reality is that this hasn't been a glowing success.

One solution we're going to try is using more of the Nova weekly meeting to discuss the status of important blueprints. The meeting discussion should then be turned into a reminder on openstack-dev of the current important blueprints in need of review. The side effect of rearranging the weekly meeting is that we'll have less time for the current sub-team updates, but people seem ok with that.

A few people have also suggested various interpretations of a "review day". One interpretation is a rotation through nova-core of reviewers who spend a week of their time reviewing blueprint work. I think these ideas have merit. I have an action item to call for volunteers to sign up for blueprint-focused reviewing.

Conclusion

As I mentioned earlier, this is the first in a series of posts. In this post I've tried to cover social aspects of nova -- the mechanics of the Nova Juno mid-cycle meetup, and reviewer burnout -- and our current position in the Juno release cycle. There was also discussion of how to manage our workload in Kilo, but I'll leave that for another post. It's already been alluded to on the openstack-dev mailing list in this post and the subsequent proposal in gerrit. If you're dying to know more about what we talked about, don't forget the relatively comprehensive notes in our etherpad.

Tags for this post: openstack juno nova mid-cycle summary core review social
Related posts: Michael's surprisingly unreliable predictions for the Havana Nova release; More reviews; Book reviews; Juno Nova PTL Candidacy; What US address should I give?; Working on review comments for Chapters 2, 3 and 4 tonight

Comment

Syndicated 2014-08-13 23:57:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

10 Aug 2014 jas   » (Master)

Wifi on S3 with Replicant

I’m using Replicant on my main phone. As I’ve written before, I didn’t get Wifi to work. The other day leth in #replicant pointed me towards a CyanogenMod discussion about a similar issue. The fix does indeed work, and allowed me to connect to wifi networks and to set up my phone for Internet sharing. Digging deeper, I found a CM Jira issue about it, and ultimately a code commit. It seems the issue is that more recent S3s come with a Murata Wifi chipset that uses MAC addresses not known back in the Android 4.2 (CM-10.1.3 and Replicant-4.2) days. Pulling in the latest fixes for macloader.cpp solves this problem for me, although I still need to load the non-free firmware images that I get from CM-10.1.3. I’ve created a pull request fixing macloader.cpp for Replicant 4.2 if someone else is curious about the details. You have to rebuild your OS with the patch for things to work (if you don’t want to, the workaround using /data/.cid.info works fine), and install some firmware blobs as below.
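
Note that /system is normally mounted read-only, so you may first have to make it writable before the pushes below will succeed; on most rooted builds something like this works:

adb root
adb remount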

adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_semcosh /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_semcosh /system/vendor/firmware/


Syndicated 2014-08-10 18:02:37 from Simon Josefsson's blog

9 Aug 2014 vicious   » (Master)

Numbers again

I like to get fake enraged when people get caught up in very silly misunderstandings of numbers. And often these misunderstandings are used by politicians, extremists, and others to manipulate those people.

The most recent and prominent example is the Israel-Gaza conflict. The story is that Hamas rockets are endangering Israeli lives. OK, yes they do, but you always have to look at the scale. Since 2001 (so in the last 13 years), there have been 40 deaths in Israel from Gaza cross-border rocket and mortar fire [1]. 13 of those were soldiers, and one could make the case that those were military targets, but let’s ignore that and count 40. Of those fatalities, 23 happened during one of the operations designed to remove the rocket threat. One could argue (though I won’t, there won’t be a need) that without those operations, only 17 Israelis would have died.

OK, so 40 people in 13 years. That’s approximately 3 deaths per year. When I recently read an article about the toxicity of mushrooms [2], the person being interviewed (a toxicologist) made the argument that mushroom picking in the Czech Republic is not dangerous, with only 2-3 deaths per year (and the vast majority of Czechs will go mushroom picking every year; even we do when we’re over there, it’s a Czech thing). Clearly it is not enough to worry the toxicologists. And the Czech Republic is about the same size as Israel in terms of population.

When my granddad (a nuclear physicist) and his team computed the number of new cancers due to Chernobyl in the Czech Republic after the disaster, they predicted 200 a year. The trouble was they couldn’t verify it, because 200 a year gets hidden in the noise. In a country of 10 million, if we take an 80 year lifespan, 125,000 people will die every year of various causes, a large proportion of them of cancer (much of it preventable cancer that the state does not try to prevent, since that would mean unpopular policies). So 3 deaths a year is completely lost in the noise. If Israel would spend the money it spends on fighting Hamas on an anti-smoking campaign instead -- even if we buy into the propaganda that fighting will defeat Hamas -- it would likely save far more Israeli lives every year.

Another statistic to look at is car accidents: in Israel approximately 263 people died on the road in one year [3]. So your chances of dying in a car accident are almost 100 times larger. In fact, deaths by lightning in the US average 51 per year [4]. Rescaling to Israel (I could not find Israeli numbers) you get about 1.4 a year. That’s only slightly less than the number of people Hamas kills in a year; in fact, it’s about how many civilians they kill. So your chances as an Israeli civilian of being killed by a rocket or mortar are about the same as being killed by lightning. Not of being struck by lightning, by the way, because only 1 in 5 (approximately) of those struck die. So your chances of being struck by lightning are bigger: let’s say approximately 6-7 people in Israel will get struck by lightning every year.

An argument could be made that most Israelis killed were in Sderot (I count 8), so your chances of being killed there are bigger (and correspondingly, your chances of being killed by a Hamas rocket based on current data if you are in say Tel Aviv are zero). Anyway, 8 in 13 years is about 0.6 a year in Sderot. Rescaling (based on population) the car deaths to estimate the number of deaths per year in Sderot we get approximately 0.8 deaths per year in Sderot. So your chances of dying in a car crash are higher, even if you live in Sderot (your chances of dying from cancer are much much higher).

The politicians supporting Israel’s actions often don’t worry about logical contradictions stemming from the above facts. When Bloomberg visited Israel he lambasted the fact that some airlines stopped flying to Tel-Aviv on rocket fears. He said that he never felt safer there. Well, if he never felt safer, why is Gaza being bombed?

None of this is in any way an apology for Hamas. Hamas is a terrorist organization that aims to kill civilians. But Hamas is almost comically incompetent at doing so. If it weren’t a sad thing, we ought to laugh at Hamas. Anyway, clearly an incompetent murderer is still a murderer if he manages to kill even one person. The question is, what lengths should you go to to apprehend him.

Now the costs. Gaza has 1.8 million people. One should expect about 20-30 thousand natural deaths per year. This year, Israel has killed 2000 Gazans. So, very approximately, about 1 in 10 to 1 in 15 Gazans who die this year will die at Israel’s hands. Now think what that does to the ability of Hamas, or even far more radical groups, to recruit. My grandmother grew up in the post-WWII years, so she did not live through WWII as an adult, yet she harbored a deep hatred of all Germans. This was common in Europe. People used to hate each other, and then every once in a while they would attempt to kill each other. If you are on the receiving side of the killing (and only your relatives get killed), you are very likely to develop this deep hatred that could very well be used by extremists (think back to the Balkans and especially Bosnia). It takes generations to get rid of it. I don’t resent the Poles or the Ukrainians even though I had Galician Jewish ancestors who probably had no love for either. But 70 years ago, these three groups managed to literally destroy the whole region by killing each other (well, the Germans and Russians helped greatly in the endeavor). There are probably very few people in the region whose family actually comes from there.

The point is, that the cost of the operation, besides the moral outrage of killing thousands of people, is pushing back the date that Israel can live in peace with its neighbors.

Another cost to Israel is the rise of anti-semitism. Just like violent actions by muslim extremists created a wave of anti-muslim sentiment in the West, violent acts by Israel will only strengthen anti-semitic forces. You are giving them perfect recruiting stories. If they had any doubts, they don’t have them now. Maybe as a positive aspect, it will justify the Israeli contention that hatred of Israel is based on anti-semitism, since by creating anti-semitic sentiment, yes, more of it there will be. The fact that the deputy speaker of parliament in Israel calls for the conquest of Gaza, forcefully removing the Gazans into "tent camps" and then out of Israel [5], does not help. Maybe he should have called it a "final solution" to the problem. Surely there would be no problem with that phrase.

Another thing about numbers is that Israel depends on America giving it cover (and weapons). Israel does not realize that American opinion is shifting (in part due to its own actions, in part because these things always shift in time). See [6]. Basically, once the young of today are the older folks of tomorrow, Israel won’t be seen the way it is now. It might also be that the US will shift towards some other group as being important in American politics. Given the growth in the Latino population, it should be clear that Israel will at some point stop being the priority for many politicians. Given that we will also approach the world oil peak, and that US oil will once again start running out once we’ve fracked out what we could frack out, oil will become more important again. And Israel does not have oil. Israel is religiously important, but recall that Americans are mostly protestant Christians, not Jews, so it’s not clear where that will go (looking at the history of that relationship is not very encouraging). A large percentage of Americans think the world is only a few thousand years old, that the end of times will come within their lifetime, that the battle of Armageddon will come, and blah blah blah. Who knows what that does to long term foreign policy.

Remember I am talking about decades not years. One should worry about what happens in 20, 30, 40 years. You know, many of us will still be around then, so even from a very selfish perspective, one should plan 40 years ahead. Let alone if one is not a selfish bastard.

Then finally there is this moral thing about killing others … Let’s not get into morals of the situation, that’s seriously f@#ked up.

[1] http://mondoweiss.net/2014/07/rocket-deaths-israel.html
[2] http://www.lidovky.cz/muchomurku-cervenou-fasovali-vojaci-v-bitvach-misto-alkoholu-p63-/media.aspx?c=A140807_154055_ln-media_sho
[3] http://en.wikipedia.org/wiki/List_of_countries_by_traffic-related_death_rate
[4] http://www.lightningsafety.noaa.gov/fatalities.htm
[5] http://www.dailymail.co.uk/news/article-2715466/Israeli-official-calls-concentration-camps-Gaza-conquest-entire-Gaza-Strip-annihilation-fighting-forces-supporters.html
[6] http://www.mintpressnews.com/latest-gallup-poll-shows-young-americans-overwhelmingly-support-palestine/194856/


Syndicated 2014-08-09 16:43:30 from The Spectre of Math

9 Aug 2014 etbe   » (Master)

Being Obviously Wrong About Autism

I’m watching a Louis Theroux documentary about Autism (here’s the link to the BBC web site [1]). The main thing that strikes me so far (after watching 7.5 minutes of it) is the bad design of the DLC-Warren school for Autistic kids in New Jersey [2].

A significant portion of people on the Autism Spectrum have problems with noisy environments; whether most Autistic people have problems with noise depends on what degree of discomfort is considered a problem. But I think it’s safest to assume that the majority of kids on the Autism Spectrum will behave better in a quiet environment. So any environment that is noisy will cause more difficult behavior in most Autistic kids, and the kids who don’t have problems with the noise will have problems with the way the other kids act. Any environment that is more prone to noise pollution than is strictly necessary is hostile to most people on the Autism Spectrum, and to all groups of Autistic people.

The school that is featured in the start of the documentary is obviously wrong in this regard. For starters I haven’t seen any carpet anywhere. Carpeted floors are slightly more expensive than lino but the cost isn’t significant in terms of the cost of running a special school (such schools are expensive by private-school standards). But carpet makes a significant difference to ambient noise.

Most of the footage from that school included obvious echoes, even though they had an opportunity to film when there was the least disruption – presumably noise pollution would be a lot worse when a class finished.

It’s not difficult to install carpet in all indoor areas in a school. It’s also not difficult to install rubber floors in all outdoor areas in a school (it seems that most schools are doing this already in play areas for safety reasons). For a small amount of money spent on installing and maintaining noise absorbing floor surfaces the school could achieve better educational results. The next step would be to install noise absorbing ceiling tiles and wallpaper, that might be a little more expensive to install but it would be cheap to maintain.

I think that the hallways in a school for Autistic kids should be as quiet as the lobby of a 5 star hotel. I don’t believe that there is any technical difficulty in achieving that goal, making a school look as good as an expensive hotel would be expensive but giving it the same acoustic properties wouldn’t be difficult or expensive.

How do people even manage to be so wrong about such things? Do they never seek any advice from any adult on the Autism Spectrum about how to run their school? Do they avoid doing any of the most basic Google searches for how to create a good environment for Autistic people? Do they just not care at all and create an environment that looks good to NTs? If they are just trying to impress NTs then why don’t they have enough pride to care that people like me will know how bad they are? These aren’t just rhetorical questions, I’d like to know what’s wrong with those people that makes them do their jobs in such an amazingly bad way.

Related posts:

  1. Autism, Food, etc James Purser wrote “Stop Using Autism to Push Your Own...
  2. Autism and a Child Beauty Contest Fenella Wagener wrote an article for the Herald Sun about...
  3. Autism Awareness and the Free Software Community It’s Autism Awareness Month April is Autism Awareness month, there...

Syndicated 2014-08-09 16:01:46 from etbe - Russell Coker

9 Aug 2014 Stevey   » (Master)

Rebooting the CMS

I run a cluster for the Debian Administration website, and the code is starting to show its age. Unfortunately the codebase is not so modern, and has accumulated a lot of baggage.

Given the relatively clean separation between the logical components I'm interested in trying something new. In brief the current codebase allows:

  • Posting of articles, blog-entries, and polls.
  • The manipulation of the same.
  • User-account management.

It crossed my mind the other night that it might make sense to break this code down into a number of mini-servers - a server to handle all article-related things, a server to handle all poll-related things, etc.

If we have a JSON endpoint that will allow:

  • GET /article/32
  • POST /article/ [create]
  • GET /articles/offset/number [get the most recent]

Then we could have a very thin shim/server on top of that which would present the public API. Of course the internal HTTP overhead might make this unworkable, but it is an interesting approach to the problem, and would allow the backend storage to be migrated in the future without too much difficulty.
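
As a sketch of how those endpoints might be exercised (the host, port, and JSON fields here are hypothetical, not a finished API):

# fetch a single article (returns JSON like {"id":32,"title":"...","body":"..."})
curl -s http://localhost:8080/article/32

# create a new article
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"title":"Hello","body":"..."}' \
  http://localhost:8080/article/

# fetch the ten most recent articles
curl -s http://localhost:8080/articles/0/10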

At the moment I've coded up two trivial servers, one for getting user-data (to allow login requests to succeed), and one for getting article data.

There is a tiny presentation server written to use those back-end servers, and it seems like an approach that might work. Of course deployment might be a pain...

It is still an experiment rather than a plan, but it could work out: http://github.com/skx/snooze/.

Syndicated 2014-08-09 08:59:51 from Steve Kemp's Blog

8 Aug 2014 mbrubeck   » (Journeyer)

Let's build a browser engine!

I’m building a toy HTML rendering engine, and I think you should too. This is the first in a series of articles describing my project and how you can make your own. But first, let me explain why.

You’re building a what?

Let’s talk terminology. A browser engine is the portion of a web browser that works “under the hood” to fetch a web page from the internet, and translate its contents into forms you can read, watch, hear, etc. Blink, Gecko, WebKit, and Trident are browser engines. In contrast, the browser’s own UI—tabs, toolbar, menu and such—is called the chrome. Firefox and SeaMonkey are two browsers with different chrome but the same Gecko engine.

A browser engine includes many sub-components: an HTTP client, an HTML parser, a CSS parser, a JavaScript engine (itself composed of parsers, interpreters, and compilers), and much more. The many components involved in parsing web formats like HTML and CSS and translating them into what you see on-screen are sometimes called the layout engine or rendering engine.

Why a “toy” rendering engine?

A full-featured browser engine is hugely complex. Blink, Gecko, WebKit—these are millions of lines of code each. Even younger, simpler rendering engines like Servo and WeasyPrint are each tens of thousands of lines. Not the easiest thing for a newcomer to comprehend!

Speaking of hugely complex software: If you take a class on compilers or operating systems, at some point you will probably create or modify a “toy” compiler or kernel. This is a simple model designed for learning; it may never be run by anyone besides the person who wrote it. But making a toy system is a useful tool for learning how the real thing works. Even if you never build a real-world compiler or kernel, understanding how they work can help you make better use of them when writing your own programs.

So, if you want to become a browser developer, or just to understand what happens inside a browser engine, why not build a toy one? Like a toy compiler that implements a subset of a “real” programming language, a toy rendering engine could implement a small subset of HTML and CSS. It won’t replace the engine in your everyday browser, but should nonetheless illustrate the basic steps needed for rendering a simple HTML document.

Try this at home.

I hope I’ve convinced you to give it a try. This series will be easiest to follow if you already have some solid programming experience and know some high-level HTML and CSS concepts. However, if you’re just getting started with this stuff, or run into things you don’t understand, feel free to ask questions and I’ll try to make it clearer.

Before you start, a few remarks on some choices you can make:

On Programming Languages

You can build a toy layout engine in any programming language. Really! Go ahead and use a language you know and love. Or use this as an excuse to learn a new language if that sounds like fun.

If you want to start contributing to major browser engines like Gecko or WebKit, you might want to work in C++ because it’s the main language used in those engines, and using it will make it easier to compare your code to theirs. My own toy project, robinson, is written in Rust. I’m part of the Servo team at Mozilla, so I’ve become very fond of Rust programming. Plus, one of my goals with this project is to understand more of Servo’s implementation. (I’ve written a lot of browser chrome code, and a few small patches for Gecko, but before joining the Servo project I knew nothing about many areas of the browser engine.) Robinson sometimes uses simplified versions of Servo’s data structures and code. If you too want to start contributing to Servo, try some of the exercises in Rust!

On Libraries and Shortcuts

In a learning exercise like this, you have to decide whether it’s “cheating” to use someone else’s code instead of writing your own from scratch. My advice is to write your own code for the parts that you really want to understand, but don’t be shy about using libraries for everything else. Learning how to use a particular library can be a worthwhile exercise in itself.

I’m writing robinson not just for myself, but also to serve as example code for these articles and exercises. For this and other reasons, I want it to be as tiny and self-contained as possible. So far I’ve used no external code except for the Rust standard library. (This also side-steps the minor hassle of getting multiple dependencies to build with the same version of Rust while the language is still in development.) This rule isn’t set in stone, though. For example, I may decide later to use a graphics library rather than write my own low-level drawing code.

Another way to avoid writing code is to just leave things out. For example, robinson has no networking code yet; it can only read local files. In a toy program, it’s fine to just skip things if you feel like it. I’ll point out potential shortcuts like this as I go along, so you can bypass steps that don’t interest you and jump straight to the good stuff. You can always fill in the gaps later if you change your mind.

First Step: The DOM

Are you ready to write some code? We’ll start with something small: data structures for the DOM. Let’s look at robinson’s dom module.

The DOM is a tree of nodes. A node has zero or more children. (It also has various other attributes and methods, but we can ignore most of those for now.)

struct Node {
    // data common to all nodes:
    children: Vec<Node>,

    // data specific to each node type:
    node_type: NodeType,
}

There are several node types, but for now we will ignore most of them and say that a node is either an Element or a Text node. In a language with inheritance these would be subtypes of Node. In Rust they can be an enum (Rust’s keyword for a “tagged union” or “sum type”):

enum NodeType {
    Text(String),
    Element(ElementData),
}

An element includes a tag name and any number of attributes, which can be stored as a map from names to values. Robinson doesn’t support namespaces, so it just stores tag and attribute names as simple strings.

struct ElementData {
    tag_name: String,
    attributes: AttrMap,
}

type AttrMap = HashMap<String, String>;

Finally, some constructor functions to make it easy to create new nodes:

impl Node {
    fn new(children: Vec<Node>, node_type: NodeType) -> Node {
        Node { children: children, node_type: node_type }
    }
}

fn text(data: String) -> Node {
    Node::new(vec!(), Text(data))
}

fn elem(name: String, attrs: AttrMap, children: Vec<Node>) -> Node {
    Node::new(children, Element(ElementData {
        tag_name: name,
        attributes: attrs,
    }))
}

And that’s it! A full-blown DOM implementation would include a lot more data and dozens of methods, but this is all we need to get started. In the next article, we’ll add a parser that turns HTML source code into a tree of these DOM nodes.
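
To see these in action, here’s a quick usage sketch (assuming the dom module above is in scope) that builds the tree for <html><body class="demo">Hello, world!</body></html>:

fn example_tree() -> Node {
    // <html> with an empty attribute map...
    let mut body_attrs = AttrMap::new();
    body_attrs.insert("class".to_string(), "demo".to_string());
    elem("html".to_string(), AttrMap::new(), vec!(
        // ...containing a <body class="demo"> with a single text child.
        elem("body".to_string(), body_attrs, vec!(
            text("Hello, world!".to_string()),
        )),
    ))
}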

Exercises

These are just a few suggested ways to follow along at home. Do the exercises that interest you and skip any that don’t.

  1. Start a new program in the language of your choice, and write code to represent a tree of DOM text nodes and elements.

  2. Install the latest version of Rust, then download and build robinson. Open up dom.rs and extend NodeType to include additional types like comment nodes.

  3. Write code to pretty-print a tree of DOM nodes.

References

Here’s a short list of “small” open source web rendering engines. Most of them are many times bigger than robinson, but still way smaller than Gecko or WebKit. WebWhirr, at 2000 lines of code, is the only other one I would call a “toy” engine.

You may find these useful for inspiration or reference. If you know of any other similar projects—or if you start your own—please let me know!

Syndicated 2014-08-08 16:40:00 from Matt Brubeck

8 Aug 2014 marnanel   » (Journeyer)

Gentle Readers: fought and feared and felt

Gentle Readers
a newsletter made for sharing
volume 1, number 17
7th August 2014: we fought and feared and felt
What I’ve been up to

We are, more or less, properly moved to Salford now. There's a vanful of our stuff still in Oldham, and another vanful in Staines, due to assorted mishaps along the way, but at least Kit and I and Yantantessera are safely moved in. Sooner or later we'll go and pick the other stuff up, when times are more vannish-- and after all, what else does time do?

I apologise for another GR hiatus earlier this week: I was hit by a car while crossing the road, which caused a break in service, but fortunately no break in bones. My leg is quite impressively bruised, though.

A poem of mine

RETWEETED (T103)

Jill retweeted what I wrote,
forwarding to all her friends.
Time, you thief, who loves to gloat
over hopes and bitter ends,
say my loves and lines are bad,
say that life itself defeated me,
say I'm growing old, but add:
Jill retweeted me.

(After "Jenny kissed me" by James Leigh Hunt.)

A picture

http://gentlereaders.uk/pics/fb-teletext-100

http://gentlereaders.uk/pics/fb-teletext-220
Those who weren't around in the 1980s in the UK may need to know that this is a parodic representation of Facebook as if it had been around at the time of the BBC's much-loved CEEFAX service. Gentle reader Dan Sheppard sent me a link to a recording of CEEFAX On View for those who never saw it and those who'd like to refresh their memories.

Something from someone else

Some people will tell you that Rudyard Kipling was a cultural imperialist and a racist; these people have often not looked very hard into his work. The last line of this poem, a plea for cultural diversity, is quoted fairly often; I think the rest of the poem is worth reading too, and I'm afraid I habitually quote the last two stanzas at people far too often.

"Certified by Traill" is a sarcastic reference: when Tennyson died in 1892, there was some discussion as to who should be the new poet laureate, and a man named H. D. Traill wrote an article listing fifty possible contenders. He added Kipling's name as the fifty-first, as an afterthought.

IN THE NEOLITHIC AGE
by Rudyard Kipling

In the Neolithic Age, savage warfare did I wage
For food and fame and woolly horses' pelt.
I was singer to my clan in that dim red Dawn of Man,
And I sang of all we fought and feared and felt.

Yea, I sang as now I sing, when the Prehistoric spring
Made the piled Biscayan ice-pack split and shove;
And the troll and gnome and dwerg, and the Gods of Cliff and Berg
Were about me and beneath me and above.

But a rival, of Solutré, told the tribe my style was outré—
'Neath a tomahawk, of diorite, he fell.
And I left my views on Art, barbed and tanged, below the heart
Of a mammothistic etcher at Grenelle.

Then I stripped them, scalp from skull, and my hunting-dogs fed full,
And their teeth I threaded neatly on a thong;
And I wiped my mouth and said, "It is well that they are dead,
For I know my work is right and theirs was wrong."

But my Totem saw the shame; from his ridgepole-shrine he came,
And he told me in a vision of the night: —
"There are nine and sixty ways of constructing tribal lays,
And every single one of them is right!"

* * * *

Then the silence closed upon me till They put new clothing on me
Of whiter, weaker flesh and bone more frail;
And I stepped beneath Time's finger, once again a tribal singer,
And a minor poet certified by Traill!

Still they skirmish to and fro, men my messmates on the snow
When we headed off the aurochs turn for turn;
When the rich Allobrogenses never kept amanuenses,
And our only plots were piled in lakes at Berne.

Still a cultured Christian age sees us scuffle, squeak, and rage,
Still we pinch and slap and jabber, scratch and dirk;
Still we let our business slide— as we dropped the half-dressed hide—
To show a fellow-savage how to work.

Still the world is wondrous large— seven seas from marge to marge—
And it holds a vast of various kinds of man;
And the wildest dreams of Kew are the facts of Khatmandhu,
And the crimes of Clapham chaste in Martaban.

Here's my wisdom for your use, as I learned it when the moose
And the reindeer roamed where Paris roars to-night:—
"There are nine and sixty ways of constructing tribal lays,
And— every— single— one— of— them— is— right!"

Postscript from me: Though you know there came a day when they found another way, but rejected it— for "seventy" won't scan.

Colophon

Gentle Readers is published on Mondays and Thursdays, and I want you to share it. The archives are at http://thomasthurman.org/gentle/ , and so is a form to get on the mailing list. If you have anything to say or reply, or you want to be added or removed from the mailing list, I’m at thomas@thurman.org.uk and I’d love to hear from you. The newsletter is reader-supported; please pledge something if you can afford to, and please don't if you can't. Love and peace to you all.

This entry was originally posted at http://marnanel.dreamwidth.org/308692.html. Please comment there using OpenID.

Syndicated 2014-08-07 23:28:42 (Updated 2014-08-21 23:23:50) from Monument

7 Aug 2014 Jordi   » (Master)

A pile of reasons why GNOME should be Debian jessie’s default desktop environment

GNOME has, for some reason or another, always been the default desktop environment in Debian since the installer is able to install a full desktop environment by default. Release after release, Debian has been shipping different versions of GNOME, first based on the venerable 1.2/1.4 series, then moving to the time-based GNOME 2.x series, and finally to the newly designed 3.4 series for the last stable release, Debian 7 ‘wheezy’.

During the final stages of wheezy’s development, it was pointed out that the first install CD image would no longer hold all of the required packages to install a full GNOME desktop environment. There was lots of discussion surrounding this bug or fact, and there were two major reactions to it. The Debian GNOME team rebuilt some key packages so they would be compressed using xz instead of gzip, saving the few megabytes that were needed to squeeze everything in the first CD. In parallel, the tasksel maintainer decided switching to Xfce as default desktop was another obvious fix. This change, unannounced and made two days before the freeze, was very contested and spurred the usual massive debian-devel threads. In the end, and after a few default desktop flip flops, it was agreed that GNOME would remain as the default for the already frozen wheezy release, and this issue would be revisited later on during jessie’s development.

And indeed, some months ago, Xfce was again reinstated as Debian’s default desktop for jessie as announced:

Change default desktop to xfce.

This will be re-evaluated before jessie is frozen. The evaluation will
start around the point of DebConf (August 2014). If at that point gnome
looks like a better choice, it’ll go back as the default.

Some criteria for that choice will include:

* Popcon numbers for gnome on jessie. If gnome installations continue to
  rise fast enough despite xfce being the default (compared with, say
  kde installations), then we’ll know that users prefer gnome.
  Currently we have no data about how many users would choose gnome when
  it’s not the default. Part of the reason for switching to xfce now
  is to get such data.

* The state of accessability support, particularly for the blind.

* How well the UI works for both new and existing users. Gnome 3
  seems to be adding back many gnome 2 features that existing users
  expect, as well as making some available via addons. If it feels
  comfortable to gnome 2 (and xfce) users, that would go a long way
  toward switching back to it as the default. Meanwhile, Gnome 3 is also
  breaking new ground in its interface; if the interface seems more
  welcoming to new users, or works better on mobile devices, etc, that
  would again point toward switching back.

* Whatever size constraints exist for CD or other images at the time.

--

Hello to all the tech journalists out there. This is pretty boring.
Why don’t you write a story about monads instead?

― Joey Hess in dfca406eb694e0ac00ea04b12fc912237e01c9b5.

Suffice to say that the Debian GNOME team participants have never been thrilled about how the whole issue is being handled, and we’ve been wondering if we should be doing anything about it, or just move along and enjoy the smaller number of bug reports against GNOME packages that this change would bring us, if it finally made it through to the final release. During our real life meet-ups in FOSDEM and the systemd+GNOME sprint in Antwerp, most members of the team did feel Debian would not be delivering a graphical environment with the polish we think our users deserve, and decided we should at least try to convince the rest of the Debian project and our users that Debian will be best served by shipping GNOME 3.12 by default. Power users, of course, can and know how to get around this default and install KDE, Xfce, Cinnamon, MATE or whatever other choice they have. For the average user, though, we think we should be shipping GNOME by default, and tasksel should revert the above commit again. Some of our reasons are:

  • Accessibility: GNOME continues to be the only free desktop environment that provides full accessibility coverage, right from the login screen. While it’s true GNOME 3.0 was lacking in many areas, and GNOME 3.4 (which we shipped in wheezy) was just barely acceptable thanks to some last minute GDM fixes, GNOME 3.12 should have ironed out all of the issues, and our non-expert understanding is that a11y support is now on par with what GNOME 2.30 from squeeze offered.
  • Downstream health: The number of active members in the team taking care of GNOME in Debian is around 5-10 persons, while it is 1-2 in the case of Xfce. Being the default desktop draws a lot of attention (and bug reports) that only a bigger team might have the resources to handle.
  • Upstream health: While GNOME is still committed to its time-based release schedule and ships new versions every 6 months, Xfce upstream is, unfortunately, struggling a bit more to keep up with new plumbing technology. Only very recently has it regained support for suspend/hibernate via logind, or support for Bluez 5.x, for example.
  • Community: GNOME is one of the biggest free software projects, and is lucky to have created an ecosystem of developers, documenters, translators and users that interact regularly in a live social community. Users and developers gather in hackfests and big, annual conferences like GUADEC, the Boston Summit, or GNOME.Asia. Only KDE has a comparable community, the rest of the free desktop projects don’t have the userbase or manpower to sustain communities like this.
  • Localization: Localization is more extensive and complete in GNOME. Xfce has 18 languages above 95% of coverage, and 2 at 100% (excluding English). GNOME has 28 languages above 95%, 9 of them being complete (excluding English).
  • Documentation: Documentation coverage is extensive in GNOME, with most of the core applications providing localized, up to date and complete manuals, available in an accessible format via the Help reader.
  • Integration: The level of integration between components is very high in GNOME. For example, instant messaging, agenda and accessibility components are an integral part of the desktop. GNOME is closely integrated to NetworkManager, PulseAudio, udisks and upower so that the user has access to all the plumbing in a single place. GNOME also integrates easily with online accounts and services (ownCloud, Google, MS Exchange…).
  • Hardware: GNOME 3.12 will be one of the few desktop environments to support HiDPI displays, now very common on some laptop models. Lack of support for HiDPI means non-technical users will get an unreadable desktop by default, and no hints on how to fix that.
  • Security: GNOME is more secure. There are no processes launched with root permissions on the user’s session. All everyday operations (package management, disk partitioning and formatting, date/time configuration…) are accomplished through PolicyKit wrappers.
  • Privacy: One of the latest focuses of GNOME development is improving privacy, and work is being done to make it easy to run GNOME applications in isolated containers, integrate Tor seamlessly in the desktop experience, better disk encryption support and other features that should make GNOME a more secure desktop environment for end users.
  • Popularity: One of the metrics discussed by the tasksel change proponents mentioned popcon numbers. 8 months after the desktop change, Xfce does not seem to have made a dent on install numbers. The Debian GNOME team doesn’t feel popcon’s data is any better than a random online poll though, as it’s an opt-in service which the vast majority of users don’t enable.
  • systemd embracing: One of the reasons to switch to Xfce was that it didn’t depend on systemd. But now that systemd is the default, that shouldn’t be a problem. Also given ConsoleKit is deprecated and dead upstream, KDE and Xfce are switching or are planning to switch to systemd/logind.
  • Adaptation: Debian forced a big desktop change with the wheezy release (switching from the traditional GNOME 2.x to the new GNOME Shell environment). Switching again would mean more adaptation for users, when they’ve had two years to get used to GNOME 3.4. Furthermore, GNOME 3.12 means two years of improvements and polishing over GNOME 3.4, which should help with some of the rough edges found in the GNOME release shipped with wheezy.
  • Administration: GNOME is easy to administer. All the default settings can be defined by administrators, and mandatory settings can be forced on users (see the dconf sketch after this list), which is required in some companies and administrations; Xfce cannot do that. The close integration with freedesktop components (systemd, NM, PulseAudio…) also gives access to specific and useful administration tools.
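
To illustrate that last point, this is roughly what defining and locking a setting looks like with dconf (the paths and values are only examples; check the dconf system administration documentation for details):

# /etc/dconf/profile/user
user-db:user
system-db:local

# /etc/dconf/db/local.d/00-background (the administrator's default)
[org/gnome/desktop/background]
picture-uri='file:///usr/share/backgrounds/debian.png'

# /etc/dconf/db/local.d/locks/background (makes the key mandatory)
/org/gnome/desktop/background/picture-uri

Running dconf update afterwards compiles the system database.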

In short, we think defaulting to GNOME is the best option for the Debian release, and in contrast, shipping Xfce as the default desktop could mean delivering a desktop experience that has some incomplete or rough edges, and not on par with Debian quality standards for a stable release. We believe tasksel should again revert the change and be uploaded as soon as possible, in order to get people testing images with GNOME the sooner the better, with the freeze only two months away.

We would also like that in the future, changes of this nature will not be announced in a git commit log, but widely discussed in debian-project and the other usual development/decision channels, like the change of init system happened recently. We will, whichever the final decision is, continue to package GNOME with great care to ensure our users get the best possible desktop experience Debian can offer.

Syndicated 2014-08-07 23:58:00 from I still don't have a title

6 Aug 2014 etbe   » (Master)

Booting GPT

I’m installing new 4TB disks on an older Dell server, it’s a PowerEdge T110 with a G6950 CPU so it’s not really old, but it’s a couple of generations behind the latest Dell servers.

I tried to enable UEFI booting, but when I turned that option on the system locked up during the BIOS process (wouldn’t boot from the CD or take keyboard input). So I had to make it boot with a BIOS compatible MBR and a GPT partition table.

Number  Start (sector)    End (sector)  Size        Code  Name
   1            2048            4095    1024.0 KiB  EF02  BIOS boot partition
   2            4096        25169919    12.0 GiB    FD00  Linux RAID
   3        25169920      7814037134    3.6 TiB     8300  Linux filesystem

After spending way too much time reading various web pages I discovered that the above partition table works. The 1MB partition is for GRUB code and needs to be enabled by a parted command such as the following:

parted /dev/sda set 1 bios_grub on

/dev/sda2 is a RAID-1 array used for the root filesystem. If I was installing a non-RAID system I’d use the same partition table, but with a type of 8300 instead of FD00. I have a RAID-1 array over sda2 and sdb2 for the root filesystem, and sda3, sdb3, sdc3, sdd3, and sde3 are used for a RAID-Z array. I’m reserving space for the root filesystem on all 5 disks because it seems like a good idea to use the same partition table everywhere, and the 12G per disk that is unused on sdc, sdd, and sde isn’t worth worrying about when dealing with 4TB disks.
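
For reference, the whole table can be created non-interactively with sgdisk; this is just a sketch, so double-check the device names and sector numbers against your own disks first:

sgdisk --zap-all /dev/sda
sgdisk -n 1:2048:4095 -t 1:EF02 -c 1:"BIOS boot partition" /dev/sda
sgdisk -n 2:4096:25169919 -t 2:FD00 -c 2:"Linux RAID" /dev/sda
sgdisk -n 3:25169920:7814037134 -t 3:8300 -c 3:"Linux filesystem" /dev/sda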

Related posts:

  1. booting from USB for security Sune Vuorela asks about how to secure important data such...
  2. How I Partition Disks Having had a number of hard drives fail over the...
  3. Resizing the Root Filesystem Uwe Hermann has described how to resize a root filesystem...

Syndicated 2014-08-06 06:53:34 from etbe - Russell Coker

5 Aug 2014 joey   » (Master)

abram's 2014

pics from trip to Abram's Falls

The trail to Abram's Falls seems more treacherous as we get older, but the sights and magic of the place were unchanged on our first visit in years.

Syndicated 2014-08-05 19:37:37 from see shy jo

5 Aug 2014 marnanel   » (Journeyer)

Salford Royal is not a cheese shop

I had to pick something up at Salford Royal's main reception desk. I walk for quite a way following signs. I reach a desk.

"Is this the main reception?"
"No, you want to go that way."

I go that way, and find a sign saying "Main Reception" pointing back the way I'd come. So I go back, and go to WHSmith's and ask for directions to the main reception, and a description of it. It is in fact the desk I found first. I return.

"Sorry," I say, "I mean I get confused easily, but people tell me this is the main reception."
"Oh no, this is car parking."
"So.. that sign behind you saying RECEPTION, that's not true?"
"Look, I told you, go that way and then down the stairs."
"Isn't that the way to Outpatients?"
"That's what you want, isn't it?
"No, I want the main reception."

But he sends me to Outpatients. Outpatients say, "Oh no, we're not the main reception, we're Outpatients."

"But the man on the main reception desk, who claimed it wasn't the main reception desk, said it was you instead."

"Oh, he's always doing that."

MAYBE HE USED TO RUN A CHEESE SHOP

This entry was originally posted at http://marnanel.dreamwidth.org/308412.html. Please comment there using OpenID.

Syndicated 2014-08-05 17:29:42 from Monument

5 Aug 2014 jas   » (Master)

Replicant 4.2 0002 and NFC on I9300

During my vacation the Replicant project released version 4.2-0002 as a minor update to their initial 4.2 release. I didn’t anticipate any significant differences, so I followed the installation instructions but instead of “wipe data/factory reset” I chose “wipe cache partition” and rebooted. Everything appeared to work fine, but I soon discovered that NFC was not working. Using adb logcat I could get some error messages:

E/NFC-HCI ( 7022): HCI Timeout - Exception raised - Force restart of NFC service
F/libc    ( 7022): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 7046 (message)
I/DEBUG   ( 1900): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
I/DEBUG   ( 1900): Build fingerprint: 'samsung/m0xx/m0:4.1.1/JRO03C/I9300XXDLIB:user/release-keys'
I/DEBUG   ( 1900): Revision: '12'
I/DEBUG   ( 1900): pid: 7022, tid: 7046, name: message  >>> com.android.nfc 

The phone would loop trying to start NFC, with the NFC sub-system dying over and over. Talking on the #replicant channel, paulk quickly realized and fixed the bug. I had to rebuild the images to get things to work, so I took the time to create a new virtual machine based on Debian 7.5 for building Replicant on. As a side note, the only thing not covered by the Replicant build dependency documentation was that I needed the Debian xmllint package to avoid a build failure, and the Debian xsltproc package to avoid an error message being printed at the beginning of every build. Soon I had my own fresh images and installed them, and NFC was working again, after installing the non-free libpn544_fw.so file.
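
For the record, installing the blob by hand is a single push once /system is writable (this sketch assumes the version 34 file discussed below):

adb root
adb remount
adb push libpn544_fw_C3_1_34_SP.so /system/vendor/firmware/libpn544_fw.so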

During this, I noticed that there are multiple libpn544_fw.so files floating around. I have the following files:

version string                 source
libpn544_fw_C3_1_26_SP.so      internet
libpn544_fw_C3_1_34_SP.so      stock ROM on S3 bought in Sweden during 2013 and 2014 (two phones)
libpn544_fw_C3_1_39_SP.so      internet

(For reference, the md5sums of these files are 682e50666effa919d557688c276edc48, b9364ba59de1947d4588f588229bae20 and 18b4e634d357849edbe139b04c939593 respectively.)

If you do not have any of these files available as /vendor/firmware/libpn544_fw.so you will get the following error message:

I/NfcService( 2488): Enabling NFC
D/NFCJNI  ( 2488): Start Initialization
E/NFC-HCI ( 2488): Could not open /system/vendor/firmware/libpn544_fw.so or /system/lib/libpn544_fw.so
E/NFCJNI  ( 2488): phLibNfc_Mgt_Initialize() returned 0x00ff[NFCSTATUS_FAILED]
E/NFC-HCI ( 2488): Could not open /system/vendor/firmware/libpn544_fw.so or /system/lib/libpn544_fw.so
W/NFCJNI  ( 2488): Firmware update FAILED
E/NFC-HCI ( 2488): Could not open /system/vendor/firmware/libpn544_fw.so or /system/lib/libpn544_fw.so
W/NFCJNI  ( 2488): Firmware update FAILED
E/NFC-HCI ( 2488): Could not open /system/vendor/firmware/libpn544_fw.so or /system/lib/libpn544_fw.so
W/NFCJNI  ( 2488): Firmware update FAILED
E/NFCJNI  ( 2488): Unable to update firmware, giving up
D/NFCJNI  ( 2488): phLibNfc_Mgt_UnConfigureDriver() returned 0x0000[NFCSTATUS_SUCCESS]
D/NFCJNI  ( 2488): Terminating client thread...
W/NfcService( 2488): Error enabling NFC

Using the first (26) file or the last (39) file does not appear to work on my phone; I get the following error messages. Note that the line starting with 'NFC capabilities' has 'Rev = 34' in it, possibly indicating that I need the version 34 file.

I/NfcService( 5735): Enabling NFC
D/NFCJNI  ( 5735): Start Initialization
D/NFCJNI  ( 5735): NFC capabilities: HAL = 8150100, FW = b10122, HW = 620003, Model = 12, HCI = 1, Full_FW = 1, Rev = 34, FW Update Info = 8
D/NFCJNI  ( 5735): Download new Firmware
W/NFCJNI  ( 5735): Firmware update FAILED
D/NFCJNI  ( 5735): Download new Firmware
W/NFCJNI  ( 5735): Firmware update FAILED
D/NFCJNI  ( 5735): Download new Firmware
W/NFCJNI  ( 5735): Firmware update FAILED
E/NFCJNI  ( 5735): Unable to update firmware, giving up
D/NFCJNI  ( 5735): phLibNfc_Mgt_UnConfigureDriver() returned 0x0000[NFCSTATUS_SUCCESS]
D/NFCJNI  ( 5735): Terminating client thread...
W/NfcService( 5735): Error enabling NFC

Loading the 34 works fine.

I/NfcService( 2501): Enabling NFC
D/NFCJNI  ( 2501): Start Initialization
D/NFCJNI  ( 2501): NFC capabilities: HAL = 8150100, FW = b10122, HW = 620003, Model = 12, HCI = 1, Full_FW = 1, Rev = 34, FW Update Info = 0
D/NFCJNI  ( 2501): phLibNfc_SE_GetSecureElementList()
D/NFCJNI  ( 2501): 
D/NFCJNI  ( 2501): > Number of Secure Element(s) : 1
D/NFCJNI  ( 2501): phLibNfc_SE_GetSecureElementList(): SMX detected, handle=0xabcdef
D/NFCJNI  ( 2501): phLibNfc_SE_SetMode() returned 0x000d[NFCSTATUS_PENDING]
I/NFCJNI  ( 2501): NFC Initialized
D/NdefPushServer( 2501): start, thread = null
D/NdefPushServer( 2501): starting new server thread
D/NdefPushServer( 2501): about create LLCP service socket
D/NdefPushServer( 2501): created LLCP service socket
D/NdefPushServer( 2501): about to accept
D/NfcService( 2501): NFC-EE OFF
D/NfcService( 2501): NFC-C ON

What is interesting is, that my other S3 running CyanogenMod does not have the libpn544_fw.so file but still NFC works. The messages are:

I/NfcService( 2619): Enabling NFC
D/NFCJNI  ( 2619): Start Initialization
E/NFC-HCI ( 2619): Could not open /system/vendor/firmware/libpn544_fw.so or /system/lib/libpn544_fw.so
W/NFC     ( 2619): Firmware image not available: this device might be running old NFC firmware!
D/NFCJNI  ( 2619): NFC capabilities: HAL = 8150100, FW = b10122, HW = 620003, Model = 12, HCI = 1, Full_FW = 1, Rev = 34, FW Update Info = 0
D/NFCJNI  ( 2619): phLibNfc_SE_GetSecureElementList()
D/NFCJNI  ( 2619): 
D/NFCJNI  ( 2619): > Number of Secure Element(s) : 1
D/NFCJNI  ( 2619): phLibNfc_SE_GetSecureElementList(): SMX detected, handle=0xabcdef
D/NFCJNI  ( 2619): phLibNfc_SE_SetMode() returned 0x000d[NFCSTATUS_PENDING]
I/NFCJNI  ( 2619): NFC Initialized
D/NdefPushServer( 2619): start, thread = null
D/NdefPushServer( 2619): starting new server thread
D/NdefPushServer( 2619): about create LLCP service socket
D/NdefPushServer( 2619): created LLCP service socket
D/NdefPushServer( 2619): about to accept
D/NfcService( 2619): NFC-EE OFF
D/NfcService( 2619): NFC-C ON

Diffing the two NFC-relevant repositories between Replicant (external_libnfc-nxp and packages_apps_nfc) and CyanogenMod (android_external_libnfc-nxp and android_packages_apps_Nfc) I found a commit in Replicant that changes a soft-fail on missing firmware to a hard-fail. I manually reverted that patch in my build tree, and rebuilt and booted a new image. Enabling NFC now prints this on my Replicant phone:

I/NfcService( 2508): Enabling NFC
D/NFCJNI  ( 2508): Start Initialization
E/NFC-HCI ( 2508): Could not open /system/vendor/firmware/libpn544_fw.so or /system/lib/libpn544_fw.so
W/NFC     ( 2508): Firmware image not available: this device might be running old NFC firmware!
D/NFCJNI  ( 2508): NFC capabilities: HAL = 8150100, FW = b10122, HW = 620003, Model = 12, HCI = 1, Full_FW = 1, Rev = 34, FW Update Info = 0
D/NFCJNI  ( 2508): phLibNfc_SE_GetSecureElementList()
D/NFCJNI  ( 2508): 
D/NFCJNI  ( 2508): > Number of Secure Element(s) : 1
D/NFCJNI  ( 2508): phLibNfc_SE_GetSecureElementList(): SMX detected, handle=0xabcdef
D/NFCJNI  ( 2508): phLibNfc_SE_SetMode() returned 0x000d[NFCSTATUS_PENDING]
I/NFCJNI  ( 2508): NFC Initialized
D/NdefPushServer( 2508): start, thread = null
D/NdefPushServer( 2508): starting new server thread
D/NdefPushServer( 2508): about create LLCP service socket
D/NdefPushServer( 2508): created LLCP service socket
D/NdefPushServer( 2508): about to accept
D/NfcService( 2508): NFC-EE OFF
D/NfcService( 2508): NFC-C ON

And NFC works! One less non-free blob on my phone.

I have double-checked that power-cycling the phone (even removing battery for a while) does not affect anything, so it seems the NFC chip has firmware loaded from the factory.

The question remains why that commit was added. Is it necessary on some other phone? I have no idea, but if the patch is reverted, S3 owners will have NFC working with Replicant without non-free software added. Alternatively, the patch could be made to apply only on the platform where it was needed, or to all non-S3 builds.


Syndicated 2014-08-05 12:20:18 from Simon Josefsson's blog

5 Aug 2014 Stevey   » (Master)

Free (orange) SMS alerts

In the past I used to pay for an email->SMS gateway, which was used to alert me about some urgent things. That was nice because it was bi-directional, and at one point I could restart particular services by sending SMS messages.

These days I get it for free, and for my own reference here is how you get to receive free SMS alerts via Orange, which is my mobile phone company. If you don't use Orange/EE this will probably not help you.

The first step is to register an Orange email-account, which can be done here:

Once you've done that you'll have an email address of the form example@orange.net, which is kinda-sorta linked to your mobile number. You'll sign in and be shown something that looks like webmail from the early 90s.

The thing that makes this interesting is that you can look in the left-hand menu and see a link called "SMS Alerts". Visit it. That will let you do things like set the number of SMSs you wish to receive a month (I chose "1000"), and the hours during which delivery will be made (I chose "All the time").

Anyway if you go through this dance you'll end up with an email address example@orange.net, and when an email arrives at that destination an SMS will be sent to your phone.

The content of the SMS will be the subject of the mail, truncated if necessary, so you can send a hello message to yourself like this:

echo "nop" | mail -s "Hello, urgent message is present" username@orange.net

Delivery seems pretty reliable, and I've scheduled the mailbox to be purged every week, to avoid it getting full:

Hostname pop.orange.net
Username Your mobile number
Password Your password
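
If you'd rather automate the purge than rely on the web UI, a weekly fetchmail run along these lines should do it (an untested sketch; the number and password are placeholders):

poll pop.orange.net protocol pop3
    username "07700900000" password "s3cret" flush

The flush keyword deletes each message from the server once it has been retrieved.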

If you wished to send mail from this you can use smtp.orange.net, but I pity the fool who used their mobile phone company for their primary email address.

Syndicated 2014-08-05 09:56:48 from Steve Kemp's Blog

5 Aug 2014 bagder   » (Master)

I’m eight months in on my Mozilla adventure

I started working for Mozilla in January 2014. Here are some reflections from my first months as a Mozilla employee.

Working from home

I’ve worked completely from home during some short periods before in my life so I had an idea what it would be like. So far, it has been even better than I had anticipated. It suits me so well it is almost scary! No commutes. No delays due to traffic. No problems ever with over-crowded trains or buses. No time wasted going to work and home again. And I’m around when my kids get home from school and it’s easy to receive deliveries all days. I don’t think I ever want to work elsewhere again… :-)

Another effect of my work place is that I probably have become somewhat more active on social networks and IRC. If I don’t use those means, I may spend whole days without talking to any humans.

Also, I’m the only Mozilla developer in Sweden – although we have a few more employees in Sweden.

Daniel's home office

The freedom

I have freedom at work. I control and decide a lot of what I do and I get to do a lot of what I want at work. I can work during the hours I want. As long as I deliver, my employer doesn’t mind. The freedom isn’t just about working hours but I also have a lot of control and saying about what I want to work on and what I think we as a team should work on going further.

The not counting hours

For the last 16 years I’ve been a consultant where my customers almost always have paid for my time. Paid by the hour I spent working for them. For the last 16 years I’ve counted every single hour I’ve worked and made sure to keep detailed logs and tracking of whatever I do so that I can present that to the customer and use that to send invoices. Counting hours has been tightly integrated in my work life for 16 years. No more. I don’t count my work time. I start work in the morning, I stop work in the evening. Unless I work longer, and sometimes I start later. And sometimes I work on the weekend or late at night. And I do meetings after regular “office hours” many times. But I don’t keep track – because I don’t have to and it would serve no purpose!

The big code base

I work with Firefox, in the networking team. Firefox has about 10 million lines C and C++ code alone. Add to that everything else that is other languages, glue logic, build files, tests and lots and lots of JavaScript.

It takes time to get acquainted with such a large and old code base, and much of the architecture, or the traces of the original architecture, was designed almost 20 years ago, in ways that not many people would still call good or preferable.

Mozilla is using Mercurial as the primary revision control tool, and I started out convinced that I should too, and really get to learn it. But darn it, it is really too similar to git, and yet lots of words are intermixed and used as commands that don’t do the same thing as they do in git, so it turns out really confusing, and I felt handicapped a little bit too often. I’ve switched over to using the git mirror and I’m now a much happier person. A couple of months in, I’ve not once been forced to switch away from using git. Mostly thanks to fancy scripts and helpers from fellow colleagues who made this jump before me and already paved the road.
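
For anyone who wants to do the same: the mirror I mean is, I believe, the gecko-dev one on GitHub, so getting started is just a clone away:

git clone https://github.com/mozilla/gecko-dev.git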

C++ and code standards

I’m a C guy (note the absence of “++”). I’ve primarily developed in C for the whole of my professional developer life – which is approaching 25 years. Firefox is a C++ fortress. I know my way around most C++ stuff but I’m not “at home” with C++ in any way just yet (I never was) so sometimes it takes me a little time and reading up to get all the C++-ishness correct. Templates, casting, different code styles, subtleties that isn’t in C and more. I’m slowly adapting but some things and habits are hard to “unlearn”…

The publicness and Bugzilla

I love working full-time on an open source project. Everything I do during my work days is public knowledge. We work a lot in Bugzilla, where all bugs (well, except the security-sensitive ones) are open and public. My comments, my reviews, my flaws and my patches can all be reviewed, ridiculed or improved by anyone out there who feels like doing it.

Development speed

There are several hundred developers involved in basically the same project and products. The frequency and speed with which changes are crammed into the source repository is mind-boggling: several hundred commits daily. Many hundreds – sometimes up to a thousand – new bug reports are filed, daily.

Yet the slowness of moving some bugs forward

Moving a particular bug forward to the point where it actually lands and is included in pending releases can be a lot of work, and it can be tedious. It is a large project with lots of legacy, traditions and people with opinions on how things should be done. Getting an old behavior changed can take a whole lot of time, massaging and discussion before it gets through. Don't get me wrong, this is a good thing – it just stands in direct contrast to my previous paragraph about development speed.

In the public eye

I knew about Mozilla before I started here. I knew Firefox. Just about every person I've ever mentioned those two brands to has known about at least Firefox. This is different from what I'm used to. Of course, hardly anyone fully grasps what I actually do on a day-to-day basis, but I've long given up on even trying to explain that to family and friends. Unless they really insist.

Vitriol and expectations of high standards

I must say that being in the Mozilla camp when changes are made or announced has given me a less favorable view of the human race. Almost any change is met by a certain number of users who are very aggressively against it. All changes, really. “If you do that I'll be forced to switch to Chrome” is a very common “threat” – as if that would A) work or B) be a browser that cares more about such “conservative loonies” (consider that my personal term for such people). I can only assume that the Chrome team gets a fair share of the same sort of threats in the other direction…

Still, a lot of people out there – perhaps especially in the Free Software world – seem to hold Mozilla to very high standards. This is both good and bad. The expectation of being very good also comes from people who aren't even Firefox users: we must remain the bright light in a world that grows darker. In my (biased) view, that tends to lead to unfair criticism. Other browsers can make some of those changes without anyone raising an eyebrow, but when Mozilla does something similar in Firefox, a shitstorm breaks out. Many of the people criticizing us for making change NN already use browser Y, which has been doing NN for a good while already…

Or maybe I’m just not seeing these things with clear enough eyes.

How does Mozilla make money?

Yep. This is by far the most common question I've gotten from friends when I mention who I work for. In fact, it's just about the only question I get from a lot of people… (possibly because after that we'd get into complicated questions such as what exactly I do there?)

curl and IETF

I’m grateful that Mozilla allows me to spend part of my work time working on curl.

I’m also happy to now work for a company that allows me to attend to IETF/httpbis and related activities much better than ever I’ve had the opportunity to in the past. Previously I’ve pretty much had to spend spare time and my own money, which has limited my participation a great deal. The support from Mozilla has allowed me to attend to two meetings so far during the year, in London and in NYC and I suspect there will be more chances in the future.

Future

I have only just started. I hope to take on more and bigger challenges and tasks as I warm up and get deeper into everything. I want to make a difference. See you in Bugzilla.

Syndicated 2014-08-05 08:56:08 from daniel.haxx.se
