Recent blog entries

2 Oct 2014 Skud   » (Master)

Travels: London and Berlin (Oct 7th-20th, ish)

I haven’t mentioned this on here yet so I thought I’d better do so before I actually, you know, board the plane.

I’m heading over to Europe next week and the week after. The main reason I’m going is AdaCamp in Berlin, which I will be helping run, but before and after that I’ll also be spending some time in the UK, including running this Growstuff event in London on Oct 18-19 to get stuck into some serious code with some of our UK-based developers.

If you are in the UK and are interested in food innovation, open data, technology for social good, sustainability, inclusive open source projects, or related fields, I would love to meet you! If you can’t make it to the Growstuff code sprint but would like to catch up for a coffee or something, drop me a line.

Syndicated 2014-10-02 04:29:13 from Infotropism

1 Oct 2014 danstowell   » (Journeyer)

Carpenters Estate - Is it viable or not?

Newham Council has handled the current Carpenters Estate protest shockingly badly. Issuing a press release describing the protesting mothers as "agitators and hangers-on" is just idiotically bad handling.

BUT they have also described Carpenters Estate as not "viable", and many commentators (such as Zoe Williams, Russell Brand) have lampooned them for it. After all, they can see the protesting mothers occupying a perfectly decent-looking little home. How can it be not "viable"?

Are they judging viability compared against the market rate for selling off the land? That's what Zoe Williams says, and that's what I assumed too from some conversations. But that's not it at all.

Newham's current problem with the Carpenters Estate is basically caused by the two different types of housing stock on the estate:

  • They have some tall old tower blocks which housed many hundreds of people, but they can't renovate them to a basic decent standard - the council can't afford to do it themselves and the leaseholders couldn't afford to shoulder the costs. (In council reports it's been calculated that the renovation cost per flat would be more than the value of the flat itself - which means that the private leaseholders totally wouldn't be able to get a mortgage for the renovations.)
  • All the little two-storey houses next to the tower blocks are basically viable, at least in the sense that they should be easy to refurbish. However, they can't just leave people in those houses if they intend to demolish the tower blocks. I'm no expert in demolition but I assume it'd be impossible to demolish the 23-storey block next door while keeping the surrounding houses safe, and that's why Doran Walk is also slated for demolition.

So "not viable" means they can't find any way to refurbish those tower blocks to basic living standards - especially not in the face of the Tory cuts to council budgets - and that affects the whole estate as well as just the tower blocks. This appears to be the fundamental reason they're "decanting" people, in order to demolish and redevelop the whole place. (Discussed eg in minutes from 2012.) It's also the reason they have a big PR problem right now, because those two-storey houses appear "viable" and perfectly decent homes, yet they do indeed have a reason to get everyone out of them!

After the UCL plan for Carpenters Estate fell through it's understandable that they're still casting around for development plans, and we might charitably assume the development plans would be required to include plenty of social housing and affordable housing. You can see from the council minutes that they do take this stuff seriously when they approve/reject plans.

(Could the council simply build a whole new estate there, develop a plan itself, without casting around for partners? Well yes, it's what councils used to do before the 1980s. It's not their habit these days, and there may be financial constraints that make it implausible, but in principle I guess it must be an option. Either way, that doesn't really affect the question of viability, which is about the current un-demolished estate.)

But the lack of a plan has meant that there's no obvious "story" of what's supposed to be happening with the estate, which just leaves space for people to draw their own conclusions. I don't think anyone's deliberately misrepresenting what the council means when they talk about viability. I think the council failed badly in some of its early communication, and that led to misunderstandings that fed too easily into a narrative of bureaucratic excuses.

Syndicated 2014-10-01 16:15:10 (Updated 2014-10-01 16:35:26) from Dan Stowell

1 Oct 2014 mdz   » (Master)

Join me in supporting The Ada Initiative

When I first read that Linux kernel developer Valerie Aurora would be changing careers to work full-time on behalf of women in open source communities, I never imagined it would lead so far so fast. Today, The Ada Initiative is a non-profit organization with global reach, whose programs have helped create positive change for women in a wide range of communities beyond open source. Building on this foundation, imagine how much more they can do in the next four years! That’s why I’m pledging my continuing support, and asking you to join me.

For the next 7 days, I will personally match your donations up to $4,096. My employer, Heroku (Salesforce.com), will match my donations too, so every dollar you contribute will be tripled!

My goal is that together we will raise over $12,000 toward The Ada Initiative’s 2014 fundraising drive.

Donate now

Since about 1999, I have been working in open source communities like Debian and Ubuntu, where women are vastly underrepresented even compared to the professional software industry. Like other men in these communities, I struggled to learn what I could do to change this. Such a severe imbalance can only be addressed by systemic change, and I hardly knew where to begin. I worked to raise awareness by writing and speaking, and joined groups like Debian Women, Ubuntu Women and Geek Feminism. I worked on my own bias and behavior to avoid being part of the problem myself. But it never felt like enough, and sometimes felt completely hopeless.

Perhaps worst of all, I saw too many women burning out from trying to change the system. It was often taxing just to participate as a woman in a male-dominated community, and the extra burden of activism seemed overwhelming. They were all volunteers, doing this work in evenings and weekends around work or study, and it took a lot of time, energy and emotional reserve to deal with the backlash they faced for speaking out about sexism. Valerie Aurora and Mary Gardiner helped me to see that an activist organization with full-time staff could be part of the solution. I joined the Ada Initiative advisory board in February 2011, and the board of directors in April.


Today, The Ada Initiative is making a difference not only in my community, but in my workplace as well. When I joined Heroku in 2012, none of the engineers were women, and we clearly had a lot of work to do to change that. In 2013, I attended AdaCamp SF along with my colleague Peter van Hardenberg, joining the first “allies track”, open to participants of any gender, for people who wanted to learn the skills to support the women around them. We’ve gone on to host two ally skills workshops of our own for Heroku employees, one taught by Ada Initiative staff and another by a member of our team, security engineer Leigh Honeywell. These workshops taught interested employees simple, everyday ways to take positive action to challenge sexism and create a better workplace for women. The Ada Initiative also helped us establish a policy for conference sponsorship which supports our gender diversity efforts. Today, Heroku engineering is about 10% women and growing. The Ada Initiative’s programs are helping us to become the kind of company we want to be.

I attended the workshop with a group of Heroku colleagues, and it was a powerful experience to see my co-workers learning tactics to support women and intervene in sexist situations. Hearing them discuss power and privilege in the workplace, and the various “a-ha!” moments people had, was very encouraging and made me feel heard and supported.
– Leigh Honeywell

If you want to see more of these programs from The Ada Initiative, please contribute now:
Donate now


Syndicated 2014-10-01 16:30:23 from We'll see | Matt Zimmerman

1 Oct 2014 bagder   » (Master)

Good bye Rockbox

I’m officially not taking part in anything related to Rockbox anymore. I’ve unsubscribed and I’m out.

In the fall of 2001, my friend Linus and my brother Björn had both bought the portable Archos Player, a harddrive-based mp3 player, and, slightly underwhelmed by its firmware, decided they would have a go at trying to improve it. All three of us had been working with embedded systems for many years already and I was immediately attracted to the idea of reverse engineering this kind of device and trying to improve it. It sounded like a blast to me.

In December 2001 we had the first test program actually running on the device and flashing an LED. The first little step of what would become a rather big effort. We wrote a GPLed mp3 player firmware replacement, entirely from scratch without re-using any original parts. A full home-grown tiny multitasking operating system with a UI.

Fast-forwarding through history: we managed to get a really good firmware done for the early Archos players and we managed to move on to follow-up mp3 players too. After a decade or so, we supported well over 60 different mp3 player models, we played every music format known to man, and we usually had better battery life than the original firmwares. We could run Doom and we had a video player, a plugin system and a system full of crazy things.

We gathered a large number of skilled and intelligent hackers from all over the world who contributed to make this possible. We had yearly meetups, or developer conferences, and we hung out on IRC every day of the week. I still hang out on our off-topic IRC channel!

Over time, smart phones emerged as the preferred devices people would use to play music while on the go. We ported Rockbox over to Android as an app, but our pixel-based UI was never really suitable for the flexible Android world and I also think that most contributors were more interested in hacking devices than writing Android apps. The app never really attracted many users or developers so while functional it never “took off”.

mp3 players are already a thing of the past and will soon fall into the cave of forgotten old things our children will never even know or care about.

Developers and users of Rockbox have mostly moved on to other ventures. I too stopped actually contributing to the project several years ago but I was running build clients for a long while and I’ve kept being subscribed to the development mailing list. Until now. I’m now finally cutting off the last rope. Good bye Rockbox, it was fun while it lasted. I had a massive amount of great fun and I learned a lot while in the project.

Rockbox

Syndicated 2014-10-01 08:53:48 from daniel.haxx.se

1 Oct 2014 mikal   » (Journeyer)

On layers

There's been a lot of talk recently about what we should include in OpenStack and what is out of scope. This is interesting, in that many of us used to believe that we should do "everything". I think what's changed is that we're learning that solving all the problems in the world is hard, and that we need to re-focus on our core products. In this post I want to talk through the various "layers" proposals that have been made in the last month or so. Layers don't directly address what we should include in OpenStack or not, but they are a useful mechanism for trying to break up OpenStack into simpler-to-examine chunks, and I think that makes them useful in their own right.

I would address what I believe the scope of the OpenStack project should be, but I feel that doing so would make this post so long that no one would ever actually read it. Instead, I'll cover that in a later post in this series. For now, let's explore what people are proposing as a layering model for OpenStack.

What are layers?

Dean Troyer did a good job of describing a layers model for the OpenStack project on his blog quite a while ago. He proposed the following layers (this is a summary, you should really read his post):

  • layer 0: operating system and Oslo
  • layer 1: basic services -- Keystone, Glance, Nova
  • layer 2: extended basics -- Neutron, Cinder, Swift, Ironic
  • layer 3: optional services -- Horizon and Ceilometer
  • layer 4: turtles all the way up -- Heat, Trove, Moniker / Designate, Marconi / Zaqar


Dean notes that Neutron would move to layer 1 when nova-network goes away and Neutron becomes required for all compute deployments. Dean's post was also over a year ago, so it misses services like Barbican that have appeared since then. Services are only allowed to require services from lower-numbered layers, but can use services from higher-numbered layers as optional add-ons. So Nova for example can use Neutron, but cannot require it until it moves into layer 1. Similarly, there have been proposals to add Ceilometer as a dependency to schedule instances in Nova, and if we were to do that then we would need to move Ceilometer down to layer 1 as well. (I think doing that would be a mistake by the way, and have argued against it during at least two summits).
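To make that rule concrete, here's a minimal sketch (illustrative only; the layer assignments are Dean's as summarised above, and the check itself is just my own toy encoding of the rule, not anything from his post):

    # Illustrative only: Dean's layer assignments as summarised above.
    LAYERS = {
        "oslo": 0,
        "keystone": 1, "glance": 1, "nova": 1,
        "neutron": 2, "cinder": 2, "swift": 2, "ironic": 2,
        "horizon": 3, "ceilometer": 3,
        "heat": 4, "trove": 4, "designate": 4, "zaqar": 4,
    }

    def may_require(service, dependency):
        # A service may only *require* services from lower-numbered layers;
        # anything in the same or a higher layer can only be an optional add-on.
        return LAYERS[dependency] < LAYERS[service]

    print(may_require("nova", "neutron"))  # False, until Neutron moves to layer 1
    print(may_require("trove", "nova"))    # True: layer 4 may depend on layer 1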

Sean Dague re-ignited this discussion with his own blog post relatively recently. Sean proposes new names for most of the layers, but the intent remains the same -- a compute-centric view of the services that are required to build a working OpenStack deployment. Sean and Dean's layer definitions are otherwise strongly aligned, and Sean notes that the probability of seeing something deployed at a given installation reduces as the layer count increases -- so for example Trove is way less commonly deployed than Nova, because the set of people who want a managed database as a service is smaller than the set of people who just want to be able to boot instances.

Now, I'm not sure I agree with the compute centric nature of the two layers proposals mentioned so far. I see people installing just Swift to solve a storage problem, and I think that's a completely valid use of OpenStack and should be supported as a first class citizen. On the other hand, resolving my concern with the layers model there is trivial -- we just move Swift to layer 1.

What do layers give us?

Sean makes a good point about the complexity of OpenStack installs and how we scare away new users. I agree completely -- we show people our architecture diagrams which are deliberately confusing, and then we wonder why they're not impressed. I think we do it because we're proud of the scope of the thing we've built, but I think our audiences walk away thinking that we don't really know what problem we're trying to solve. Do I really need to deploy Horizon to have working compute? No of course not, but our architecture diagrams don't make that obvious. I gave a talk along these lines at pyconau, and I think as a community we need to be better at explaining to people what we're trying to do, while remembering that not everyone is as excited about writing a whole heap of cloud infrastructure code as we are. This is also why the OpenStack miniconf at linux.conf.au 2015 has pivoted from being a generic OpenStack chatfest to being something more solidly focussed on issues of interest to deployers -- we're just not great at talking to our users and we need to reboot the conversation at community conferences until it's something which meets their needs.


We intend this diagram to amaze and confuse our victims


Agreeing on a set of layers gives us a framework within which to describe OpenStack to our users. It lets us communicate the services we think are basic and always required, versus those which are icing on the cake. It also lets us explain the dependencies between projects better, and that helps deployers work out what order to deploy things in.

Do layers help us work out what OpenStack should focus on?

Sean's blog post then pivots and starts talking about the size of the OpenStack ecosystem -- or the "size of our tent" as he phrases it. While I agree that we need to shrink the number of projects we're working on at the moment, I feel that the blog post is missing a logical link between the previous layers discussion and the tent size conundrum. It feels to me that Sean wanted to propose that OpenStack focus on a specific set of layers, but didn't quite get there for whatever reason.

Next Monty Taylor had a go at furthering this conversation with his own blog post on the topic. Monty starts by making a very important point -- he (like everyone involved) wants the OpenStack community to be as inclusive as possible. I want lots of interesting people at the design summits, even if they don't work directly on projects that OpenStack ships. You can be a part of the OpenStack community without having our logo on your product.

A concrete example of including non-OpenStack projects in our wider community was visible at the Atlanta summit -- I know for a fact that there were software engineers at the summit who work on Google Compute Engine. I know this because I used to work with them at Google when I was an SRE there. I have no problem with people working on competing products being at our summits, as long as they are there to contribute meaningfully in the sessions, and not just take from us. It needs to be a two way street. Another concrete example is Ceph. I think Ceph is cool, and I'm completely fine with people using it as part of their OpenStack deploy. What upsets me is when people conflate Ceph with OpenStack. They are different. They're separate. And that is fine. Let's just not confuse people by saying Ceph is part of the OpenStack project -- it simply isn't because it doesn't fall under our governance model. Ceph is still a valued member of our community and more than welcome at our summits.

Do layers help us work out what to focus OpenStack on for now? I think they do. Should we simply say that we're only going to work on a single layer? Absolutely not. What we've tried to do up until now is have OpenStack be a single big thing, what we call "the integrated release". I think layers give us a tool to find logical ways to break that thing up. Perhaps we need a smaller integrated release, but then continue with the other projects on their own release cycles? Or perhaps they release at the same time, but we don't block the release of a layer 1 service on the basis of release critical bugs in a layer 4 service?

Is there consensus on what sits in each layer?

Looking at the posts I can find on this topic so far, I'd have to say the answer is no. We're close, but we're not aligned yet. For example, one proposal has a tweak to the previously proposed layer model that adds Cinder, Designate and Neutron down into layer 1 (basic services). The author argues that this is because stateless cloud isn't particularly useful to users of OpenStack. However, I think this is wrong to be honest. I can see that stateless cloud isn't super useful by itself, but that argument assumes that OpenStack is the only piece of infrastructure that a given organization has. Perhaps that's true for the public cloud case, but the vast majority of OpenStack deployments at this point are private clouds. So, you're an existing IT organization and you're deploying OpenStack to increase the level of flexibility in compute resources. You don't need to deploy Cinder or Designate to do that. Let's take the storage case for a second -- our hypothetical IT organization probably already has some form of storage -- a SAN, or NFS appliances, or something like that. So stateful cloud is easy for them -- they just have their instances mount resources from those existing storage pools like they would any other machine. Eventually they'll decide that hand managing that is horrible and move to Cinder, but that's probably later once they've gotten through the initial baby step of deploying Nova, Glance and Keystone.

The first step to using layers to decide what we should focus on is to decide what is in each layer. I think the conversation needs to revolve around that for now, because if we drift off into debating whether existing in a given layer means you're voted off the OpenStack island, then we'll never even come up with a set of agreed layers.

Let's ignore tents for now

The size of the OpenStack "tent" is the metaphor being used at the moment for working out what to include in OpenStack. As I say above, I think we need to reach agreement on what is in each layer before we can move on to that very important conversation.

Conclusion

Given the focus of this post is the layers model, I want to stop introducing new concepts here for now. Instead let me summarize where I stand so far -- I think the layers model is useful. I also think the layers should be an inverted pyramid -- layer 1 should be as small as possible for example. This is because of the dependency model that the layers model proposes -- it is important to keep the list of things that a layer 2 service must use as small and coherent as possible. Another reason to keep the lower layers as small as possible is because each layer represents the smallest possible increment of an OpenStack deployment that we think is reasonable. We believe it is currently reasonable to deploy Nova without Cinder or Neutron for example.

Most importantly of all, having those incremental stages of OpenStack deployment gives us a framework we have been missing in talking to our deployers and users. It makes OpenStack less confusing to outsiders, as it gives them bite sized morsels to consume one at a time.

So here are the layers as I see them for now:

  • layer 0: operating system, and Oslo
  • layer 1: basic services -- Keystone, Glance, Nova, and Swift
  • layer 2: extended basics -- Neutron, Cinder, and Ironic
  • layer 3: optional services -- Horizon, and Ceilometer
  • layer 4: application services -- Heat, Trove, Designate, and Zaqar


I am not saying that everything inside a single layer is required to be deployed simultaneously, but I do think it's reasonable for Ceilometer to assume that Swift is installed and functioning. The big difference here between my view of layers and that of Dean, Sean and Monty is that I think that Swift is a layer 1 service -- it provides basic functionality that may be assumed to exist by services above it in the model.

I believe that when projects come to the Technical Committee requesting incubation or integration, they should specify what layer they see their project sitting at, and the justification for a lower layer number should be harder than that for a higher layer. So for example, we should be reasonably willing to accept proposals at layer 4, whilst we should be super concerned about the implications of adding another project at layer 1.

In the next post in this series I'll try to address the size of the OpenStack "tent", and what projects we should be focussing on.

Tags for this post: openstack kilo technical committee tc layers
Related posts: My candidacy for Kilo Compute PTL; Juno TC Candidacy; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic


Syndicated 2014-09-30 18:57:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

1 Oct 2014 Skud   » (Master)

Why I just stopped using IM (hint: fucking Google)

tl;dr – if we usually talk on IM/GTalk you won’t see me around any more. Use IRC, email, or other mechanisms (listed at bottom of this post) to contact me.


Background: Google stopped supporting open standards for IM a few years ago.

Other background: when I changed my name in 2011 I grabbed a GMail account with that name, just in case it would be useful. I didn’t use it, though — instead I forwarded any mail from it to my actual email address, the one I’ve had since the turn of the century: skud@infotrope.net, and set that address as my default for everything I could find.

Unfortunately Google didn’t honour those preferences, and kept exposing my unused GMail address to people. When I signed up for Google Groups, it would be exposed. When I shared Google Docs, it would be exposed. I presume it was being exposed all kinds of other ways, too, because people kept seeing my GMail address and thinking it was the right way to contact me. So in addition to the forwarding I also set up a vacation reminder telling anyone who emailed me there to use my actual address and not to use the Google one.

But Google wasn’t done yet. They kept dropping stuff into my GMail account and not forwarding it. Comments on Google docs. Invitations. Administrative notices. IM logs that I most definitely did not want archived. These were all piling up silently in an account I never logged into.

Eventually, after I missed out on several messages from a volunteer offering to help with Growstuff, I got fed up and found out how to completely delete a GMail account. I did this a few weeks ago.

Fast forward to last night, when my Internet connection flaked out right before I went to bed. I looked at all my disconnected, blank windows, shrugged, and crashed for the night. This morning, everything was better and all my apps set about reconnecting.

Except that Adium, the app I use for instant messaging, was asking me for the GTalk password for skud@infotrope.net. Weird, I thought, but I had the password saved in my keychain and resubmitted it. Adium, or more properly GTalk, didn’t like it. I tried a few more times, including resetting my app password (I use two-factor auth). No luck.

Eventually I found the problem. Via this Adium bug report I learned that a GMail account is required to use GTalk. Even if you don’t use (and have never used) your GMail address to login to it, and don’t give people a GMail address to add you as a contact.

So, my choices at this point are:

  1. Sign up again for GMail, continue to have an unused and unwanted email address exposed to the public, miss important messages, and risk security/privacy problems with archiving of stuff I don’t want archived; or,
  2. Set up Jabber/XMPP, which will take a fair amount of messing around (advice NOT wanted, I know what is involved), and which will only let me talk to friends who don’t use GMail/GTalk (a small minority); or,
  3. Not be available on IM.

For now I am going with option 3. If you are used to talking to me via IM at my skud@infotrope.net address, you can now contact me as follows.

IRC: I am Skud on irc.freenode.net and on some other specialist networks. On Freenode I habitually hang around on #growstuff and intermittently on other channels. Message me any time; if I’m not awake/online I’ll see it when I return.

Email: skud@infotrope.net as ever, or skud@growstuff.org for Growstuff and related work.

Social media: I’m on social media hiatus and won’t be using it to chat at length, but still check mentions/messages semi-regularly.

Text/SMS: If you have my number, you know where to find me.

Voice/video (including phone, Skype, etc): By arrangement. Email me if you want to set something up.

To my good friends who I used to chat to all the time and now won’t see around so much: please let me know if you use Jabber/XMPP and if so what your address is; if you do, then I’ll prioritise getting that set up.

Syndicated 2014-09-30 23:57:30 from Infotropism

30 Sep 2014 etbe   » (Master)

Links September 2014

Matt Palmer wrote a short but informative post about enabling DNSSEC in a zone [1]. I really should set up DNSSEC on my own zones.

Paul Wayper has some insightful comments about the Liberal party’s nasty policies towards the unemployed [2]. We really need a Basic Income in Australia.

Joseph Heath wrote an interesting and insightful article about the decline of the democratic process [3]. While most of his points are really good I’m dubious of his claims about twitter. When used skillfully twitter can provide short insights into topics and teasers for linked articles.

Sarah O wrote an insightful article about NotAllMen/YesAllWomen [4]. I can’t summarise it well in a paragraph, I recommend reading it all.

Betsy Haibel wrote an informative article about harassment by proxy on the Internet [5]. Everyone should learn about this before getting involved in discussions about “controversial” issues.

George Monbiot wrote an insightful and interesting article about the referendum for Scottish independence and the failures of the media [6].

Mychal Denzel Smith wrote an insightful article “How to know that you hate women” [7].

Sam Byford wrote an informative article about Google’s plans to develop and promote cheap Android phones for developing countries [8]. That’s a good investment in future market share by Google and good for the spread of knowledge among people all around the world. I hope that this research also leads to cheap and reliable Android devices for poor people in first-world countries.

Deb Chachra wrote an insightful and disturbing article about the culture of non-consent in the IT industry [9]. This is something we need to fix.

David Hill wrote an interesting and informative article about the way that computer game journalism works and how it relates to GamerGate [10].

Anita Sarkeesian shares the most radical thing that you can do to support women online [11]. Wow, the world sucks more badly than I realised.

Michael Daly wrote an article about the latest evil from the NRA [12]. The NRA continues to demonstrate that claims about “good people with guns” are lies, the NRA are evil people with guns.

Related posts:

  1. Links July 2014 Dave Johnson wrote an interesting article for Salon about companies...
  2. Links May 2014 Charmian Gooch gave an interesting TED talk about her efforts...
  3. Links September 2013 Matt Palmer wrote an insightful post about the use of...

Syndicated 2014-09-30 13:55:48 from etbe - Russell Coker

30 Sep 2014 mikal   » (Journeyer)

Blueprints implemented in Nova during Juno

As we get closer to releasing the RC1 of Nova for Juno, I've started collecting a list of all the blueprints we implemented in Juno. This was mostly done because it helps me write the release notes, but I am posting it here because I am sure that others will find it handy too.

Process



Ongoing behind the scenes work

Object conversion

Scheduler
  • Support sub-classing objects. launchpad specification
  • Stop using the scheduler run_instance method. Previously the scheduler would select a host, and then boot the instance. Instead, let the scheduler select hosts, but then return those so the caller boots the instance. This will make it easier to move the scheduler to being a generic service instead of being internal to nova. launchpad specification
  • Refactor the nova scheduler into being a library. This will make splitting the scheduler out into its own service later easier. launchpad specification
  • Move nova to using the v2 cinder API. launchpad specification
  • Move prep_resize to conductor in preparation for splitting out the scheduler. launchpad specification


API
  • Use JSON schema to strongly validate v3 API request bodies. Please note this work will later be released as v2.1 of the Nova API. launchpad specification
  • Provide a standard format for the output of the VM diagnostics call. This work will be exposed by a later version of the v2.1 API. launchpad specification
  • Move to the OpenStack standard name for the request id header, in a backward compatible manner. launchpad specification
  • Implement the v2.1 API on the V3 API code base. This work is not yet complete. launchpad specification


Other
  • Refactor the internal nova API to make the nova-network and neutron implementations more consistent. launchpad specification


General features

Instance features

Networking

Scheduling
  • Extensible Resource Tracking. The set of resources tracked by nova is hard coded; this change makes that extensible, which will allow plug-ins to track new types of resources for scheduling. launchpad specification
  • Allow a host to be evacuated, but with the scheduler selecting destination hosts for the instances moved. launchpad specification
  • Add support for host aggregates to scheduler filters. launchpad: disk; instances; and IO ops specification


Other
  • i18n enablement for Nova: turn on the lazy translation support from Oslo i18n and update Nova to adhere to the restrictions this adds to translatable strings. launchpad specification
  • Offload periodic task sql query load to a slave sql server if one is configured. launchpad specification
  • Only update the status of a host in the sql database when the status changes, instead of every 60 seconds. launchpad specification
  • Include status information in API listings of hypervisor hosts. launchpad specification
  • Allow API callers to specify more than one status to filter by when listing services. launchpad specification
  • Add quota values to constrain the number and size of server groups a user can create. launchpad specification


Hypervisor driver specific

Hyper-V

Ironic

libvirt

vmware
  • Move the vmware driver to using the oslo vmware helper library. launchpad specification
  • Add support for network interface hot plugging to vmware. launchpad specification
  • Refactor the vmware driver's spawn functionality to be more maintainable. This work was internal, but is mentioned here because it significantly improves the supportability of the VMWare driver. launchpad specification


Tags for this post: openstack juno blueprints implemented


Syndicated 2014-09-30 05:05:00 (Updated 2014-09-30 21:08:59) from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

30 Sep 2014 Skud   » (Master)

Open food interoperability: entities, unique IDs, and semantic equivalence

This is a post I made on Growstuff Talk to propose some initial steps towards interoperability for open food projects. If you have comments, probably best to make them on that post.


I wanted to post about some concepts from my past open data work which have been very much in my mind when working on Growstuff, but which I’m not sure I’ve ever expressed in a way that helps everyone understand their importance.

Just for background: from 2007-2011 I worked on Freebase, a massive general-purpose open data repository which was acquired by Google in 2010 and now forms part of their “Knowledge” area. While working at Google I also worked as a liaison between Google search/knowledge and the Wikimedia Foundation, and presented at a Wikimedia data summit where we proposed the first stages of what would become Wikidata — an entity-based data store for all of Wikimedia’s other projects.

Freebase and Wikidata are part of what is broadly known as the Semantic Web, which has to do with providing data and meaning via web technologies, using common data formats etc.


The Semantic Web movement has several different branches, ranging from the extremely abstract and academic, to the quite mundane and pragmatic. Some of the more common bits of Semantic Web technology you might have come across are microformats, for instance, which let you add semantic meaning to your HTML markup, such as defining the meanings of links to things like licenses or marking up recipes on food blogs and the like. There is also Semantic Mediawiki which adds some semantic features on top of a wiki, to allow you to query for information in interesting ways; Practical Plants uses SMW and its search is based on this semantic data.

At the more academic end of the Semantic Web world are things like RDF which creates a directed graph of semantic data which can be queried via a language called SPARQL, and attempts to define data standards and ontologies for a wide range of purposes. These are generally heavyweight and mostly of interest to researchers, academics, etc, though some aspects of this work are starting to seep through into consumer technology.

This is all background, however. What I wanted to talk about was the single most important thing we learned while working on Freebase, which is this:

Entities must have unique identifiers.

Here’s what I mean. Let’s say you know three people all called Mary Smith. Then someone says, “It’s Mary Smith’s birthday today.” Which one are they referring to? You don’t know. In any system based around knowledge, you need to have some kind of unique ID for each entity to avoid ambiguity. So instead you might say, “Mary Smith, whose employee number is E453425” or “Mary Smith, whose email address is mary@example.com”, or “Mary Smith, whose primary key in our database is 789”.

When working on our proposal for phase 1 of Wikidata, one of the things we realised is that the Wikimedia community — all the languages of Wikipedia, the Wikimedia Commons, etc — lacked unique identifiers for real-world entities. For instance, Barack Obama was http://en.wikipedia.org/wiki/Barack_Obama on English Wikipedia and http://de.wikipedia.org/wiki/Barack_Obama on German Wikipedia and http://commons.wikimedia.org/wiki/Barack_Obama on Wikimedia Commons and http://en.wikinews.org/wiki/Category:Barack_Obama on Wikinews, but none of these was his definitive identifier.

Meanwhile, interwiki links — the links between English and German and French and Swahili and Korean wikipedias — were maintained by hand (or, actually, by a bot) that had to update every wikipedia whenever a page was added or changed on any of them. This was a combinatoric exercise: with 2 wikis, there are two links (A -> B and B -> A). With 5 wikis there are (4 + 3 + 2 + 1) * 2 = 20 links. With N wikis, there are N * (N - 1) links, or to put it another way, 50 wikis would mean 2,450 links between them, and that’s for every single article. This was wildly inefficient to maintain!

Wikidata’s “phase 1” was to create an entity store for Wikimedia projects, where each concept or entity — “Barack Obama” or “semantic web” or “tomato” — would have a central identity which could be linked to. Then, each Wikimedia project could say “This page describes entity XYZ”, or conversely Wikidata could say “this entity is described on these pages”, and suddenly the work of the interwiki bot became much easier: it meant that each new wiki added would only mean one new link, not a quadratically-expanding web of links.
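For a sense of the scale involved, here’s a tiny back-of-the-envelope sketch (just illustrative arithmetic, nothing more) comparing the two approaches as the number of wikis grows:

    def pairwise_links(n):
        # Every wiki links to every other wiki, in both directions: n * (n - 1).
        return n * (n - 1)

    def hub_links(n):
        # With a central entity store, each wiki needs only one link to the hub.
        return n

    for n in (2, 5, 50):
        print(n, pairwise_links(n), hub_links(n))
    # 2 wikis:  2 pairwise links vs 2 hub links
    # 5 wikis:  20 vs 5
    # 50 wikis: 2450 vs 50 -- and that is per article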

We are in a similar position with open food data at present. There are dozens of open source food projects and that list doesn’t even touch on the ones that are more connected to recipes/eating/nutrition. We’re talking about how to interoperate between our various projects, but the key to interoperability is entity identification. If someone wants to mash up Growstuff’s harvest data with Openrecipes recipe search or the US FDA’s nutrition data, they need to know that Growstuff’s tomato is the same as the tomato you use in spaghetti sauce or the tomato that contains some percent of your RDA of potassium.

So how do we do this? None of our projects are sufficiently established, mature, or complete to claim the right to be the central ID repository. Apart from that, many of us have different focuses — edible plants, all types of plants, all types of living things, and all types of food (including non-animal/non-plant food) are some of the scopes I can mention offhand. Even the wide-ranging species databases like the Encyclopedia of Life don’t capture such information as crop varieties (eg. roma tomato, habanero pepper) that are important to veggie gardeners like Growstuff’s members.

Here’s what I would propose as an interim measure.

All open food projects need to link their major entities (eg. “crops” in Growstuff’s case) to one or more large, open, API-accessible data stores.

Examples of these include:

  • Wikipedia (any language, but English has the most articles)
  • Wikidata
  • Freebase
  • Encyclopedia of Life

By doing this, we can match data between projects. For instance, if Growstuff’s “tomato” links to the same entity as OpenFarm’s “tomato” and OpenFoodNetwork’s “tomato” and OpenRecipes’ “tomato” then we can reasonably assume they’re all talking about the same thing.
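In code terms that matching step is essentially a join on the shared identifier. Here’s a toy sketch, with entirely made-up data, of what it might look like:

    # Hypothetical extracts from two projects, keyed by a shared external
    # identifier (an English Wikipedia article name in this example).
    growstuff_crops = {"Tomato": {"name": "tomato", "harvests": 1234}}
    openrecipes_ingredients = {"Tomato": {"name": "tomato", "recipes": 5678}}

    # Any key present in both data sets can be treated as the same entity.
    for wikipedia_key in growstuff_crops.keys() & openrecipes_ingredients.keys():
        print(wikipedia_key,
              growstuff_crops[wikipedia_key],
              openrecipes_ingredients[wikipedia_key])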

Also, some of the above data sources provide APIs which allow us to pivot easily between data sets. For instance, Freebase’s query language allows you to ask questions like “given an entity that is identified as ‘tomato’ on English Wikipedia, what is its identity on the Encyclopedia of Life?”

To see this in action, paste the following query into Freebase’s interactive query editor:

    [{
      "a:key": [{
        "namespace": "/wikipedia/en",
        "value": "Tomato"
      }],
      "b:key": [{
        "namespace": "/biology/eol",
        "value": null
      }]    
    }]

As you’ll see, the result is “392557” or to put it another way http://eol.org/pages/392557 — the EOL page on tomatoes.
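You can also run the same query programmatically. The sketch below is only a rough illustration: it assumes Freebase’s public mqlread endpoint as it exists at the time of writing, and that the response comes back in the usual {"result": [...]} envelope (heavier use needs an API key):

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    # The same MQL query as above, expressed as a Python structure.
    query = [{
        "a:key": [{"namespace": "/wikipedia/en", "value": "Tomato"}],
        "b:key": [{"namespace": "/biology/eol", "value": None}],
    }]

    # Freebase's MQL read endpoint, as available at the time of writing.
    url = ("https://www.googleapis.com/freebase/v1/mqlread?" +
           urlencode({"query": json.dumps(query)}))

    with urlopen(url) as response:
        data = json.loads(response.read().decode("utf-8"))

    for result in data["result"]:
        eol_id = result["b:key"][0]["value"]
        print("EOL page: http://eol.org/pages/" + eol_id)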

From day 1, Growstuff has been tracking Wikipedia links for all our crops, to enable this sort of query against Freebase and so easily pivot to other data sets that Freebase knows about. If other projects take similar steps, this means that we are well on our way toward interoperability.

(As an aside, this is why we’re also having this other discussion about what to do about crop varieties that don’t have their own Wikipedia page, as this messes up the 1-to-1 relationship between Wikipedia entities and Growstuff entities. This may be something we just have to deal with, however, as no external data set will exactly match ours.)

Next steps

  1. I strongly encourage all open food projects to link their “crops” or similar entities to one or more major, open-licensed, API-accessible data source (ideally one which has its keys in Freebase).
  2. We should all expose these links via our APIs, data dumps, or whatever other mechanisms we use to make our open data available.
  3. Developers should be able to request data from our APIs based on these identifiers, either through query parameters or through REST API resources such as /crops/eol/392557.json (a rough sketch of what this could look like follows this list)
  4. We should use semantic markup/links to denote this entity equivalence on our webpages, eg. if Growstuff links to a Practical Plants page on the same crop, there should be a standard way to say “we consider these pages to refer to the same entity”. I’m not sure exactly what this is, yet, but if we do this it will benefit web crawlers, search engines, and other non-API consumers of our websites.
  5. We should look into developing a microformat for expressing crop information on a webpage, in collaboration with microformats.org. I expect, however, that it will be very hard to develop a workable ontology, since (for instance) some of our projects are interested in planting information and some aren’t, some are interested in sale and distribution and others aren’t, some are dealing with non-edible plants and others aren’t, etc. It may have to be as simple as “this is a crop and here are the names we have for it”.
  6. It would be great to put together some kind of visualisation like the linked open data cloud to show which open food projects are providing interoperable identities and how they connect to each other.
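As a rough illustration of the kind of REST resource mentioned in step 3, here is a small sketch. It’s written in Python with Flask purely for brevity; the route shape and the data are hypothetical, and each project (Growstuff itself is a Rails app) would do this in its own stack:

    from flask import Flask, abort, jsonify

    app = Flask(__name__)

    # Hypothetical in-memory mapping from EOL page IDs to local crop records.
    CROPS_BY_EOL_ID = {
        "392557": {"name": "tomato", "en_wikipedia": "Tomato"},
    }

    @app.route("/crops/eol/<eol_id>.json")
    def crop_by_eol_id(eol_id):
        # Look the crop up by its external (EOL) identifier rather than by
        # our own internal primary key.
        crop = CROPS_BY_EOL_ID.get(eol_id)
        if crop is None:
            abort(404)
        return jsonify(crop)

    if __name__ == "__main__":
        app.run()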

I’d like to get buy-in from other open food data projects on at least the general idea of matching our “crop” entities (whatever we call them) against some of the big databases. Who’s in?

Syndicated 2014-09-30 02:11:13 from Infotropism

30 Sep 2014 mikal   » (Journeyer)

My candidacy for Kilo Compute PTL

This is mostly historical at this point, but I forgot to post it here when I emailed it a week or so ago. So, for future reference:

I'd like another term as Compute PTL, if you'll have me.

We live in interesting times. openstack has clearly gained a large
amount of mind share in the open cloud marketplace, with Nova being a
very commonly deployed component. Yet, we don't have a fantastic
container solution, which is our biggest feature gap at this point.
Worse -- we have a code base with a huge number of bugs filed against
it, an unreliable gate because of subtle bugs in our code and
interactions with other openstack code, and have a continued need to
add features to stay relevant. These are hard problems to solve.

Interestingly, I think the solution to these problems calls for a
social approach, much like I argued for in my Juno PTL candidacy
email. The problems we face aren't purely technical -- we need to work
out how to pay down our technical debt without blocking all new
features. We also need to ask for understanding and patience from
those feature authors as we try and improve the foundation they are
building on.

The specifications process we used in Juno helped with these problems,
but one of the things we've learned from the experiment is that we
don't require specifications for all changes. Let's take an approach
where trivial changes (no API changes, only one review to implement)
don't require a specification. There will of course sometimes be
variations on that rule if we discover something, but it means that
many micro-features will be unblocked.

In terms of technical debt, I don't personally believe that pulling
all hypervisor drivers out of Nova fixes the problems we face, it just
moves the technical debt to a different repository. However, we
clearly need to discuss the way forward at the summit, and come up
with some sort of plan. If we do something like this, then I am not
sure that the hypervisor driver interface is the right place to do
that work -- I'd rather see something closer to the hypervisor itself
so that the Nova business logic stays with Nova.

Kilo is also the release where we need to get the v2.1 API work done
now that we finally have a shared vision for how to progress. It took
us a long time to get to a good shared vision there, so we need to
ensure that we see that work through to the end.

We live in interesting times, but they're also exciting as well.


I have since been elected unopposed, so thanks for that!

Tags for this post: openstack kilo compute ptl
Related posts: Juno Nova PTL Candidacy; Review priorities as we approach juno-3; Thoughts from the PTL; Havana Nova PTL elections; Expectations of core reviewers


Syndicated 2014-09-29 18:34:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

30 Sep 2014 Skud   » (Master)

Two frogs in a bowl of cream

A story I got from someone who says she got it from an older Dutch woman. I wouldn’t mention the Dutch woman thing except that this story just seems so Dutch to me. Anyway.

Two frogs fell into a bowl of cream. They swam and swam trying to get out, round and around in the cream, for hours.

Eventually one frog gave up, stopped swimming, and drowned.

The other frog kept swimming, refusing to give up. Finally the frog’s activity, splashing around in the cream, turned it to butter. It became solid in the bowl, and the frog was able to climb out.

The moral, I’m told, is that sometimes if you just keep kicking, things will magically solidify under you and you can step up out of the trouble and move on. Also, apparently I’m frog #2. Trust me when I say it’s exhausting.

Syndicated 2014-09-30 01:16:37 from Infotropism

29 Sep 2014 bagder   » (Master)

A day in the curl project

I maintain curl and lead the development there. This is how I spend my time on an ordinary day in the project. Maybe I don’t do all of these things every single day, but sometimes I do and sometimes I just do a subset of them. I just want to give you a look into what I do and why I don’t add new stuff more often or faster… I spend about one to three hours on the project every day. Let me also stress that curl is a tiny little project in comparison with many other open source projects. I’m certainly not saying otherwise.

the new bug

Someone submits a new bug in the bug tracker or on one of the mailing lists. Most initial bug reports lack sufficient details so the first thing I do is ask for more info and possibly ask the submitter to try a recent version, as very often we get bugs reported on very old versions. Many bug reports take several requests for more info before the necessary details have been provided. I don’t really start to investigate a problem until I feel I have a sufficient amount of details. We’re a very small core team that acts on other people’s bugs.

the question by a newbie in the project

A new person shows up with a question. The question is usually similar to a FAQ entry or an example but not exactly. It deserves a proper response. This kind of question can often be answered by anyone, but also most people involved in the project don’t feel the need or “familiarity” to respond to such questions and therefore remain quiet.

the old mail I haven’t responded to yet

I want every serious email that reaches the mailing lists to get a response, so all mails that neither I nor anyone else responds to I keep around in my inbox, and when I have some idle time to spare I go back and catch up on old mails. Some of them can then of course result in a new bug or patch or whatever. Occasionally I have to resort to simply saving away the old mail without responding in order to catch up, just to cut the list of outstanding things to do a little.

the TODO list for my own sake, things I’d like to get working on

There are always things I really want to see done in the project, and I work on them far too little really. But every once in a while I ignore everything else in my life for a couple of hours and spend them on adding a new feature or fixing something I’ve been missing. Actual development of new features is a very small fraction of all time I spend on this project.

the list of open bug reports

I regularly revisit this list to see what I can do to push the open ones forward. Follow-up questions, deep dives into source code and specifications or just the sad realization that a particular issue won’t be fixed within the nearest time (year?) so that I close it as “future” and add the problem to our KNOWN_BUGS document. I strive to keep the bug list clean and only keep relevant bugs open. Those issues that are not reproducible, are left without the proper attention from the reporter or otherwise stall will get closed. In general I feel quite lonely as responder in the bug tracker…

the mailing list threads that are sort of dying but I do want some progress or feedback on

In my primary email inbox I usually keep ongoing threads around. Lots of discussions just silently stop getting more posts and thus slowly wither away further up the list to become forgotten and ignored. With some interval I go back to see if the posters are still around, if there’s any more feedback or whatever in order to figure out how to proceed with the subject. Very often this makes me get nothing at all back and instead I just save away the entire conversation thread, forget about it and move on.

the blog post I want to do about a recent change or fix I did I’d like to highlight

I try to explain some changes to the world in blog posts. Not all changes but the ones that are somehow noteworthy as they perhaps change the way things have been or introduce new fun features perhaps not that easily spotted. Of course all features are always documented etc, but sometimes I feel I need to put some extra attention and focus on things in a more free-form style. Or I just write about meta stuff, like this very posting.

the reviewing and merging of patches

One of the most important tasks I have is to review patches. I’m basically the only person in the project who volunteers to review patches against any angle or corner of the project. When people have spent time and effort and gallantly send the results of their labor our way in the best possible format (a patch!), the submitter deserves a good review and proper feedback. Also, paving the road for more patches is one of the best way to scale the project. Helping newcomers become productive is important.

Patches are preferably posted on the mailing lists but there’s also some coming in via pull requests on github and while I strongly discourage that (due to them not getting the same attention and possible scrutiny on the list like the others) I sometimes let them through anyway just to be smooth.

When the patch looks good (or sometimes good enough and I just edit some minor detail), I merge it.

the non-disclosed discussions about a potential security problem

We’re a small project with a wide reach and security problems can potentially have grave impact on users. We take security seriously, and we very often have at least one non-public discussion going on about a problem in curl that may have security implications. We then often work on phrasing security advisories, working out exactly which versions are vulnerable, producing patches for at least the most recent ones of those affected versions and so on.

tame stackoverflow

stackoverflow.com has become almost like a wikipedia for source code and programming related issues (although it isn’t a wiki), and that site is one of the primary referrers to curl’s web site these days. I tend to glance over the curl and libcurl related questions and offer my answers at times. If nothing else, it is good to help keep the amount of disinformation at low levels.

I strongly disapprove of people filing bug reports on such places or even very detailed (lib)curl core questions that should’ve been asked on the curl-library list.

there are idle times too

Yeah. Not very often, but sometimes I actually just need a day off all this. Sometimes I just don’t find motivation or energy enough to dig into that terrible seldom-happening bug on a platform I’ve never seen personally. A project like this never ends. The same day we release a new release, we just reset our clocks and we’re back on improving curl, fixing bugs and cleaning up things for the next release. Forever and ever until the end of time.


Syndicated 2014-09-29 20:59:05 from daniel.haxx.se

29 Sep 2014 softkid   » (Journeyer)

Tips on organizing a pgp key signing party

Over the years I’ve organized or tried to organize pgp key signing parties every time I go somewhere. In the last year I’ve organized 3 that were successful (e.g. with more than 10 attendees).

1. Have a venue

I’ve tried a bunch of times to have people show up in the morning at the hotel I was staying at - that doesn’t work. Having catering at the venue is even better; it will encourage people to come from far away (or do a long-distance commute). Try to mark the path through the venue with signs (paper signs saying “PGP key signing party” with arrows help).

2. Date and time

Meeting in the evening after work works best (starting at 18:00 or 18:30).

Let people know how long it will take (count 1 hour per 30 participants).

3. Make people sign up

That makes people think twice before saying they will attend. It’s also an easy way for you to know how much beer/cola/etc. you’ll need to provide if you cater food.

I’ve been using eventbrite to manage attendance at my last three meetings; it lets me:

  • know who is coming
  • Mass mail participants
  • give them a calendar reminder

4. Reach out

For such a party you need people to attend so you need to reach out.

I always start with a search on biglumber.com to find the people using gpg who are registered on that site in the area I’m visiting (see below for what I send).

Then I look for local linux user groups / *BSD groups and send an announcement to them with:

  • date
  • venue
  • link to eventbrite and why I use it
  • ask them to forward (they know the area better than you)

I also use lanyrd and twitter, but I’m not convinced that they work.

For my last announcement it looked like this:

Subject: GnuPG / PGP key signing party September 26 2014
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello my name is ludovic,

I'm a sysadmins at mozilla working remote from europe. I've been
involved with Thunderbird a lot (and still am). I'm organizing a pgp Key
signing party in the Mozilla san francisco office on September the 26th
2014 from 6PM to 8PM.

For security and assurances reasons I need to count how many people will
attend. I'v setup a eventbrite for that at
https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-of-trust-stronger-tickets-12867542165
(please take one ticket if you think about attending - If you change you
mind cancel so more people can come).

I will use the eventbrite tool to send reminders and I will try to make
a list with keys and fingerprint before the event to make things more
manageable (but I don't promise).

for those using lanyrd you will be able to use http://lanyrd.com/ccckzw.

Ludovic
ps sent to buug.org,nblug.org end penlug.org - please feel free to post
where appropriate ( the more the meerier, the stronger the web of trust).

ps2 I have contacted people listed on biglumber to have more gpg related
people show up.

-- 
[:Usul] MOC Team at Mozilla
QA Lead fof Thunderbird
http://sietch-tabr.tumblr.com/ - http://weusepgp.info/

5. Make it easy to attend

As noted above, making a list of participants to hand out helps a lot (I’ve used http://www.phildev.net/pius/ and my own scripts to build one). It makes things easier for you and for attendees. Tell people what they need to bring (IDs, a pen, and printed fingerprints if you don’t provide a list).
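
A minimal sketch of how such a hand-out could be produced with plain gpg (the key IDs below are placeholders for whatever your sign-ups give you; pius and similar tools do this more cleverly):

  # fetch the attendees' keys (placeholder key IDs)
  gpg --keyserver pool.sks-keyservers.net --recv-keys 0x11111111 0x22222222
  # dump the fingerprints to a file, print it, hand out copies at the party
  gpg --fingerprint 0x11111111 0x22222222 > fingerprints.txt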

6. Send reminders

Send people reminders and let them know how many people intend to show up. It boosts attendance.

Syndicated 2014-09-29 11:03:47 from NaN

29 Sep 2014 amits   » (Journeyer)

KVM Forum 2014

The KVM Forums are a great way to learn and talk about the future of KVM virtualization. The KVM Forum has been co-located with the Linux Foundation’s LinuxCon events for the past several years, and this year too it will be held alongside LinuxCon EU in Düsseldorf, Germany.

The KVM Forums also are a great documentation resource on several features, and the slides and videos from the past KVM Forums are freely available online. This year’s Forum will be no different, and we’ll have all the material on the KVM wiki.

Syndicated 2014-09-29 07:39:55 from Think. Debate. Innovate.

28 Sep 2014 mako   » (Master)

Community Data Science Workshops Post-Mortem

Earlier this year, I helped plan and run the Community Data Science Workshops: a series of three (and a half) day-long workshops designed to help people learn basic programming and data science tools in order to ask and answer questions about online communities like Wikipedia and Twitter. You can read our initial announcement for more about the vision.

The workshops were organized by myself, Jonathan Morgan from the Wikimedia Foundation, long-time Software Carpentry teacher Tommy Guy, and a group of 15 volunteer “mentors” who taught project-based afternoon sessions and worked one-on-one with more than 50 participants. Interest was overwhelming, and we were ultimately constrained by the number of mentors who volunteered. Unfortunately, this meant that we had to turn away most of the people who applied. Although it was not emphasized in recruiting or used as a selection criterion, a majority of the participants were women.

The workshops were all free of charge and sponsored by the UW Department of Communication, who provided space, and the eScience Institute, who provided food.

The curriculum for all four sessions is online:

The workshops were designed for people with no previous programming experience. Although most of our participants were from the University of Washington, we had non-UW participants from as far away as Vancouver, BC.

Feedback we collected suggests that the sessions were a huge success, that participants learned enormously, and that the workshops filled a real need in the Seattle community. Between workshops, participants organized meet-ups to practice their programming skills.

Most excitingly, just as we based our curriculum for the first session on the Boston Python Workshop’s, others have been building off our curriculum. Elana Hashman, who was a mentor at the CDSW, is coordinating a set of Python Workshops for Beginners with a group at the University of Waterloo and with sponsorship from the Python Software Foundation using curriculum based on ours. I also know of two university classes that are tentatively being planned around the curriculum.

Because a growing number of groups have been contacting us about running their own events based on the CDSW — and because we are currently making plans to run another round of workshops in Seattle late this fall — I coordinated with a number of other mentors to go over participant feedback and to put together a long write-up of our reflections in the form of a post-mortem. Although our emphasis is on things we might do differently, we provide a broad range of information that might be useful to people running a CDSW (e.g., our budget). Please let me know if you are planning to run an event so we can coordinate going forward.

Syndicated 2014-09-28 05:02:19 (Updated 2014-09-28 05:23:19) from copyrighteous

27 Sep 2014 danstowell   » (Journeyer)

Carpenters Estate, Stratford - some background

"A group of local mothers are squatting next to London’s Olympic Park to tell the government we need social housing, not social cleansing" as featured in the Guardian and on Russell Brand's Youtube channel. The estate is Carpenters Estate, Stratford.

"Carpenters Estate," I thought to myself, "that rings a bell..."

It turns out Carpenters Estate is the one that UCL had proposed in 2011 to redevelop into a new university campus. The Greater Carpenters Neighbourhood "has been earmarked for redevelopment since 2010". "All proposals will take into account existing commitments made by the Council to those people affected by the re-housing programme." However, locals raised concerns, as did UCL's own Bartlett School (architecture/planning school) students and staff. (There's a full report here written by Bartlett students.) In mid-2013 negotiations broke down between Newham and UCL and the idea was ditched.

It seems that the council, the locals and others have been stuck in disagreement about the future of the estate for a while. At first the council promised to re-house people without breaking up the community too much, then it realised it didn't know how to do that, and eventually it came to the point where it's just gradually "decanting" people from the area and hoping that other things such as "affordable housing" (a shadow of a substitute for social housing) will mop things up. I can see how they got here and I can see how they can't find a good resolution of all this. But the Focus E15 mothers campaign makes a really good point, that irrespective of the high land prices (which probably mean Newham Council get offered some tempting offers), the one thing East London needs is social housing to prevent low-income groups and long-time locals from being forced out of London by gentrification.

The gentrification was already well underway before the London Olympic bid was won, but that had also added extremely predictable extra heat to the housing market around there. One part of the Olympic plan included plenty of "affordable housing" on the site afterwards - in August 2012, housing charity Shelter said it was good that "almost half" of the new homes built in the Athlete's Village would be "affordable housing". Oh but then they calculated that it wouldn't be that affordable after all, since the rules had been relaxed so the prices could go as high as 80% of market rate. (80% of bonkers is still crazy.)

Oh and it wasn't "almost half" (even though in the Olympic Bid they had said it would be 50% of 9,000 homes), by this point the target had been scaled back officially to about 40%. In November 2012 Boris Johnson insisted "that more than a third of the 7,000 new homes in the Olympic Park would be affordable". The Mayor said: 'There’s no point in doing this unless you can accommodate all income groups.'"

Oh but then in January 2014 Boris Johnson announced that they were changing their mind, and instead of 40% affordable housing, it's now going to be 30%. "Fewer homes will be built overall, and a smaller than promised percentage of those would be affordable." ("The dream of affordable housing is fading," said Nicky Gavron.) The new target contravenes the House of Lords Select Committee on Olympic Legacy report 2013-14 which said "It is important that a fair proportion, at least [...] 35%, of this housing is affordable for, and accessible to, local residents". Boris Johnson said it was a "price well worth paying" as a trade off for more economic activity. Strange assertion to make, since East London has bucketloads of economic activity and a crisis in social and affordable housing!

P.S. and guess why they decided not to build as many homes as they had planned? It's to make room for a cultural centre codenamed Olympicopolis. (Compare against this 2010 map of planned housing in the park.) Plans for this are led by... UCL! Hello again UCL, welcome back into the story. I love UCL as much as anyone - I worked there for years - but we need to fix the housing crisis a billion times more than we need to solve UCL's real-estate issues.

Syndicated 2014-09-27 09:04:18 (Updated 2014-09-28 07:47:12) from Dan Stowell

26 Sep 2014 Stevey   » (Master)

Next week I shall be mostly in Kraków

Next week my wife and I shall be mostly visiting Poland, and spending a week in Kraków.

It has been a while since I've had a non-Helsinki-based holiday, so I'm looking forward to the trip.

In other news I've been rationalising DNS entries and domain names recently, all being well this zone should be served by Amazon shortly, subject to the usual combination of TTLs and resolution-puns.

Syndicated 2014-09-26 17:20:04 from Steve Kemp's Blog

26 Sep 2014 mikal   » (Journeyer)

The Decline and Fall of IBM: End of an American Icon?




ISBN: 0990444422
LibraryThing
This book is quite readable, which surprised me given the relatively dry topic. Whilst obviously not everyone will agree with the author's thesis, it is clear that IBM hasn't been managed for long-term success in a long time and that there are a lot of very unhappy employees. The book is an interesting perspective on a complicated problem.

Tags for this post: book robert_cringely ibm corporate decline
Related posts: Phones; Your first computer?; Advertising inside the firewall; Corporate networks; Loyalty; Dead IBM DeveloperWorks



Syndicated 2014-09-26 00:39:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

26 Sep 2014 bagder   » (Master)

Changing networks with Firefox running

Short recap: I work on network code for Mozilla. Bug 939318 is one of “mine” – yesterday I landed a fix (a patch series with 6 individual patches) for it, and I wanted to explain what goodness should (might?) come from this!

diffstat

diffstat reports this on the complete patch series:

29 files changed, 920 insertions(+), 162 deletions(-)

The change set can be seen in mozilla-central here. But I guess a proper description is easier for most…

The bouncy road to inclusion

This feature set and the problems associated with it have been one of the most time-consuming things I’ve developed in recent years, at least in relation to the amount of actual code produced. I’ve had it “landed” in the mozilla-inbound tree five times and yanked out again (within a few hours) before it finally landed correctly, every time of course because I had bugs remaining in there. The bugs have been really tricky, with a whole bunch of timing-dependent and race-like problems, and me being unfamiliar with a large part of the code base that I’m working on. It has been a highly frustrating journey at times, but I’d like to think that I’ve learned a lot about Firefox internals partly thanks to this resistance.

As I write this, it has not even been 24 hours since it got into m-c so there’s of course still a risk there’s an ugly bug or two left, but then I also hope to fix the pending problems without having to revert and re-apply the whole series…

Many ways to connect to networks

In many network setups today, you get an environment and a network “experience” that is crafted for that particular place. For example you may connect to your work over a VPN where you get your company DNS and you can access sites and services you can’t even see when you connect from the wifi in your favorite coffee shop. The same thing goes for when you connect to that captive portal over wifi until you realize you used the wrong SSID and you switch over to the access point you were supposed to use.

For every one of these setups, you get different DHCP setups passed down and you get a new DNS server and so on.

These days laptop lids are getting closed (and the machine is put to sleep) at one place to be opened at a completely different location and rarely is the machine rebooted or the browser shut down.

Switching between networks

Switching from one of the networks to the next is of course something your operating system handles gracefully. You can even easily be connected to multiple ones simultaneously like if you have both an Ethernet card and wifi.

Enter browsers. Or in this case let’s be specific and talk about Firefox, since this is what I work with and on. Firefox – like other browsers – will cache images, it will cache DNS responses, it maintains connections to sites a while even after use, it connects to some sites even before you “go there” and so on. All in the name of giving users as good and as fast an experience as possible.

The combination of keeping things cached and alive, together with the fact that switching networks brings new perspectives and new “truths” offers challenges.

Realizing the situation is new

The changes are not at all mind-bending but are basically these three parts:

  1. Make sure that we detect network changes, even if just the set of available interfaces changes. Send an event for this.
  2. Make sure the necessary parts of the code listen for and understand this “network topology changed” event and act on it accordingly
  3. Consider coming back from “sleep” to be a network changed event since we just cannot be sure of the network situation anymore.

The initial work has been done for Windows only, but it allows us to smooth out any rough edges before we continue and add support for more platforms.

The network changed event can be disabled by switching off the new “network.notify.changed” preference. If you do end up feeling a need for that, I really hope you file a bug explaining the details so that we can work on fixing it!

Act accordingly

So what is acting properly? What if the network changes in a way so that your active connections suddenly can’t be used anymore due to the new rules and routing and what not? We attack this problem like this: once we get a “network changed” event, we “allow” connections to prove that they are still alive and if not they’re torn down and re-setup when the user tries to reload or whatever. For plain old HTTP(S) this means just seeing if traffic arrives or can be sent off within N seconds, and for websockets, SPDY and HTTP2 connections it involves sending an actual ping frame and checking for a response.

The internal DNS cache was a bit tricky to handle. I initially just flushed all entries, but that turned out nasty as I then also killed ongoing name resolves, which caused errors to get returned. Now I instead added logic that flushes all the already-resolved names and makes names “in transit” get resolved again, so that they are resolved on the (potentially) new network, which may then return different addresses for the same host name(s).

This should drastically reduce the chance of the situation that could happen before, when Firefox would basically just freeze and refuse to do any requests until you closed and restarted it. (Or waited long enough for other timeouts to trigger.)

The ‘N seconds’ waiting period above is actually 5 seconds by default and there’s a new preference called “network.http.network-changed.timeout” that can be altered at will to allow some experimentation regarding what the perfect interval truly is for you.
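
Both preferences can be tweaked from about:config or a user.js file; a minimal sketch (the pref names come from this post, the values are just examples, and the timeout is presumably in seconds given the 5 second default mentioned above):

  // user.js sketch - example values only
  user_pref("network.notify.changed", false);            // opt out of the new change events entirely
  user_pref("network.http.network-changed.timeout", 10); // wait 10 instead of the default 5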

Initially on Windows only

My initial work has been limited to getting the changed event code done for the Windows back-end only (since the code that figures out whether the network setup has changed is highly system specific), and now that this step has been taken the plan is to introduce the same back-end logic to the other platforms. The code that acts on the event is pretty much generic and is mostly in place already, so it is now a matter of making sure the event can be generated everywhere.

My plan is to start on Firefox OS and then see if I can assist with the same thing in Firefox on Android. Then finally Linux and Mac.

I started on Windows since Windows is one of the platforms with the largest amount of Firefox users and thus one of the most prioritized ones.

More to do

There’s separate work going on for properly detecting captive portals. You know, the annoying things hotels and airports, for example, tend to have that force you to do some login dance before you are allowed to use the internet at that location. When such a captive portal is opened up, that should probably qualify as a network change – but it isn’t yet.

Syndicated 2014-09-26 06:24:39 from daniel.haxx.se

25 Sep 2014 dyork   » (Master)

Tracking The Shellshock BASH Vulnerability – News, Tools and Links

With all the attention today on the Shellshock vulnerability, I need a place to keep track of it for my own purposes. If this page or list helps anyone else, that’s great, but this is primarily a tool for me to capture what’s going on. I intend to update it regularly while this is all happening. Suggestions are of course welcome in comments.

Note that I have links here to discussion threads on Hacker News. The comment threads are often full of incredibly useful information.

Security Advisories

Testing Tools
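
As a quick illustration, the widely circulated one-liner for checking a local bash against CVE-2014-6271 looks like this (a sketch, not one of the linked tools; a patched bash prints only the test line):

  # prints "vulnerable" on an unpatched bash
  env x='() { :;}; echo vulnerable' bash -c "echo this is a test"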

News about actual exploits

News about the Shellshock vulnerability in general

Syndicated 2014-09-25 21:21:59 from Code.DanYork.Com

25 Sep 2014 Stevey   » (Master)

Today I mostly removed python

Much has already been written about the recent bash security problem, which was allocated the CVE identifier CVE-2014-6271, so I'm not even going to touch it.

It did remind me to double-check my systems to make sure that I didn't have any packages installed that I didn't need though, because obviously having fewer packages installed and fewer services running reduces the potential attack surface.

I had noticed in the past I had python installed and just thought "Oh, yeah, I must have python utilities running". It turns out though that on 16 out of 19 servers I control I had python installed solely for the lsb_release script!
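
A quick way to check that sort of thing (just a sketch of the kind of query involved, assuming aptitude is installed):

  ~ # dpkg -S $(which lsb_release)   # shows the script belongs to the lsb-release package
  ~ # aptitude why python            # shows the dependency chain pulling python in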

So I hacked up a horrible replacement for lsb_release in pure shell, and then became cruel:

~ # dpkg --purge python python-minimal python2.7 python2.7-minimal lsb-release

That horrible replacement is horrible because it defers detection of all the names/numbers to /etc/os-release, which wasn't present in earlier versions of Debian. Happily all my Debian GNU/Linux hosts run Wheezy or later, so it all works out.
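
For the record, a minimal sketch of what such a pure-shell stand-in could look like (not the actual script; it assumes /etc/os-release exists and only covers a few of the flags):

  #!/bin/sh
  # tiny lsb_release stand-in that just reads /etc/os-release
  . /etc/os-release
  case "$1" in
      -i|--id)          echo "Distributor ID: ${NAME}" ;;
      -r|--release)     echo "Release:        ${VERSION_ID}" ;;
      -d|--description) echo "Description:    ${PRETTY_NAME}" ;;
      *)                echo "usage: $0 -i|-r|-d" >&2; exit 1 ;;
  esac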

So that left three hosts that had a legitimate use for Python:

  • My mail-host runs offlineimap
    • So I purged it.
    • I replaced it with isync.
  • My host-machine runs KVM guests, via qemu-kvm.
    • qemu-kvm depends on Python solely for the script /usr/bin/kvm_stat.
    • I'm not pleased about that but will tolerate it for now.
  • The final host was my ex-mercurial host.
    • Since I've switched to git I just removed that package.

So now 1/19 hosts has Python installed. I'm not averse to the language, but given that I don't personally develop in it very often (read "once or twice in the past year") and that by accident I had no Python scripts installed, I see no reason to keep it around on the off-chance.

My biggest surprise of the day was that even now that we can use dash as our default shell, we still can't purge bash, since it is marked as Essential. Perhaps in the future.
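
The flag is easy to confirm:

  ~ # dpkg -s bash | grep ^Essential
  Essential: yes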

Syndicated 2014-09-25 19:11:19 from Steve Kemp's Blog

25 Sep 2014 crhodes   » (Master)

code walking for pipe sequencing

Since it seems still topical to talk about Lisp and code-transformation macros, here’s another worked example – this time inspired by the enthusiasm for the R magrittr package.

The basic idea behind the magrittr package is, as Hadley said at EARL2014, to convert from a form of code where arguments to the same function are far apart to one where they’re essentially close together; the example he presented was converting

  arrange(
    summarise(
      group_by(
        filter(babynames, name == "Hadley"),
        year),
      total = sum(n)),
    desc(year))

to

  b0 <- babynames
b1 <- filter(b0, name == "Hadley")
b2 <- group_by(b1, year)
b3 <- summarise(b2, total = sum(n))
b4 <- arrange(b3, desc(year))

only without the danger of mistyping one of the variable names along the way and failing to perform the computation that was intended.

R, as I have said before, is a Lisp-1 with weird syntax and wacky evaluation semantics. One of the things that ordinary user code can do is inspect the syntactic form of its arguments, before evaluating them. This means that when looking at a fragment of code such as

  foo(bar(2,3), 4)

where a call-by-value language would first evaluate bar(2,3), then call foo with two arguments (the value resulting from the evaluation, and 4), R instead uses a form of call-by-need evaluation, and also provides operators for inspecting the promise directly. This means R users can do such horrible things as

  foo <- function(x) {
    tmp <- substitute(x)
    sgn <- 1
    while(class(tmp) == "(") {
        tmp <- tmp[[2]]
        sgn <- sgn * -1
    }
    sgn * eval.parent(tmp)
}
foo(3) # 3
foo((3)) # -3
foo(((3))) # 3
foo((((3)))) # -3 (isn’t this awesome?  I did say “wacky”)

In the case of magrittr, the package authors have taken advantage of this to invent some new syntax; the pipe operator %>% is charged with inserting its first argument (its left-hand side, in normal operation) as the first argument to the call of its second argument (right-hand side). Hadley’s example is

  babynames %>%
  filter(name == "Hadley") %>%
  group_by(year) %>%
  summarise(total = sum(n)) %>%
  arrange(desc(year))

and this is effective because the data flow in this case really is a pipeline: there's a dataset, which needs filtering, then grouping, then summarization, then sorting, and each operation works on the result of the previous. This already needs to inspect the syntactic form of the argument; an additional feature is recognizing the presence of .s in the call, and placing the left-hand side value in that argument position instead of as the first argument if it is present.

In Common Lisp, there are some piping or chaining operators out there (e.g. one two three (search for ablock) four and probably many others), and they do well enough. However! They mostly suffer from similar problems that we’ve seen before: doing code transformations with not quite enough understanding of the semantics of the code that they’re transforming; again, that’s fine for normal use, but for didactic purposes let’s pretend that we really care about this.

The -> macro from http://stackoverflow.com/a/11080068 is basically the same as the magrittr %>% operator: it converts symbols in the pipeline to function calls, and places the result of the previous evaluation as the first argument of the current operator, except if a $ is present in the arguments, in which case it replaces that. (This version doesn’t support more than one $ in the argument list; it would be a little bit of a pain to support that, needing a temporary name, but it’s straightforward in principle).

Since the -> macro does its job, a code-walker implementation isn’t strictly necessary: pure syntactic manipulation is good enough, and if it’s used with just the code it expects, it will do it well. It is of course possible to express what it does using a code-walker; we’ll fix the multiple-$ ‘bug’ along the way, by explicitly introducing bindings rather than replacements of symbols:

  (defmacro -> (form &body body)
  (labels ((find-$ (form env)
             (sb-walker:walk-form form env
              (lambda (f c e)
                (cond
                  ((eql f '$) (return-from find-$ t))
                  ((eql f form) f)
                  (t (values f t)))))
             nil)
           (walker (form context env)
             (cond
               ((symbolp form) (list form))
               ((atom form) form)
               (t (if (find-$ form env)
                      (values `(setq $ ,form) t)
                      (values `(setq $ ,(list* (car form) '$ (cdr form))) t))))))
    `(let (($ ,form))
       ,@(mapcar (lambda (f) (sb-walker:walk-form f nil #'walker)) body))))

How to understand this implementation? Well, clearly, we need to understand what sb-walker:walk-form does. Broadly, it calls the walker function (its third argument) on successive evaluated subforms of the original form (and on variable names set by setq); the primary return value is used as the interim result of the walk, subject to further walking (macroexpansion and walking of its subforms) except if the second return value from the walker function is t.

Now, let’s start with the find-$ local function: its job is to walk a form, and returns t if it finds a $ variable to be evaluated at toplevel and nil otherwise. It does that by returning t if the form it’s given is $; otherwise, if the form it’s given is the original form, we need to walk its subforms, so return f; otherwise, return its form argument f with a secondary value of t to inhibit further walking. This operation is slightly at odds with the use of a code walker: we are explicitly not taking advantage of the fact that it understands the semantics of the code it’s walking. This might explain why the find-$ function itself looks a bit weird.

The walker local function is responsible for most of the code transformation. It binds $ to the value of the first form, then repeatedly sets $ to the value of successive forms, rewritten to interpolate a $ in the first argument position if there isn’t one in the form already (as reported by find-$). If any of the forms is a symbol, it gets listified and subsequently re-walked. Thus

  (macroexpand-1 '(-> "THREE" string-downcase (char 0)))
; => (LET (($ "THREE"))
;      (SETQ $ (STRING-DOWNCASE $))
;      (SETQ $ (CHAR $ 0))),
;    T

So far, so good. Now, what could we do with a code-walker that we can’t without? Well, the above implementation of -> supports chaining simple function calls, so one answer is “chaining things that aren’t just function calls”. Another refinement is to support eliding the insertion of $ when there are any uses of $ in the form, not just as a bare argument. Looking at the second one first, since it’s less controversial:

  (defmacro -> (form &body body)
  (labels ((find-$ (form env)
             (sb-walker:walk-form form env
              (lambda (f c e)
                (cond
                  ((and (eql f '$) (eql c :eval))
                   (return-from find-$ t))
                  (t f))))
             nil)
           (walker (form context env)
             (cond
               ((symbolp form) (list form))
               ((atom form) form)
               (t (if (find-$ form env)
                      (values `(setq $ ,form) t)
                      (values `(setq $ ,(list* (car form) '$ (cdr form))) t))))))
    `(let (($ ,form))
       ,@(mapcar (lambda (f) (sb-walker:walk-form f nil #'walker)) body))))

The only thing that’s changed here is the definition of find-$, and in fact it’s a little simpler: the task is now to walk the entire form and find uses of $ in an evaluated position, no matter how deep in the evaluation. Because this is a code-walker, this will correctly handle macros, backquotes, quoted symbols, and so on, and this allows code of the form

  (macroexpand-1 '(-> "THREE" string-downcase (char 0) char-code (complex (1+ $) (1- $))))
; => (LET (($ "THREE"))
;      (SETQ $ (STRING-DOWNCASE $))
;      (SETQ $ (CHAR-CODE $))
;      (SETQ $ (COMPLEX (1+ $) (1- $)))),
;    T

which, as far as I can tell, is not supported in magrittr: doing 3 %>% complex(.+1,.-1) is met with the error that “object '.' not found”. Supporting this might, of course, not be a good idea, but at least the code walker shows that it’s possible.

What if we wanted to augment -> to handle binding forms, or special forms in general? This is probably beyond the call of duty, but let’s just briefly imagine that we wanted to be able to support binding special variables around the individual calls in the chain; for example, we want

  (-> 3 (let ((*random-state* (make-random-state))) rnorm) mean)

to expand to

  (let (($ 3))
  (setq $ (let ((*random-state* (make-random-state))) (rnorm $)))
  (setq $ (mean $)))

and let us also say, to make it interesting, that uses of $ in the bindings clauses of the let should not count against inhibiting the insertion of $ in the first argument position of the first form in the body of the let, so

  (-> 3 (let ((y (1+ $))) (atan y)))

should expand to

  (let (($ 3)) (setq $ (let ((y (1+ $))) (atan $ y))))

So our code walker needs to walk the bindings of the let, merely collecting information into the walker’s lexical environment, then walk the body performing the same rewrite as before. CHALLENGE ACCEPTED:

  (defmacro -> (&body forms)
  (let ((rewrite t))
    (declare (special rewrite))
    (labels ((find-$ (form env)
               (sb-walker:walk-form form env
                (lambda (f c e)
                  (cond
                    ((and (eql f '$) (eql c :eval))
                     (return-from find-$ t))
                    (t f))))
               nil)
             (walker (form context env)
               (declare (ignore context))
               (typecase form
                 (symbol (if rewrite (list form) form))
                 (atom form)
                 ((cons (member with-rewriting without-rewriting))
                  (let ((rewrite (eql (car form) 'with-rewriting)))
                    (declare (special rewrite))
                    (values (sb-walker:walk-form (cadr form) env #'walker) t)))
                 ((cons (member let let*))
                  (unless rewrite
                    (return-from walker form))
                  (let* ((body (member 'declare (cddr form)
                                       :key (lambda (x) (when (consp x) (car x))) :test-not #'eql))
                         (declares (ldiff (cddr form) body))
                         (rewritten (sb-walker:walk-form
                                     `(without-rewriting
                                          (,(car form) ,(cadr form)
                                            ,@declares
                                            (with-rewriting
                                                ,@body)))
                                     env #'walker)))
                    (values rewritten t)))
                 (t
                  (unless rewrite
                    (return-from walker form))
                  (if (find-$ form env)
                      (values `(setq $ ,form) t)
                      (values `(setq $ ,(list* (car form) '$ (cdr form))) t))))))
      `(let (($ ,(car forms)))
         ,@(mapcar (lambda (f) (sb-walker:walk-form f nil #'walker)) (cdr forms))))))

Here, find-$ is unchanged from the previous version; all the new functionality is in walker. How does it work? The default branch of the walker function is also unchanged; what has changed is handling of let and let* forms. The main trick is to communicate information between successive calls to the walker function, and turn the rewriting on and off appropriately: we wrap parts of the form in new pseudo-special operators with-rewriting and without-rewriting, which is basically a tacky and restricted implementation of compiler-let – if we needed to, we could do a proper one with macrolet. Within the scope of a without-rewriting, walker doesn’t do anything special, but merely return the form it was given, except if the form it’s given is a with-rewriting form. This is a nice illustration, incidentally, of the idea that lexical scope in the code translates nicely to dynamic scope in the compiler; I can’t remember where I read that first (but it’s certainly not a new idea).

And now

  (macroexpand '(-> 3 (let ((*random-state* (make-random-state))) rnorm) mean))
; => (LET (($ 3))
;      (LET ((*RANDOM-STATE* (MAKE-RANDOM-STATE)))
;        (SETQ $ (RNORM $)))
;      (SETQ $ (MEAN $))),
;    T
(macroexpand '(-> 3 (let ((y (1+ $))) (atan y))))
; => (LET (($ 3))
;      (LET ((Y (1+ $)))
;        (SETQ $ (ATAN $ Y)))),
;    T

Just to be clear: this post isn’t advocating a smarter pipe operator; I don’t have a clear enough view, but I doubt that the benefits of the smartness outweigh the complexity. It is demonstrating what can be done, in a reasonably controlled way, using a code-walker: ascribing semantics to fragments of Common Lisp code, and combining those fragments in a particular way, and of course it’s another example of sb-walker:walk in use.

Finally, if something like this does in fact get used, people sometimes get tripped up by the package system: the special bits of syntax are symbols, and importing or package-qualifying -> without doing the corresponding thing to $ would lead to cryptic errors, wrong results and/or confusion. One possibility to handle that is to invent a bit more reader syntax:

  (set-macro-character #\¦
 (defun pipe-reader (stream char)
   (let ((*readtable* (copy-readtable)))
     (set-macro-character #\·
      (lambda (stream char)
        (declare (ignore stream char))
        '$) t)
   (cons '-> (read-delimited-list char stream t)))) nil)
¦"THREE" string-downcase (find-if #'alpha-char-p ·) char-code¦

If this is the exported syntax, it has the advantage that the interface can only be misused intentionally: the actual macro and its anaphoric symbol are both hidden from the programmer; and the syntax is reasonably easy to type – on my keyboard ¦ is AltGr+| and · is AltGr+. – and moderately mnemonic from shell pipes and function notation respectively. It also has all the usual disadvantages of reader-based interfaces, such as poor composability, somewhat mitigated if pipe-reader is part of the macro’s exported interface.

Syndicated 2014-09-25 14:01:45 from notes

24 Sep 2014 yosch   » (Master)

FontLab VI demo at AtypI2014 Barcelona: new drawing features, smarter workflows and better interop with native UFO support and fontgate cross-platform library

During AtypI2014 in Barcelona, Thomas Phinney invited some participants to a special evening presenting the upcoming FontLab VI, based on the Victoria re-write and re-architecting that has been in the works for a few years. (BTW, if you missed it, there is a public video recording from part of a similar talk/demo at AtypI2013 Amsterdam.)

These are the notes I jotted down during the demo evening:

drawing-related features:
  • on-canvas editing of multiple glyphs at the same time
  • smart multi-selection of BCPs
  • drag'n'drop and rich copy'n'paste directly on the canvas
  • dedicated sketchboard to emulate paper-sketching
  • import bitmaps assets and trace directly on canvas
  • smart zooming, scrolling and infinite canvas
  • lasso selection
  • smart guidelines and snapping
  • sliding bézier points (G2 continuity)
  • special selection to move two BCPs at the same time with automatic harmonizing (Tuni line)
  • eraser for more natural point simplification directly on canvas
  • linked clones, with each change propagating independently
  • smart anchors expressed using fractional coordinates with keywords and autosuggested formulas for transforms and boolean operations
  • glue tool to copy only a portion of an outline and a few BCPs
  • in-place measuring tool
  • preview waterfall panel


workflow-related features:
  • context-sensitive side panels to declutter the interface (TAB key hides them quickly)
  • font comparison tool with multiple layers
  • zip file containing assets and font sources can be imported directly
  • easier navigation of character groups and unicode blocks
  • bookmark and history panel as you navigate into your existing and desired blocks
  • in-place OpenType feature editing with an advanced source code widget
  • support for multiple monitors
  • exporting your workspace to PDF and SVG
  • Harfbuzz integration for high-end realistic rendering of OpenType features
  • ClearType integration for realistic rendering (no need to export to Windows for testing)
  • git integration with commands in a dedicated menu with the goal of enabling better tracking of changes with visual diffing


interoperability-related features:
  • native support of UFO2 and UFO3, both for import and export
  • improved python APIs, compatible with robofab
  • full exposing of the APIs via QT UI designer


Soooo, plenty of great new features both in the UI, around the new workflows and in the internal engine but still no release schedule. The private beta program has yet to start. They kept talking of a codebase in alpha stage. Maybe a public beta program will happen as well...

Being made with Qt, the codebase is now much easier to port across platforms. FontLab is being developed on OSX and tested there primarily, but the codebase for the Windows version is only 3% different. The main developer said that a Linux version is doable but there is no definite plan or decision made in that area yet. Until other more open editors catch up, FontLab is still the (albeit proprietary) industry heavyweight. Many people are looking forward to the new features... if they haven't switched yet, that is.

Glyphs is the editor most people start with nowadays – including at the MATD in Reading – and it's getting glowing reviews and wider support from various parts of the typeface design community. FontLab should be seeing the glyphs on the wall (!) and hurrying up the release. The announcements will probably appear on the forum and the blog.

I've been promised a Debian/Ubuntu version of fontgate for testing server-side interop between font formats, as well as testing and generation with Python bindings. This would be fantastic for bridging FontLab with newer, more collaborative workflows and other tools in the OFDK, and would increase the value of FontLab's UI features. Wait and see...

24 Sep 2014 etbe   » (Master)

Cheap 3G Data in Australia

The Request

I was asked for advice about cheap 3G data plans. One of the people who asked me has a friend with no home Internet access, the friend wants access but doesn’t want to pay too much. I don’t know whether the person in question can’t use ADSL/Cable (maybe they are about to move house) or whether they just don’t want to pay for it.

3G data in urban areas in Australia is fast enough for most Internet use. But it’s not good for online games or VoIP. It’s also not very useful for Youtube and other online video. There is a variety of 3G speed testing apps for Android phones and there are presumably similar apps for the iPhone. Before signing up for 3G at home it’s probably best to get a friend who’s on the network in question to test Internet speed at your house; it would be annoying to sign up for an annual contract and then discover that your home is in a 3G dead spot.

Cheapest Offers

The best offer at the moment for moderate data use seems to be Amaysim with 10G for $99.90 and an expiry time of 365 days [1]. 10G in a year isn’t a lot, but it’s pre-paid so the user can buy another 10G of data whenever they want. At the moment $10 for 1G of data in a month and $20 for 2G of data in a month seem to be common offerings for 3G data in Australia. If you use exactly 1G per month then Amaysim isn’t any better than a number of other telcos, but if your usage varies (as it does with most people) then spreading the data use over several months offers significant savings without the need to save big downloads for the last day of the month.

For more serious Internet use Virgin has pre-paid offerings of 6G for $30 and 12G for $40, which have to be used within a month [2]. Anyone who uses an average of more than 3G per month will get better value from the Virgin offers.

If anyone knows of cheaper options than Amaysim and Virgin then please let me know.

Better Coverage

Both Amaysim and Virgin use the Optus network, which covers urban areas quite well. I used Virgin a few years ago (and presume that it has only improved since then) and my wife uses Amaysim now. I haven’t had any great problems with either telco. If you need better coverage than the Optus network provides then Telstra is the only option. Telstra have a number of prepaid offers; the most interesting is $100 for 10G of data that expires in 90 days [3].

That Telstra offer is the same price as the Amaysim offer and only slightly more expensive than Virgin if you average 3.3G per month. It’s a really good deal if you average 3.3G per month as you can expect it to be faster and have better coverage.
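
As a rough sketch of the per-gigabyte comparison (prices from the offers above, ignoring expiry):

  awk 'BEGIN {
      printf "Amaysim 10G / 365 days: $%.2f per GB\n", 99.90 / 10
      printf "Virgin  12G /  30 days: $%.2f per GB\n",  40.00 / 12
      printf "Telstra 10G /  90 days: $%.2f per GB\n", 100.00 / 10
  }'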

Which One to Choose?

I think that the best option for someone who is initially connecting their home via 3G is to start with Amaysim. Amaysim is the cheapest for small usage and they have an Android app and web page for tracking usage. After using a few gig of data on Amaysim it should be possible to determine which plan is going to be most economical in the long term.

Connecting to the Internet

To get the best speed you need a 4G AKA LTE connection. But given that 3G is already fast enough to use expensive amounts of data, it doesn’t seem necessary to me. I’ve done a lot of work over the Internet with 3G from Virgin, Kogan, Aldi, and Telechoice and haven’t felt a need to pay for anything faster.

I think that the best thing to do is to use an old phone running Android 2.3 or iOS 4.3 as a Wifi access point. The cost of a dedicated 3G Wifi AP is enough to significantly change the economics of such Internet access and most people have access to old smart phones.

Related posts:

  1. Changing Phone Prices in Australia 18 months ago when I signed up with Virgin Mobile...
  2. Cheap Net Access in Australia The cheapest ADSL or Cable net access in Australia seems...
  3. Aldi Changes, Cheap Telcos, and Estimating Costs I’ve been using Aldi as my mobile phone provider for...

Syndicated 2014-09-24 07:06:05 from etbe - Russell Coker

24 Sep 2014 mjg59   » (Master)

My free software will respect users or it will be bullshit

I had dinner with a friend this evening and ended up discussing the FSF's four freedoms. The fundamental premise of the discussion was that the freedoms guaranteed by free software are largely academic unless you fall into one of two categories - someone who is sufficiently skilled in the arts of software development to examine and modify software to meet their own needs, or someone who is sufficiently privileged[1] to be able to encourage developers to modify the software to meet their needs.

The problem is that most people don't fall into either of these categories, and so the benefits of free software are often largely theoretical to them. Concentrating on philosophical freedoms without considering whether these freedoms provide meaningful benefits to most users risks these freedoms being perceived as abstract ideals, divorced from the real world - nice to have, but fundamentally not important. How can we tie these freedoms to issues that affect users on a daily basis?

In the past the answer would probably have been along the lines of "Free software inherently respects users", but reality has pretty clearly disproven that. Unity is free software that is fundamentally designed to tie the user into services that provide financial benefit to Canonical, with user privacy as a secondary concern. Despite Android largely being free software, many users are left with phones that no longer receive security updates[2]. Textsecure is free software but the author requests that builds not be uploaded to third party app stores because there's no meaningful way for users to verify that the code has not been modified - and there's a direct incentive for hostile actors to modify the software in order to circumvent the security of messages sent via it.

We're left in an awkward situation. Free software is fundamental to providing user privacy. The ability for third parties to continue providing security updates is vital for ensuring user safety. But in the real world, we are failing to make this argument - the freedoms we provide are largely theoretical for most users. The nominal security and privacy benefits we provide frequently don't make it to the real world. If users do wish to take advantage of the four freedoms, they frequently do so at a potential cost of security and privacy. Our focus on the four freedoms may be coming at a cost to the pragmatic freedoms that our users desire - the freedom to be free of surveillance (be that government or corporate), the freedom to receive security updates without having to purchase new hardware on a regular basis, the freedom to choose to run free software without having to give up basic safety features.

That's why projects like the GNOME safety and privacy team are so important. This is an example of tying the four freedoms to real-world user benefits, demonstrating that free software can be written and managed in such a way that it actually makes life better for the average user. Designing code so that users are fundamentally in control of any privacy tradeoffs they make is critical to empowering users to make informed decisions. Committing to meaningful audits of all network transmissions to ensure they don't leak personal data is vital in demonstrating that developers fundamentally respect the rights of those users. Working on designing security measures that make it difficult for a user to be tricked into handing over access to private data is going to be a necessary precaution against hostile actors, and getting it wrong is going to ruin lives.

The four freedoms are only meaningful if they result in real-world benefits to the entire population, not a privileged minority. If your approach to releasing free software is merely to ensure that it has an approved license and throw it over the wall, you're doing it wrong. We need to design software from the ground up in such a way that those freedoms provide immediate and real benefits to our users. Anything else is a failure.

(title courtesy of My Feminism will be Intersectional or it will be Bullshit by Flavia Dzodan. While I'm less angry, I'm solidly convinced that free software that does nothing to respect or empower users is an absolute waste of time)

[1] Either in the sense of having enough money that you can simply pay, having enough background in the field that you can file meaningful bug reports or having enough followers on Twitter that simply complaining about something results in people fixing it for you

[2] The free software nature of Android often makes it possible for users to receive security updates from a third party, but this is not always the case. Free software makes this kind of support more likely, but it is in no way guaranteed.


Syndicated 2014-09-24 06:59:09 from Matthew Garrett

24 Sep 2014 dmarti   » (Master)

Treasuring clicks, trashing content

Matt Harty from Experian writes, Marketers Buy Clicks But Don’t Understand What They Get. More:

Clicks usually do not bring any other information with them. When the click hits the marketer’s site, the ability to value the differences (and related potential ROI) between these visitors is minimal.

Harty's proposed solution, not surprisingly, is to add another layer of Big Data intermediaries, to sell information about the users behind those clicks. This one will fix it for sure, right? But does online advertising have to be just a matter of piling up more and more layers of companies selling expensive math and sneakily-acquired PII?

If only there were something that you could attach an ad to, some work that people who were interested in a certain topic would naturally see as valuable and want to spend time with. Something that would make an ad pay its own way, by sending the message, as Kevin Simler put it: “Here an ad conveys valuable information simply by existing.”

Yes, paying for something valuable to run the ad on would cost money, but that's part of how advertising really works. Advertising done right pays its way by carrying a signal to prospective buyers, one that they have an incentive to receive and process, not block. Simler also points out a kind of meta-signaling, or "cultural imprinting." When a brand establishes itself, it helps its customers send their own signals.

[B]rands carve out a relatively narrow slice of brand-identity space and occupy it for decades. And the cultural imprinting model explains why. Brands need to be relatively stable and put on a consistent "face" because they're used by consumers to send social messages, and if the brand makes too many different associations, (1) it dilutes the message that any one person might want to send, and (2) it makes people uncomfortable about associating themselves with a brand that jumps all over the place, firing different brand messages like a loose cannon.

Advertising isn't just a game of spam vs. spam filter, popup vs. popup blocker, and cookie vs. Privacy Badger. There's more to it than that, or there can be.

Meanwhile, Bob Hoffman writes,

Content is everything, and it's nothing. It's an artificial word thrown around by people who know nothing, describing nothing.

Good point. The audience's perception of how much it cost to place an ad is the way that the ad acquires its signaling power. The ad-supported resource, whether it's a TV show, an article with photos, or a story, amplifies the ad by its quality and apparent cost.

A famous byline on a magazine cover increases the magazine's reputation, which increases the signaling power of the ads inside, which makes ad space more valuable. Get a reputation for paying well, get more money from advertisers, and so on. Do it right and the more you pay people, the more advertisers pay you, the more you can pay people. (This is the positive feedback loop that pro sports is in. And not only is the sports audience not the product being sold, the audience is paying to be advertised to.)

Signaling through quality editorial product is the opportunity that online advertising is throwing away by programmatically buying ad units attached to crappy, infringing, or outright fraudulent "content". Somehow, people have gotten the idea that math matters, user data matters, but "content" doesn't.

What's the alternative? Some ideas at What can brands do now? and Solutions.

Bonus links

Malvertising Campaign Employs the Nuclear Option on Zedo A malicious Javascript file, unintentionally served last week by the Zedo advertising network, redirected victims to the Nuclear exploit kit which (under the right circumstances) delivered a punishing series of infections onto PCs.

Einbinder Flypaper, The brand you've gradually grown to trust over the course of three generations.

Syndicated 2014-09-24 03:27:52 from Don Marti

24 Sep 2014 robertc   » (Master)

what-poles-for-the-tent

So Monty and Sean have recently blogged about the structures (1, 2) they think may work better for OpenStack. I like the thrust of their thinking but had some mumblings of my own to add.

Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.

And I very much like the getting away from TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.

However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.

That’s the only explanation I can come up with for needing a Ring 0 – because it’s still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).

One might say that we must pick winners for the very core just by its nature, but I don’t think that’s true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we’ll have a larger % of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available and working a particular way at [most] OpenStack clouds.

Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users; and an API for that which they have everywhere, and the matching implementation we’ve tested.

To pull from Monty’s post:

“What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)

What does Nova need to count on existing so that it can provide that?”

He then goes on to list a bunch of things, but most of them are not needed for that:

We need Nova (its the only compute API in the project today). We don’t need keystone (Nova can run in noauth mode and deployers could just have e.g. Apache auth on top). We don’t need Neutron (Nova can do that itself). We don’t need cinder (use local volumes). We need Glance. We don’t need Designate. We don’t need a tonne of stuff that Nova has in it (e.g. quotas) – end users kicking off a simple machine have -very- basic needs.

Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.

So there’s a choice between a relatively large set of APIs that makes the guaranteed-available APIs comprehensive, or a small set that will give users what they need at the beginning but might not be broadly available, leaving us depending on some unspecified process for the deployers to agree and consolidate around which ones they make available consistently.

In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API – not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won’t be a problem for Ring 0 is that we’ll still be picking winners.

How might we do this?

One way would be to keep picking winners at the API definition level but not the implementation level, and make the competition be able to replace something entirely if they implement the existing API [and win hearts and minds of deployers]. That would open the door to everything being flexible – and its happened before with Keystone.

Another way would be to not even have a Ring 0. Instead have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out as a single cohesive entity the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered etc. This team wouldn’t be the TC : they too would be subject to the TC’s rulings.

We could even run multiple such teams – as hinted at by Dean Troyer in one of the posts on the email thread. Running with that, I’d then suggest:

  • IaaS product: selects components from the tent to make OpenStack/IaaS
  • PaaS product: selects components from the tent to make OpenStack/PaaS
  • CaaS product (containers)
  • SaaS product (storage)
  • NaaS product (networking – but things like NFV, not the basic Neutron we love today): things where what you get is useful in its own right, not just as plumbing for a VM.

So OpenStack/NaaS would have an API or set of APIs, and the team would be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’, or any other component – they would be a *cross project* team, focused on the product layer rather than the component layer, which is where nearly all of our folk end up locked in today.

Lastly, Sean has also pointed out that with a large N we have N^2 communication issues. I think I’m proposing to drive the scope of any one project down to a minimum, which gives us a larger N but shrinks the size of any one project, so folk don’t burn out as easily *and* it is easier to predict the impact of changes – clear contracts and APIs help a huge amount there.
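
As a rough back-of-the-envelope illustration (the numbers are mine, purely to show the shape of the effect, not anything Sean said):

# Pairwise communication paths within a group of n people.
def channels(n):
    return n * (n - 1) // 2

print(channels(60))      # one 60-person project: 1770 paths
print(6 * channels(10))  # six 10-person projects: 270 paths inside the projects,
                         # plus a handful of explicit API contracts between them

Splitting up doesn’t remove the coordination work, but it pushes most of it behind those explicit contracts.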


Syndicated 2014-09-24 04:13:44 from Code happens

23 Sep 2014 crhodes   » (Master)

earl conference

This week, I went to the Effective Applications of the R Language conference. I’d been alerted to its existence by my visit to a londonR meeting in June. Again, I went for at least two reasons: one as an R enthusiast, though admittedly one (as usual) more interested in tooling than in applications; and one as postgraduate coordinator in Goldsmiths Computing, where one of our modules in the new Data Science programme (starting yesterday! Hooray for “Welcome Week”!) involves exposing students to live data science briefs from academia and industry, with the aim of fostering a relevant and interesting final project.

A third reason? Ben Goldacre as invited speaker. A fantastic choice of keynote, even if he did call us ‘R dorks’ a lot and confess that he was a Stata user. The material, as one might expect, was derived from his books and related experience, but the delivery was excellent, and the drive clear to see. There were some lovely quotes that it’s tempting to include out of context; in the light of my as-yet unposed ‘research question’ for the final module of the PG Certificate in Higher Education – that I am still engaged on – it is tempting to bait the “infestation of qualitative researchers in the educational research establishment”, and attempt a Randomised Controlled Trial, or failing that a statistical analysis of assessments to try to uncover suitable hypotheses for future testing.

Ben’s unintended warm-up act – they were the other way around in the programme, but clearly travelling across London is complicated – was Hadley Wickham, of ggplot2 fame. His talk, about RStudio, his current view on the data analysis workflow, and new packages to support it, was a nice counterpoint: mostly tools, not applications, but clearly focussed on helping make sense of complicated (and initially untidy) datasets. I liked the shiny-based in-browser living documents in R-markdown, which is not a technology that I’ve investigated yet; at this rate I will have more options for reproducible research than reports written. He, and others at the conference, were advocating a pipe-based code sequencing structure – the R implementation of this is called magrittr (ha, ha) and has properties that are made available to user code by R’s nature as a Lisp-1 with crazy evaluation semantics, on which more in another post.

The rest of the event was made up of shorter, usually more domain-specific talks: around 20 minutes for each speaker. I think it suffered a little bit from many of the participants not being able to speak freely – a natural consequence of a mostly-industrial event, but frustrating. I think it was also probably a mistake to schedule, in the first of the regular (parallel) sessions, a reflective slot of three presentations comparing R with other languages (Python, Julia, and a more positive one about R’s niche): there hadn’t really been time for a positive tone to be established, and it just felt like a bit of a downer. (Judging by the room, most of the delegates – perhaps wisely – had opted for the other track, on “Business Applications of R”).

Highlights of the shorter talks, for me:

  • YPlan’s John Sandall talking about “agile” data analytics, leading to agile business practices relentlessly focussed on one KPI. At the time, I wondered whether the focus on the first time a user gives them money would act against building something of lasting value – analytics give the power to make decisions, but the underlying strategy still has to be thought about. On the other hand, I’m oh-too familiar with the notion that startups must survive first and building something “awesome” is a side-effect, and focussing on money in is pretty sensible.
  • Richard Pugh’s (from Mango Solutions) talk about modelling and simulating the behaviour of a sales team did suffer from the confidentiality problem (“I can’t talk about the project this comes from, or the data”) but was at least entertaining: the behaviours he talked about (optimistic opportunity value, interactions of CRM closing dates with quarter boundaries) were highly plausible, and the question of whether he was applying the method to his own sales team quite pointed. (no)
  • the team from Simpson Carpenter Ltd, as well as saying that “London has almost as many market research agencies as pubs” (which rings true) had what I think is a fair insight: R is perhaps less of a black-box than certain commercial tools; there’s a certain retrocomputing feel to starting R, being at the prompt, and thinking “now what?” That implies that to actually do something with R, you need to know a bit more about what you’re doing. (That didn’t stop a few egregiously bad graphs being used in other presentations, including my personal favourite of a graph of workflow expressed as business value against time, with the inevitable backwards-arrows).
  • some other R-related tools to look into:

And then there was of course the hallway – or rather, break room – track; Tower Hotel catered admirably for us, with free-flowing coffee, nibbles and lunch. I had some good conversations with a number of people, and am optimistic that students with the right attitude could both benefit and gain hugely from data science internships. I’m sure I was among the most tool-oriented of the attendees (most of the delegates were actually using R), but I did get to have a conversation with Hadley about “Advanced R”, and we discussed object systems, and conditions and restarts. More free-form notes about the event on my wiki.

Meanwhile, in related news, parts of the swank backend implementation of SLIME changed, mostly moving symbols to new packages. I've updated swankr to take account of the changes, and (I believe, untested) preserved compatibility with older (pre 2014-09-13) SLIMEs.

Syndicated 2014-09-23 10:08:42 (Updated 2014-09-23 20:33:50) from notes

23 Sep 2014 Stevey   » (Master)

Waiting for features upstream

I (grudgingly) use the Calibre e-book management software to handle my collection of books, and copy them over to my kindle-toy.

One thing that has always bothered me is that when books are imported, their ratings are too. If I receive a small sample of ebooks from a friend, their ratings are added to my collection.

I've always regarded ratings as things personal to me, rather than attributes of a book itself; my tastes might not match yours, and vice-versa.

On that basis, the last time I was importing a small number of books and getting annoyed at having to manually reset all the imported ratings, I decided to do something about it. I started hacking and put together a simple Calibre plugin that automatically zeroes the rating when a book is imported into the collection.
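
For the curious, such a plugin is roughly the following shape. This is an untested sketch for illustration, not the exact code: the FileTypePlugin postimport hook and the set_field call reflect one reading of the Calibre plugin API, so treat those details as assumptions.

from calibre.customize import FileTypePlugin

class ZeroRatings(FileTypePlugin):
    # NOTE: the hook name and the database call below are assumptions about
    # the Calibre plugin API, not verified against a particular release.
    name                = 'Zero Ratings'
    description         = 'Reset the rating on freshly-imported books.'
    supported_platforms = ['linux', 'osx', 'windows']
    version             = (0, 0, 1)
    file_types          = set(['epub', 'mobi', 'pdf'])
    on_postimport       = True   # ask Calibre to call postimport() after each import

    def postimport(self, book_id, book_format, db):
        # Clear whatever rating arrived with the book; None removes it entirely.
        db.new_api.set_field('rating', {book_id: None})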

Sadly this work wasn't painless, despite the small size, as an unfortunate bug in Calibre meant my plugin method wasn't called. Happily Kovid Goyal helped me work through the problem, and he committed a fix that will be in the next Calibre release. For the moment I'm using today's git-snapshot and it works well.

Similarly I've recently started using extended file attributes to store metadata on my desktop system. Unfortunately the GNU findutils package doesn't allow you to do the obvious thing:

  $ find ~/foo -xattr user.comment
/home/skx/foo/bar/t.txt
/home/skx/foo/bar/xc.txt
/home/skx/foo/bar/x.txt

There are several xattr patches floating around, but I had to bundle my own in debian/patches to get support for finding files that have particular attribute names.
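
Until find grows such an option, a few lines of Python will do the same search (Linux-only, and it needs Python 3.3+ for os.listxattr; the ~/foo path and user.comment attribute are just the values from the example above):

#!/usr/bin/env python3
"""List files beneath a directory that carry a given extended attribute."""
import os
import sys

def files_with_xattr(root, attr):
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if attr in os.listxattr(path):
                    yield path
            except OSError:
                pass   # unreadable or vanished files, filesystems without xattrs

if __name__ == '__main__':
    root = sys.argv[1] if len(sys.argv) > 1 else os.path.expanduser('~/foo')
    for path in files_with_xattr(root, 'user.comment'):
        print(path)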

Maybe one day extended attributes will be taken seriously. (rsync, cp, etc will preserve them. I'm hazy on the compatibility with tar, but most things seem to be working.)

Syndicated 2014-09-23 20:42:56 from Steve Kemp's Blog

23 Sep 2014 yosch   » (Master)

AFDKO progress

It's good to see that the recently re-released AFDKO is starting to get some attention and (small) things are starting to get merged back in.

There is packaging work underway by ChangZhuo Chen (陳昌倬) from the Debian pkg-fonts team.

There are still various issues to deal with for this codebase to be brought in line with Debian policies, but I was able to successfully rebuild Adobe Source Serif and Adobe Source Sans on Ubuntu 14.04.

This means we are now closer to the long-term goal of a containerized, autobuildable, open-standards-based, cross-platform build path for complex fonts. New development and testing workflows will be much easier to integrate, so that's good news for everyone :-)

23 Sep 2014 yosch   » (Master)

Open and collaborative font design in a web fonts world: AtypI2013 Amsterdam presentation and panel on open fonts by Victor Gaultney and font industry representatives

Even if you didn't attend the AtypI2013 conference in Amsterdam, you can now watch the video recording of "Open and collaborative font design in a web fonts world", a presentation by Victor Gaultney followed by a discussion panel with various key font industry representatives.

Thanks to the video team for their efforts in making more of these AtypI presentations publicly available!

23 Sep 2014 lucasr   » (Master)

New Features in Picasso

I’ve always been a big fan of Picasso, the Android image loading library by the Square folks. It provides some powerful features with a rather simple API.

Recently, I started working on a set of new features for Picasso that will make it even more awesome: request handlers, request management, and request priorities. These features have all been merged to the main repo now. Let me give you a quick overview of what they enable you to do.

Request Handlers

Picasso supports a wide variety of image sources, from simple resources to content providers, network, and more. Sometimes though, you need to load images in unconventional ways that are not supported by default in Picasso.

Wouldn’t it be nice if you could easily integrate your custom image loading logic with Picasso? That’s what the new request handlers are about. All you need to do is subclass RequestHandler and implement a couple of methods. For example:

public class PonyRequestHandler extends RequestHandler {
    private static final String PONY_SCHEME = "pony";

    @Override public boolean canHandleRequest(Request data) {
        return PONY_SCHEME.equals(data.uri.getScheme());
    }

    @Override public Result load(Request data) {
         return new Result(somePonyBitmap, MEMORY);
    }
}

Then you register your request handler when instantiating Picasso:

Picasso picasso = new Picasso.Builder(context)
    .addRequestHandler(new PonyRequestHandler())
    .build();

Voilà! Now Picasso can handle pony URIs:

Picasso.with(context)
       .load("pony://somePonyName")
       .into(someImageView);

This pull request also involved rewriting all built-in bitmap loaders on top of the new API. This means you can also override the built-in request handlers if you need to.

Request Management

Even though Picasso handles view recycling, it does so in an inefficient way. For instance, if you do a fling gesture on a ListView, Picasso will still keep triggering and canceling requests blindly because there was no way to make it pause/resume requests according to the user interaction. Not anymore!

The new request management APIs allow you to tag requests that should be managed together. You can then pause, resume, or cancel the requests associated with specific tags. The first thing you have to do is tag your requests as follows:

Picasso.with(context)
       .load("http://example.com/image.jpg")
       .tag(someTag)
       .into(someImageView);

Then you can pause and resume requests with this tag based on, say, the scroll state of a ListView. For example, Picasso’s sample app now has the following scroll listener:

public class SampleScrollListener implements AbsListView.OnScrollListener {
    ...
    @Override
    public void onScrollStateChanged(AbsListView view, int scrollState) {
        Picasso picasso = Picasso.with(context);
        if (scrollState == SCROLL_STATE_IDLE ||
            scrollState == SCROLL_STATE_TOUCH_SCROLL) {
            picasso.resumeTag(someTag);
        } else {
            picasso.pauseTag(someTag);
        }
    }
    ...
}

These APIs give you much finer control over your image requests. The scroll listener is just the canonical use case.

Request Priorities

It’s very common for images in your Android UI to have different priorities. For instance, you may want to give higher priority to the big hero image in your activity in relation to other secondary images in the same screen.

Up until now, there was no way to give Picasso a hint about the relative priorities between images. The new priority API allows you to tell Picasso about the intended order of your image requests. You can just do:

Picasso.with(context)
       .load("http://example.com/image.jpg")
       .priority(HIGH)
       .into(someImageView);

These priorities don’t guarantee a specific order; they just tilt the balance towards higher-priority requests.


That’s all for now. Big thanks to Jake Wharton and Dimitris Koutsogiorgas for the prompt code and API reviews!

You can try these new APIs now by fetching the latest Picasso code on Github. These features will probably be available in the 2.4 release. Enjoy!

Syndicated 2014-09-23 15:52:54 from Lucas Rocha

23 Sep 2014 marnanel   » (Journeyer)

Gentle Readers: inheritance powder

Gentle Readers
a newsletter made for sharing
volume 2, number 3
22nd September 2014: inheritance powder

What I’ve been up to

Firstly, a very happy birthday to my (no longer little!) brother Andrew, who is rather younger than eleventy-one today.

As for me: I'm still ill, still working on getting better. Here's a story: a few months ago I was hit by a car when crossing the road. I escaped with only a sprained ankle and bruised ribs, but I was so anxious to get over it that I ignored much of the advice about keeping my ankle iced and raised. Instead, I took painkillers and went on with my everyday life. This certainly had its problems in the short term-- I attempted to carry a powered wheelchair through a doorway, put weight on my bad leg, and ended up dislocating my shoulder-- but I suspect it made the sprain slower to heal as well. And now I'm thinking about this as a metaphor for healing in general. What are the equivalents of ice and elevation, for example, in living with chronic depression?

A poem of mine

REQUIEM FOR AN OAK

I thought I saw an execution there.
The fascinated public gathered round.
The cheerful hangmen stripped the victim bare
And built their gibbet high above the ground.
The rope was taut, my wildness filled with fear.
I saw him fall. I heard his final cry.
Yet when the hangmen left I ventured near
To find my fault: I'd never seen him die.
In fact, I think he'd died some years ago.
There's blackness of decay in every breath.
The sound of flies was all that's left to grow,
Now free to come and feast upon his death;
Prince of the trees, I have a simple plea:
I will not die till death has come to me.

A picture


http://gentlereaders.uk/pics/sheep-worrying
Dog, to sheep: "I saw the farmer making mint sauce."
Caption: My dog has been sheep-worrying.

Something wonderful

In 1800, there lived in Berlin a young woman named Sophie Ursinus. She was married to a senior politician, who was much older, and (possibly at his suggestion) she had a boyfriend, who was an officer in the Dutch army. Between 1800 and 1801, both her husband and her boyfriend died suddenly; so did her elderly aunt, leaving her a good deal of money. No questions were asked. But in 1803, shortly after Mrs Ursinus argued with her servant, he became ill, and grew suspicious; he took the plums she had given him to a friendly chemist, who confirmed that they appeared to have been laced with arsenic. The law was called in.

But there was then no reliable test for arsenic, and the pathologists could not confirm beyond a reasonable doubt that the exhumed body of her husband contained the poison, any more than it could have been detected at his post-mortem. Fortunately they were more sure when they examined the body of her aunt, and so Mrs Ursinus was sent to prison for thirty years.

Arsenic was nearly the perfect poison: readily obtainable if you claim you're trying to kill rats, easily administered by mixing into your victim's drink, causing symptoms plausibly similar to those of various then-common illnesses such as cholera, and-- should you be found out in the end-- almost undetectable in the body by any reliable test. So many people used it to remove rich and elderly relatives who had survived inconveniently long that it became euphemistically known as "inheritance powder".

In 1832 a man named John Bodle was accused of murdering his grandfather by putting arsenic in his coffee, and the prosecution called a chemist named James Marsh as an expert witness. Marsh discovered arsenic in the body, using the test developed by the homeopath (!) Samuel Hahnemann, which was the best available method at the time. But a positive result with Hahnemann's test deteriorates so fast that by the time of the trial the jury were not convinced, and Bodle was acquitted; he confessed his guilt as soon as he was protected by double jeopardy. Marsh was stung, and set out to discover a reliable test for arsenic.

He found one, and published it in 1838: it has become known as the Marsh test. It builds upon the previous work of Carl Scheele, who had shown in 1775 that arsine gas (AsH3) would result from treating arsenic with zinc and nitric acid. Marsh's breakthrough was to set fire to the arsine gas in the presence of charcoal, producing arsenic and water vapour, and staining the vessel with a silvery-black colour that came to be known as "arsenic mirror". (I apologise to my chemist readers if I have misunderstood any of this, and invite corrections.) Marsh's idea had its first successful outing in 1840, in the trial of a French poisoner named Marie Lafarge; so widely was this success reported in the news that poisoning one's relatives with arsenic became passé almost overnight.
 

http://thomasthurman.org/pics/marsh-test
Marsh and his test

One interesting footnote: modern detective fiction began in 1841, with Edgar Allan Poe's story The Murders in the Rue Morgue. I doubt there's any direct connection, but the timing amuses me: detective fiction would be far less interesting with the easy availability of undetectable poisons!

Something from someone else

LUCIFER IN STARLIGHT
by George Meredith (1828-1909)

On a starred night Prince Lucifer uprose.
Tired of his dark dominion swung the fiend
above the rolling ball, in cloud part screened,
where sinners hugged their spectre of repose.
Poor prey to his hot fit of pride were those.
And now upon his western wing he leaned,
now his huge bulk o'er Afric's sands careened,
now the black planet shadowed Arctic snows.
     Soaring through wider zones that pricked his scars
     with memory of the old revolt from awe,
     he reached a middle height, and at the stars,
     which are the brain of heaven, he look'd, and sank.
Around the ancient track marched, rank on rank,
the army of unalterable law.

Colophon

Gentle Readers is published on Mondays and Thursdays, and I want you to share it. The archives are at http://gentlereaders.uk/ , and so is a form to get on the mailing list. If you have anything to say or reply, or you want to be added or removed from the mailing list, I’m at thomas@thurman.org.uk and I’d love to hear from you. The newsletter is reader-supported; please pledge something if you can afford to, and please don't if you can't. Love and peace to you all.
 
 

This entry was originally posted at http://marnanel.dreamwidth.org/313263.html. Please comment there using OpenID.

Syndicated 2014-09-23 02:25:33 from Monument

22 Sep 2014 dmarti   » (Master)

QoTD: Giles Bowkett

The Agile Manifesto might also be to blame for the Scrum standup. It states that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation." In fairness to the manifesto's authors, it was written in 2001, and at that time git log did not yet exist. However, in light of today's toolset for distributed collaboration, it's another completely implausible assertion, and even back in 2001 you had to kind of pretend you'd never heard of Linux if you really wanted it to make sense.

Giles Bowkett

Syndicated 2014-09-22 04:21:47 from Don Marti

22 Sep 2014 bagder   » (Master)

daniel.haxx.se week #3

I won’t keep posting every video update here, but I mostly wanted to mention that I’ve kept posting a weekly video over at youtube basically explaining what’s going on right now within my dearest projects. Mostly curl and some Firefox stuff.

This week: the libcurl server cert verification API got a bashing at SEC-T. Is HTTP over UDP a good idea? How about adding HTTP cache support to libcurl? HTTP/2 is getting deployed as we speak. An interesting curl bug when used by XBMC. The patch series for Firefox bug 939318 is improving slowly – will it ever land?