Recent blog entries

30 Oct 2014 bagder   » (Master)

Changing networks on Mac with Firefox

Not too long ago I blogged about my work to better deal with changing networks while Firefox is running. That job was basically two parts.

A) generic code to handle receiving such a network-changed event and then

B) a platform specific part that was for Windows that detected such a network change and sent the event

Today I’ve landed yet another fix for part B called bug 1079385, which detects network changes for Firefox on Mac OS X.

I’ve never programmed anything on the Mac before, so this was sort of my christening in this environment. I mean, I’ve written countless POSIX-compliant programs, including curl and friends, that certainly build and run on Mac OS just fine, but I had never before used the Mac-specific APIs to do things.

I got a mac mini just two weeks ago to work on this. Getting it set up and building my first Firefox from source took less than three hours all in all. Learning the details of the Mac API world was much more trouble, and I can’t say that I’m mastering it now either, but I did at least figure out how to detect when the IP addresses on the interfaces change, and a changed address is a pretty good signal that the network changed somehow.
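
A minimal, stand-alone sketch of that kind of detection (not the actual Firefox patch, just an illustration of the notifications the SystemConfiguration framework offers) could look like the following; the watched keys cover the global IPv4/IPv6 state, and everything else here is illustrative scaffolding:

  /* cc netwatch.c -framework SystemConfiguration -framework CoreFoundation */
  #include <SystemConfiguration/SystemConfiguration.h>
  #include <stdio.h>

  /* Called on the run loop whenever one of the watched keys changes. */
  static void changed(SCDynamicStoreRef store, CFArrayRef keys, void *info)
  {
    (void)store; (void)keys; (void)info;
    printf("network configuration changed - send the network-changed event\n");
  }

  int main(void)
  {
    SCDynamicStoreRef store =
        SCDynamicStoreCreate(NULL, CFSTR("netwatch"), changed, NULL);

    /* These keys change whenever addresses or the primary interface change. */
    CFStringRef watchKeys[] = {
      CFSTR("State:/Network/Global/IPv4"),
      CFSTR("State:/Network/Global/IPv6")
    };
    CFArrayRef keys = CFArrayCreate(NULL, (const void **)watchKeys, 2,
                                    &kCFTypeArrayCallBacks);
    SCDynamicStoreSetNotificationKeys(store, keys, NULL);
    CFRelease(keys);

    CFRunLoopSourceRef source = SCDynamicStoreCreateRunLoopSource(NULL, store, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopDefaultMode);
    CFRunLoopRun();   /* callback fires on every change; never returns */
    return 0;
  }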

Syndicated 2014-10-30 21:46:20 from daniel.haxx.se

30 Oct 2014 mjg59   » (Master)

Hacker News metrics (first rough approach)

I'm not a huge fan of Hacker News[1]. My impression continues to be that it ends up promoting stories that align with the Silicon Valley narrative (meritocracy, technology will fix everything, regulation is the cancer killing agile startups) and discouraging stories that suggest that the world of technology is, broadly speaking, awful and we should all be ashamed of ourselves.

But as a good data-driven person[2], wouldn't it be nice to have numbers rather than just handwaving? In the absence of a good public dataset, I scraped Hacker Slide to get just over two months of data in the form of hourly snapshots of stories, their age, their score and their position. I then applied a trivial test:

  1. If the story is younger than any other story
  2. and the story has a higher score than that other story
  3. and the story has a worse ranking than that other story
  4. and at least one of these two stories is on the front page
then the story is considered to have been penalised.

(note: "penalised" can have several meanings. It may be due to explicit flagging, or it may be due to an automated system deciding that the story is controversial or appears to be supported by a voting ring. There may be other reasons. I haven't attempted to separate them, because for my purposes it doesn't matter. The algorithm is discussed here.)

Now, ideally I'd classify my dataset based on manual analysis and classification of stories, but I'm lazy (see [2]) and so just tried some keyword analysis:

Keyword     Penalised   Unpenalised
Women              13             4
Harass              2             0
Female              5             1
Intel               2             3
x86                 3             4
ARM                 3             4
Airplane            1             2
Startup            46            26


A few things to note:
  1. Lots of stories are penalised. Of the front page stories in my dataset, I count 3240 stories that have some kind of penalty applied, against 2848 that don't. The default seems to be that some kind of detection will kick in.
  2. Stories containing keywords that suggest they refer to issues around social justice appear more likely to be penalised than stories that refer to technical matters.
  3. There are other topics that are also disproportionately likely to be penalised. That's interesting, but not really relevant - I'm not necessarily arguing that social issues are penalised out of an active desire to make them go away, merely that the existing ranking system tends to result in it happening anyway.

This clearly isn't an especially rigorous analysis, and in future I hope to do a better job. But for now the evidence appears consistent with my innate prejudice - the Hacker News ranking algorithm tends to penalise stories that address social issues. An interesting next step would be to attempt to infer whether the reasons for the penalties are similar between different categories of penalised stories[3], but I'm not sure how practical that is with the publicly available data.

(Raw data is here, penalised stories are here, unpenalised stories are here)


[1] Moving to San Francisco has resulted in it making more sense, but really that just makes me even more depressed.
[2] Ha ha like fuck my PhD's in biology
[3] Perhaps stories about startups tend to get penalised because of voter ring detection from people trying to promote their startup, while stories about social issues tend to get penalised because of controversy detection?


Syndicated 2014-10-30 15:19:57 from Matthew Garrett

30 Oct 2014 Pizza   » (Master)

Further printer work

Okay, so I guess I was wrong about additional printer hacking. Despite the 12-hour days at the office over the past few weeks (we got our first silicon back, and software is the ring that binds everything together in the darkness), I'm still spending time writing code when I get home.

First, I added support for the Sony UP-CR10L and its rebadged brethren, the DNP SL10. I've had these on my to-do list for a while; I'd already decoded everything and updated the existing UP-DR150/200 backend to handle the new bits, but never got around to adding proper support into Gutenprint. That's now done, and once I get the USB PIDs, it should JustWork(tm).

Beyond that, I've knocked out a few things on the bug list. One I just fixed affected pipelined printing on the DNP/Citizen printers, and it was most easily triggered by multi-page print jobs. With Gutenprint 5.2.10's backend, the printer would just abort the job after the first page, but if you were using a development snapshot after 2014-06-04, it would automatically retry the job, resulting in an endless printing of page 1 over and over again.

The bug was due to the backend mistakenly treating the "Printing, with one available buffer for a 300dpi or small 600dpi job" status as an error.

...Oops.

At least folks won't have to wait for the next Gutenprint release to pick up the latest backend code.

I have a rather large photo backlog from the past month to sort through. That will be my weekend project.

Syndicated 2014-10-30 02:38:37 from Solomon Peachy

30 Oct 2014 mjg59   » (Master)

On joining the FSF board

I joined the board of directors of the Free Software Foundation a couple of weeks ago. I've been travelling a bunch since then, so haven't really had time to write about it. But since I'm currently waiting for a test job to finish, why not?

It's impossible to overstate how important free software is. A movement that began with a quest to work around a faulty printer is now our greatest defence against a world full of hostile actors. Without the ability to examine software, we can have no real faith that we haven't been put at risk by backdoors introduced through incompetence or malice. Without the freedom to modify software, we have no chance of updating it to deal with the new challenges that we face on a daily basis. Without the freedom to pass that modified software on to others, we are unable to help people who don't have the technical skills to protect themselves.

Free software isn't sufficient for building a trustworthy computing environment, one that not merely protects the user but respects the user. But it is necessary for that, and that's why I continue to evangelise on its behalf at every opportunity.

However.

Free software has a problem. It's natural to write software to satisfy our own needs, but in doing so we write software that doesn't provide as much benefit to people who have different needs. We need to listen to others, improve our knowledge of their requirements and ensure that they are in a position to benefit from the freedoms we espouse. And that means building diverse communities, communities that are inclusive regardless of people's race, gender, sexuality or economic background. Free software that ends up designed primarily to meet the needs of well-off white men is a failure. We do not improve the world by ignoring the majority of people in it. To do that, we need to listen to others. And to do that, we need to ensure that our community is accessible to everybody.

That's not the case right now. We are a community that is disproportionately male, disproportionately white, disproportionately rich. This is made strikingly obvious by looking at the composition of the FSF board, a body made up entirely of white men. In joining the board, I have perpetuated this. I do not bring new experiences. I do not bring an understanding of an entirely different set of problems. I do not serve as an inspiration to groups currently under-represented in our communities. I am, in short, a hypocrite.

So why did I do it? Why have I joined an organisation whose founder I publicly criticised for making sexist jokes in a conference presentation? I'm afraid that my answer may not seem convincing, but in the end it boils down to feeling that I can make more of a difference from within than from outside. I am now in a position to ensure that the board never forgets to consider diversity when making decisions. I am in a position to advocate for programs that build us stronger, more representative communities. I am in a position to take responsibility for our failings and try to do better in future.

People can justifiably conclude that I'm making excuses, and I can make no argument against that other than to be asked to be judged by my actions. I hope to be able to look back at my time with the FSF and believe that I helped make a positive difference. But maybe this is hubris. Maybe I am just perpetuating the status quo. If so, I absolutely deserve criticism for my choices. We'll find out in a few years.


Syndicated 2014-10-30 00:45:32 from Matthew Garrett

29 Oct 2014 Hobart   » (Journeyer)

GMail locked down IMAP access at some point.

Wheee.

  • If you don't have IMAP + OAuth 2 you're locked out. Unless:
  • You change a Big Scary Setting "Allow less secure apps". The activation of which also generates a Big Scary Email to let you know you've done it. But then:
  • Your failed attempts triggered another lock on your account, which you need to inspect the IMAP negotiation to see. The first message claims "Web login required! go to http://blahblah/100char-long-url" but, surprise, visiting the URL doesn't unlock you.
  • The second directs you to https://support.google.com/mail/answer/78754 where you learn about https://www.google.com/accounts/DisplayUnlockCaptcha which, when visited, does NOT display a CAPTCHA, but does unlock your account.
  • OAuth 2 is so ridiculously overdesigned the main editor of the spec loudly quit.
  • All of this could have been handled using client side certificates, without requiring any changes to the @#$% mail clients.
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1
owGbwMvMwCSYyMCz7/ket1DG0+JJDCGBPvHuqSUK+WlpCrmVCjmJ5Xl6XB1uLAyC
TAxsrEwgaQYuTgGYHmsehgV/ekp/rlgx7dujh3trJkrvClXyWv+AYcGRNO89b/xt
xVd7q/u4lR2pjozkUwMA
=rXE4
-----END PGP MESSAGE-----
Some relevant URLs:

Syndicated 2014-10-29 20:36:11 from jon's blog

29 Oct 2014 Stevey   » (Master)

A brief introduction to freebsd

I've spent the past thirty minutes installing FreeBSD as a KVM guest. This mostly involved fetching the ISO (I chose the latest stable release 10.0), and accepting all the defaults. A pleasant experience.

As I'm running KVM inside screen I wanted to see the boot prompt, etc, via the serial console, which took two distinct steps:

  • Enabling the serial console - which lets boot stuff show up
  • Enabling a login prompt on the serial console in case I screw up the networking.

To configure boot messages to display via the serial console, issue the following command as the superuser:

   # echo 'console="comconsole"' >> /boot/loader.conf

To get a login: prompt you'll want to edit /etc/ttys and change "off" to "on" and "dialup" to "vt100" for the ttyu0 entry. Once you've done that, reload init via:

   # kill -HUP 1
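
For reference, the edited ttyu0 line in /etc/ttys should end up looking something like this (the getty program/speed column is whatever your file already had; only the terminal type and the on/off field change):

   ttyu0   "/usr/libexec/getty std.9600"   vt100   on  secure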

Enable remote root logins, if you're brave, or disable PAM and password authentication if you're sensible:

   vi /etc/ssh/sshd_config
   /etc/rc.d/sshd restart

Configure the system to allow binary package installation - to be honest I was hazy on why this was required, but I ran the two commands and it all worked out:

   pkg
   pkg2ng

Now you may install a package via a simple command such as:

   pkg add screen

Removing packages you no longer want is as simple as using the delete option:

   pkg delete curl

You can see installed packages via "pkg info", and there are more options to be found via "pkg help". In the future you can apply updates via:

   pkg update && pkg upgrade

Finally, I've installed 10.0-RELEASE, which can be upgraded in the future via "freebsd-update". This seems to boil down to "freebsd-update fetch" and "freebsd-update install", but I'm hazy on that just yet. For the moment you can see your installed version via:

   uname -a ; freebsd-version

Expect my future CPAN releases, etc, to be tested on FreeBSD too now :)

Syndicated 2014-10-29 18:37:28 from Steve Kemp's Blog

29 Oct 2014 louie   » (Master)

Understanding Wikimedia, or, the Heavy Metal Umlaut, one decade on

It has been nearly a full decade since Jon Udell’s classic screencast about Wikipedia’s article on the Heavy Metal Umlaut (current text, Jan. 2005). In this post, written for Paul Jones’ “living and working online” class, I’d like to use the last decade’s changes to the article to illustrate some points about the modern Wikipedia.1

Measuring change

At the end of 2004, the article had been edited 294 times. As we approach the end of 2014, it has now been edited 1,908 times by 1,174 editors.2

This graph shows the number of edits by year – the blue bar is the overall number of edits in each year; the dotted line is the overall length of the article (which has remained roughly constant since a large pruning of band examples in 2007).

[Graph: edits by year]

The dropoff in edits is not unusual — it reflects both a mature article (there isn’t that much more you can write about metal umlauts!) and an overall slowing in edits in English Wikipedia (from a peak of about 300,000 edits/day in 2007 to about 150,000 edits/day now).3

The overall edit count — 2000 edits, 1000 editors — can be hard to get your head around, especially if you write for a living. Implications include:

  • Style is hard. Getting this many authors on the same page, stylistically, is extremely difficult, and it shows in inconsistencies small and large. If not for the deeply acculturated Encyclopedic Style we all have in our heads, I suspect it would be borderline impossible.
  • Most people are good, most of the time. Something like 3% of edits are “reverted”; i.e., about 97% of edits are positive steps forward in some way, shape, or form, even if imperfect. This is, I think, perhaps the single most amazing fact to come out of the Wikimedia experiment. (We reflect and protect this behavior in one of our guidelines, where we recommend that all editors Assume Good Faith.)

The name change, tools, and norms

In December 2008, the article lost the “heavy” from its name and became, simply, “metal umlaut” (explanation, aka “edit summary“, highlighted in yellow):

[Screenshot: the name-change edit summary]

A few takeaways:

  • Talk pages: The screencast explained one key tool for understanding a Wikipedia article – the page history. This edit summary makes reference to another key tool – the talk page. Every Wikipedia article has a talk page, where people can discuss the article, propose changes, etc. In this case, this user discussed the change (in November) and then made the change in December. If you’re reporting on an article for some reason, make sure to dig into the talk page to fully understand what is going on.
  • Sources: The user justifies the name change by reference to sources. You’ll find little reference to them in 2005, but by 2008, finding an old source using a different term is now sufficient rationale to rename the entire page. Relatedly…
  • Footnotes: In 2008, there was talk of sources, but still no footnotes. (Compare the story about Mötley Crüe in Germany in 2005 and now.) The emphasis on footnotes (and the ubiquitous “citation needed”) was still a growing thing. In fact, when Jon did his screencast in January 2005, the standardized/much-parodied way of saying “citation needed” did not yet exist, and would not until June of that year! (It is now used in a quarter of a million English Wikipedia pages.) Of course, the requirement to add footnotes (and our baroque way of doing so) may also explain some of the decline in editing in the graphs above.

Images, risk aversion, and boldness

Another highly visible change is to the Motörhead art, which was removed in November 2011 and replaced with a Mötley Crüe image in September 2013. The addition and removal present quite a contrast. The removal is explained like this:

remove File:Motorhead.jpg; no fair use rationale provided on the image description page as described at WP:NFCC content criteria 10c

This is clear as mud, combining legal issues (“no fair use rationale”) with Wikipedian jargon (“WP:NFCC content criteria 10c”). To translate it: the editor felt that the “non-free content” rules (abbreviated WP:NFCC) prohibited copyright content unless there was a strong explanation of why the content might be permitted under fair use.

This is both great, and sad: as a lawyer, I’m very happy that the community is pre-emptively trying to Do The Right Thing and take down content that could cause problems in the future. At the same time, it is sad that the editors involved did not try to provide the missing fair use rationale themselves. Worse, a rationale was added to the image shortly thereafter, but the image was never added back to the article.

So where did the new image come from? Simply:

boldly adding image to lead

“boldly” here links to another core guideline: “be bold”. Because we can always undo mistakes, as the original screencast showed about spam, it is best, on balance, to move forward quickly. This is in stark contrast to traditional publishing, which has to live with printed mistakes for a long time and so places heavy emphasis on Getting It Right The First Time.

In brief

There are a few other changes worth pointing out, even in a necessarily brief summary like this one.

  • Wikipedia as a reference: At one point, in discussing whether or not to use the phrase “heavy metal umlaut” instead of “metal umlaut”, an editor makes the point that Google has many search results for “heavy metal umlaut”, and another editor points out that all of those search results refer to Wikipedia. In other words, unlike in 2005, Wikipedia is now so popular, and so widely referenced, that editors must be careful not to (indirectly) be citing Wikipedia itself as the source of a fact. This is a good problem to have—but a challenge for careful authors nevertheless.
  • Bots: Careful readers of the revision history will note edits by “ClueBot NG“. Vandalism of the sort noted by Jon Udell has not gone away, but it now is often removed even faster with the aid of software tools developed by volunteers. This is part of a general trend towards software-assisted editing of the encyclopedia.
  • Translations: The left-hand side of the article shows that it is in something like 14 languages, including a few that use umlauts unironically. This is not useful for this article, but for more important topics, it is always interesting to compare the perspective of authors in different languages.

I look forward to discussing all of these with the class, and to any suggestions from more experienced Wikipedians for other lessons from this article that could be showcased, either in the class or (if I ever get to it) in a one-decade anniversary screencast. :)

  1. I still haven’t found a decent screencasting tool that I like, so I won’t do proper homage to the original—sorry Jon!
  2. Numbers courtesy X’s edit counter.
  3. It is important, when looking at Wikipedia statistics, to distinguish between stats about Wikipedia in English, and Wikipedia globally — numbers and trends will differ vastly between the two.

Syndicated 2014-10-29 06:02:27 from Luis Villa » Blog

28 Oct 2014 marnanel   » (Journeyer)

street harassment

[In a discussion on street harassment elsewhere, some dude said: "Hi [name of OP]. There, I did it. I harassed you. Oh the humanity. Do you NOT get how absurd this looks to us guys? The creeper 5 minute guy, yeah I get that. But just saying hi? Get over yourselves ladies. We have a right to say hi on public streets." This is my reply to him]

Here as everywhere else, context makes a big difference. Here's an example from my own life.

I'm male-bodied; people generally read me as a man. Earlier this year I went to a party in drag (and hey, I thought I looked rather fetching). I was walking down a busy street after dark, when someone in the shadows I couldn't quite see called out "Hello darling."

Ordinarily, I wouldn't hear that as a threat. But I can tell you that in *that* context it was a moment of raw terror. All the recent newspaper stories of street assaults ran through my head. If he thinks I'm a woman, maybe he's going to assault me (hell, if he thinks I'm a man in drag, maybe he's going to assault me). By appearing female in public I had effectively painted a huge target on my back.

Now of course men get attacked in the street too. But you don't expect that sort of attack to begin with the attacker saying "hello". If someone had come up to me with a knife I'd have been terrified whether I was dressed as a woman or not. But "hello, darling" is often the start of a very different script, and I was someone who might plausibly be cast in that script in a very unpleasant role.

So I can attest to the terror it can cause when a stranger tries to greet you in the street.

This entry was originally posted at http://marnanel.dreamwidth.org/315765.html. Please comment there using OpenID.

Syndicated 2014-10-28 20:28:59 from Monument

28 Oct 2014 benad   » (Apprentice)

A Tale of Two Shells

Last year, I completed "properly" learning the bash ("Bash"?) shell, using a combination of the book "Learning the bash Shell" and reading from start to finish the gigantic "man" page. And that was enough to convince me that, regardless of its ubiquity, I don't like it much, be it as a scripting language or as a command-line shell.

Having already learned tcsh, because it was the default shell on Mac OS X and is still popular, I was ready to try out more modern shells, rather than ones stuck in the 80s.

First, on Windows, DOS is quite archaic and annoying. Simple things like sleeping for a second require unintuitive workarounds. DOS batch scripting is painfully difficult, so I was eager to find something better. And since Windows 7, this strange "PowerShell" is now installed by default, and was heralded as a revolutionary step in command-line shells. Is it?

After reading a few tutorials, including this "free ebook" on powershell.com, things became clear. Windows PowerShell is essentially a shell built atop .NET that manipulates streams of objects rather than plain text across pipes, though thankfully it formats the objects as plain text on the console screen by default. It provides many "cmdlets" that can manipulate that stream in a more convenient way than grep and awk. For example, listing all files in a directory greater than a megabyte is trivial in PowerShell, while in a UNIX shell it requires an awkward (pun intended) combination of extracting character positions and arithmetic. The "PowerShell ISE", also provided with PowerShell, can even perform tab-completion of the fields of the objects out of the current pipe, making it easy to extract their attributes. Because so much of the Windows internals are accessible using COM and .NET, it is easy to perform system administration with it, for example installing system services and querying their status. The only major issue I've found with PowerShell is that executing PowerShell scripts is completely disabled by default until unlocked by an administrator. Also, as a minor gripe, the version of PowerShell that came out of the box with Windows 7 is quite outdated. Overall, if there was a "grand vision" of ".NET" in Windows, it is best represented by PowerShell.

Back on UNIX-like systems, it seems like users are quite happy with old shells, or at least incremental evolutions of the old "Bourne shell" and Berkeley's "C Shell". Looking around, I found "fish", the "Friendly Interactive SHell". I liked its ironic tagline of "Finally, a command line shell for the 90s", since it was initially released in 2005. It is, indeed, friendly, as it has a deliberately limited set of features, and has default out-of-the-box functionality that makes interactive use enjoyable. It was built around a comprehensive design document that explicitly favours usability over compatibility with older or popular shells. The results are spectacular: everything has colour; TAB and arrow-key completion with a type-ahead preview in light grey; inline argument completion for most commands (including "man") that interactively presents all the options and their meanings, automatically extracted from the "man" pages; editing the configuration through a built-in web service; applying configuration changes to all shells instantly; I could go on. Its scripting is quite limited, but that may be a good thing considering the Shellshock bug (not that "fish" has no security holes, but at least they're not "as designed").

Personally, I am ready to move to both PowerShell and "fish" for day-to-day use. While neither has much in common with older shells, they are far more usable. I highly recommend them to all command-line users.

Syndicated 2014-10-28 01:37:41 from Benad's Blog

27 Oct 2014 bagder   » (Master)

daniel.haxx.se episode 8

Today I hesitated to make my new weekly video episode. I looked at the viewer numbers and how they have basically dwindled over the last few weeks. I’m not making this video series interesting enough for a very large crowd of people. I’m re-evaluating if I should do them at all, or if I can do something to spice them up…

… or perhaps just not look at the viewer numbers at all and just do what I think is fun?

I decided I’ll go with the latter for now. After all, I enjoy making these and they usually give me some interesting feedback and discussions even if the numbers are really low. What good is a number anyway?

This week’s episode:

Personal

Firefox

Fun

HTTP/2

TALKS

  • I’m offering two talks for FOSDEM

curl

  • release next Wednesday
  • bug fixing period
  • security advisory is pending

wget

Contribute to Open Source from Daniel Stenberg

Syndicated 2014-10-16 21:01:58 from daniel.haxx.se

15 Oct 2014 mones   » (Journeyer)

FOSS or not FOSS, that's the question

Today in the #claws IRC channel some user wanted to move away from Claws Mail to another MUA. That probably happens every day or two, so nobody really cares (I don't, at least).

Claws' storage format is MH, nothing exotic or unknown, hence there are no explicit exporting utilities, as requested by that user. Anyway one of the developers suggested mh2mbox, which seems a pretty straightforward option. Claws also has a mailmbox plugin, which can be populated with messages from MH folders, but when you have lots of them the task becomes boring :-)

Anyway, the point of this post was not the technicalities of conversion but more the ideas people have about FOSS. At some point, after some arguing about how developers don't listen to users and how wrong donating to the project had been, the user said:

12:54 < somebody> If I develop a system, and I want people to use it, then I 
                  have a duty to listen to people and consider to make it 
                  useable for them ... or else, they won't use it.

That's a huge misconception, probably because nobody reads the license nowadays. Yeah, it's free, just download it! Reading licen-what? It's free!

Let me put it clearly: I'm not a company, I don't want people to use my software, I let people use it if it's useful to them, and of course I'd like it to be useful.
But if not, you already have the source and can (learn to) modify it at will, or pay someone else to do so. Nothing else is given to you, remember:

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Syndicated 2014-10-15 12:33:18 from Ricardo Mones

15 Oct 2014 benad   » (Apprentice)

Moniker, the Security Weak Point

By the time I heard about the "Shellshock bug" security hole on the morning of September 25, the small Debian Linux server that I set up at Ramnode to host my web site had already patched that security hole by itself. At any rate, my web site hosts only static pages, so it was never impacted.

While I am in control of the security of my web site starting from the Linux kernel, to the web server, up to the web pages it serves, I'm still dependent on its hosting service (Ramnode) to be secure. Another potential danger would be for someone to hijack the "benad.me" domain name, and make it point to a version of the site filled with malware and viruses. Sadly, this almost happened.

Back in 2008 I registered my domain through the registrar Moniker, which used to be recommended partly based on its security. They implemented additional features that one could pay for to "lock" the domain and prevent unauthorized transfers by someone who stole your user name and password. Since then though, the company was bought by another company, and what is now called Moniker is Moniker in name only, both in terms of staffing and software.

I did notice a difference in tone in email communications from the new Moniker. They seemed to be highly focused on domain name auctions, and would automatically auction off expired domains. This felt like a conflict of interest, as Moniker would derive higher profit from auctioning off your domain than from helping you renew it. Of course, they would never do that to valuable customers that do "domain speculation" and own a large number of (unused) domains, but still that raises the suspicion that the company was sold based on the number of domains it had and how much money they can extract from large speculators rather than providing valuable customer service.

The "new" Moniker had a security hole in 2013, and to fix that Moniker forced users to change their password the next time they logged in. Note though that this happened with the old version of that web site. This summer, the parent company that bought Moniker (and its name) scrapped the old site's code and replaced it with a new broken, buggy interface. The new interface also brought with it worse security, and made the domain locking feature completely ineffective.

By early October, Moniker sent an email to all its users saying that for untold security reasons, all the account passwords would be reset. The shock was that the email contained both the user names and passwords of all the user's accounts. I was shocked that my old Moniker account, identified by a standard-looking user name, was placed under a parent, numerically-identified user name I've never seen, and another numerical sub-account that was created without my knowledge. It should be noted that I could never access the numerical sub-account, even when using the password provided in the email. Also, the email said that your new passwords must fit security requirements, including the use of at least one "special character", even though the passwords provided in the email didn't contain any special character, and when attempting to change passwords, it would refuse most special characters.

OK, I'm not a security expert, but sending user names and passwords in an email, refusing special characters (which would indicate that they don't use bcrypt), and resetting the passwords of all users may indicate that they were hacked. Badly. Moniker cited the Shellshock bug, but as reports of stolen domains started to appear, a user came forth saying that the security hole predated Shellshock by a month.

So, I was convinced that Moniker had a pattern of behaviour of not taking security seriously, that is until they experience a mass exodus of their customers. I started the process of domain name transfer the day after they announced the password reset, and I would recommend everybody else to do the same. I transferred to Namecheap. Despite its name, in my case the price was the same, though as a test I created a new empty account before the transfer, and already I could attest that they take security seriously, including emails for account activity (using secondary email addresses in rotation) and 2-factor authentication (using SMS for now). I completed the transfer yesterday, so that would explain why there was a little bit of downtime when resolving my domain name.

Syndicated 2014-10-15 01:21:21 from Benad's Blog

14 Oct 2014 crhodes   » (Master)

still working on reproducible builds

It’s been nearly fifteen years, and SBCL still can’t be reliably built by other Lisp compilers.

Of course, other peoples’ definition of “reliably” might differ. We did achieve successful building under unrelated Lisp compilers twelve years ago; there were a couple of nasty bugs along the way, found both before and after that triumphant announcement, but at least with a set of compilers whose interpretation of the standard was sufficiently similar to SBCL’s own, and with certain non-mandated but expected features (such as the type (array (unsigned-byte 8) (*)) being distinct from simple-vector, and single-float being distinct from double-float), SBCL achieved its aim of being buildable on a system without an SBCL binary installed (indeed, using CLISP or XCL as a build host, SBCL could in theory be bootstrapped starting with only gcc).

For true “reliability”, though, we should not be depending on any particular implementation-defined features other than ones we actually require – or if we are, then the presence or absence of them should not cause a visible difference in the resulting SBCL. The most common kind of leak from the host lisp to the SBCL binary was the host’s value of most-positive-fixnum influencing the target, causing problems from documentation errors all the way up to type errors in the assembler. Those leaks were mostly plugged a while ago, though they do recur every so often; there are other problems, and over the last week I spent some time tracking down three of them.

The first: if you’ve ever done (apropos "PRINT") or something similar at the SBCL prompt, you might wonder at the existence of functions named something like SB-VM::|CACHED-FUN--PINSRB[(EXT-2BYTE-XMM-REG/MEM ((PREFIX (QUOTE (102))) (OP1 (QUOTE (58))) (OP2 (QUOTE (32))) (IMM NIL TYPE (QUOTE IMM-BYTE))) (QUOTE (NAME TAB REG , REG/MEM ...)))]-EXT-2BYTE-XMM-REG/MEM-PRINTER|.

What is going on there? Well, these functions are a part of the disassembler machinery; they are responsible for taking a certain amount of the machine code stream and generating a printed representation of the corresponding assembly: in this case, for the PINSRB instruction. Ah, but (in most instruction sets) related instructions share a fair amount of structure, and decoding and printing a PINSRD instruction is basically the same as for PINSRB, with just one #x20 changed to a #x22 – in both cases we want the name of the instruction, then a tab, then the destination register, a comma, the source, another comma, and the offset in the destination register. So SBCL arranges to reuse the PINSRB instruction printer for PINSRD; it maintains a cache of printer functions, looked up by printer specification, and reuses them when appropriate. So far, so normal; the ugly name above is the generated name for such a function, constructed by interning a printed, string representation of some useful information.

Hm, but wait. See those (QUOTE (58)) fragments inside the name? They result from printing the list (quote (58)). Is there a consensus on how to print that list? Note that *print-pretty* is bound to nil for this printing; prior experience has shown that there are strong divergences between implementations, as well as long-standing individual bugs, in pretty-printer support. So, what happens if I do (write-to-string '(quote foo) :pretty nil)?

  • SBCL: "(QUOTE FOO)", unconditionally
  • CCL: "'FOO" by default; "(QUOTE FOO)" if ccl:*print-abbreviate-quote* is set to nil
  • CLISP: "'FOO", unconditionally (I read the .d code with comments in half-German to establish this)

So, if SBCL was compiled using CLISP, the name of the same function in the final image would be SB-VM::|CACHED-FUN--PINSRB[(EXT-2BYTE-XMM-REG/MEM ((PREFIX '(102)) (OP1 '(58)) (OP2 '(32)) (IMM NIL TYPE 'IMM-BYTE)) '(NAME TAB REG , REG/MEM ...))]-EXT-2BYTE-XMM-REG/MEM-PRINTER|. Which is shorter, and maybe marginally easier to read, but importantly for my purposes is not bitwise-identical.

Thus, here we have a difference between host Common Lisp compilers which leaks over into the final image, and it must be eliminated. Fortunately, this was fairly straightforward to eliminate; those names are never in fact used to find the function object, so generating a unique name for functions based on a counter makes the generated object file bitwise identical, no matter how the implementation prints two-element lists beginning with quote.

The second host leak is also related to quote, and to our old friend backquote – though not related in any way to the new implementation. Consider this apparently innocuous fragment, which is a simplified version of some code to implement the :type option to defstruct:

  (macrolet ((def (name type n)
             `(progn
                (declaim (inline ,name (setf ,name)))
                (defun ,name (thing)
                  (declare (type simple-vector thing))
                  (the ,type (elt thing ,n)))
                (defun (setf ,name) (value thing)
                  (declare (type simple-vector thing))
                  (declare (type ,type value))
                  (setf (elt thing ,n) value)))))
  (def foo fixnum 0)
  (def bar string 1))

What’s the problem here? Well, the functions are declaimed to be inline, so SBCL records their source code. Their source code is generated by a macroexpander, and so is made up of conses that are generated programmatically (as opposed to freshly consed by the reader). That source code is then stored as a literal object in an object file, which means in practice that instructions for reconstructing a similar object are dumped, to be executed when the object file is processed by load.

Backquote is a reader macro that expands into code that, when evaluated, generates list structure with appropriate evaluation and splicing of unquoted fragments. What does this mean in practice? Well, one reasonable implementation of reading `(type ,type value) might be:

  (cons 'type (cons type '(value)))

and indeed you might (no guarantees) see something like that if you do

  (macroexpand '`(type ,type value))

in the implementation of your choice. Similarly, reading `(setf (elt thing ,n) value) will eventually generate code like

  (cons 'setf (cons (cons 'elt (list 'thing n)) '(value)))

Now, what is “similar”? In this context, it has a technical definition: it relates two objects in possibly-unrelated Lisp images, such that they can be considered to be equivalent despite the fact that they can’t be compared:

similar adj. (of two objects) defined to be equivalent under the similarity relationship.

similarity n. a two-place conceptual equivalence predicate, which is independent of the Lisp image so that two objects in different Lisp images can be understood to be equivalent under this predicate. See Section 3.2.4 (Literal Objects in Compiled Files).

Following that link, we discover that similarity for conses is defined in the obvious way:

Two conses, S and C, are similar if the car of S is similar to the car of C, and the cdr of S is similar to the cdr of C.

and also that implementations have some obligations:

Objects containing circular references can be externalizable objects. The file compiler is required to preserve eqlness of substructures within a file.

and some freedom:

With the exception of symbols and packages, any two literal objects in code being processed by the file compiler may be coalesced if and only if they are similar [...]

Put this all together, and what do we have? That def macro above generates code with similar literal objects: there are two instances of '(value) in it. A host compiler may, or may not, choose to coalesce those two literal '(value)s into a single literal object; if it does, the inline expansion of foo (and bar) will have a circular reference, which must be preserved, showing up as a difference in the object files produced during the SBCL build. The fix? It’s ugly, but portable: since we can’t stop an aggressive compiler from coalescing constants which are similar but not identical, we must make sure that any similar substructure is in fact identical:

  (macrolet ((def (name type n)
             (let ((value '(value)))
               `(progn
                  (declaim (inline ,name (setf ,name)))
                  (defun ,name (thing)
                    (declare (type simple-vector thing))
                    (the ,type (elt thing ,n)))
                  (defun (setf ,name) (value thing)
                    (declare (type simple-vector thing))
                    (declare (type ,type . ,value))
                    (setf (elt thing ,n) . ,value))))))
  (def foo fixnum 0)
  (def bar string 1))

Having dealt with a problem with quote, and a problem with backquote, what might the Universe serve up for my third problem? Naturally, it would be a problem with a code walker. This code walker is somewhat naïve, assuming as it does that its body is made up of forms or tags; it is the assemble macro, which is used implicitly in the definition of VOPs (reusable assembly units); for example, like

  (assemble ()
  (move ptr object)
  (zeroize count)
  (inst cmp ptr nil-value)
  (inst jmp :e DONE)
 LOOP
  (loadw ptr ptr cons-cdr-slot list-pointer-lowtag)
  (inst add count (fixnumize 1))
  (inst cmp ptr nil-value)
  (inst jmp :e DONE)
  (%test-lowtag ptr LOOP nil list-pointer-lowtag)
  (error-call vop 'object-not-list-error ptr)
 DONE)

which generates code to compute the length of a list. The expander for assemble scans its body for any atoms, and generates binding forms for those atoms to labels:

  (let ((new-labels (append labels
                          (set-difference visible-labels inherited-labels))))
  ...
  `(let (,@(mapcar (lambda (name) `(,name (gen-label))) new-labels))
     ...))

The problem with this, from a reproducibility point of view, is that set-difference (and the other set-related functions: union, intersection, set-exclusive-or and their n-destructive variants) do not return the sets with a specified order – which is fine when the objects are truly treated as sets, but in this case the LOOP and DONE label objects ended up in different stack locations depending on the order of their binding. Consequently the machine code for the function emitting code for computing a list’s length – though not the machine code emitted by that function – would vary depending on the host’s implementation of set-difference. The fix here was to sort the result of the set operations, knowing that all the labels would be symbols and that they could be treated as string designators.

And after all this, are we there yet? Not quite: there are three to four files (out of 330 or so) which are not bitwise-identical for differing host compilers. I hope to be able to rectify this situation in time for SBCL’s 15th birthday...

Syndicated 2014-10-14 06:51:19 from notes

14 Oct 2014 marnanel   » (Journeyer)

today's bit of sexist nonsense

Here's a conversation on Twitter between me and a man I don’t know in China. (FWIW I have a rather androgynous-looking user picture.)

He said, “Is it true that less than half of UK MPs voted for the resolution to recognise Palestine?”
I said, “Yes. But that’s irrelevant to the validity of the vote.”
He said, “Oh, I think it’s the most relevant thing in the world, sweetheart.”
I said, “I can only tell you what the standing orders of the House say. And I don’t appreciate being called ‘sweetheart’.”
He said, “sorry but when I hear a little dumb-dumb girl talking silly things I think of my 8 year old girls.”



This entry was originally posted at http://marnanel.dreamwidth.org/313996.html. Please comment there using OpenID.

Syndicated 2014-10-14 01:47:33 from Monument

13 Oct 2014 mikal   » (Journeyer)

One week of Nova Kilo specifications

It's been one week of specifications for Nova in Kilo. What are we seeing proposed so far? Here's a summary...

API



Administrative

  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705.


Containers Service



Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.


Hypervisor: Hyper-V

  • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190.


Hypervisor: VMWare

  • Add ephemeral disk support to the VMware driver: review 126527 (spec approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMWare image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (spec approved).
  • Support the OVA image format: review 127054.


Hypervisor: libvirt

  • Add a new linuxbridge VIF type, macvtap: review 117465.
  • Add support for SMBFS as a image storage backend: review 103203.
  • Convert to using built in libvirt disk copy mechanisms for cold migrations on non-shared storage: review 126979.
  • Support libvirt storage pools: review 126978.
  • Support quiesce filesystems during snapshot: review 126966.


Instance features

  • Allow direct access to LVM volumes if supported by Cinder: review 127318.


Internal

  • Move flavor data out of the system_metadata table in the SQL database: review 126620.


Internationalization



Scheduler

  • Add an IOPS weigher: review 127123 (spec approved).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530.
  • Create an object model to represent a request to boot an instance: review 127610.
  • Decouple services and compute nodes in the SQL database: review 126895.
  • Implement resource objects in the resource tracker: review 127609.
  • Move select_destinations() to using a request object: review 127612.


Scheduling

  • Add instance count on the hypervisor as a weight: review 127871.


Security

  • Provide a reference implementation for console proxies that uses TLS: review 126958.
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.


Tags for this post: openstack kilo blueprints spec
Related posts: Compute Kilo specs are open; Blueprints to land in Nova during Juno; On layers; My candidacy for Kilo Compute PTL; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy


Syndicated 2014-10-13 03:27:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

13 Oct 2014 mikal   » (Journeyer)

Compute Kilo specs are open

From my email last week on the topic:

I am pleased to announce that the specs process for nova in kilo is
now open. There are some tweaks to the previous process, so please
read this entire email before uploading your spec!

Blueprints approved in Juno
===========================

For specs approved in Juno, there is a fast track approval process for
Kilo. The steps to get your spec re-approved are:

 - Copy your spec from the specs/juno/approved directory to the
specs/kilo/approved directory. Note that if we declared your spec to
be a "partial" implementation in Juno, it might be in the implemented
directory. This was rare however.
 - Update the spec to match the new template
 - Commit, with the "Previously-approved: juno" commit message tag
 - Upload using git review as normal

Reviewers will still do a full review of the spec, we are not offering
a rubber stamp of previously approved specs. However, we are requiring
only one +2 to merge these previously approved specs, so the process
should be a lot faster.

A note for core reviewers here -- please include a short note on why
you're doing a single +2 approval on the spec so future generations
remember why.

Trivial blueprints
==================

We are not requiring specs for trivial blueprints in Kilo. Instead,
create a blueprint in Launchpad
at https://blueprints.launchpad.net/nova/+addspec and target the
specification to Kilo. New, targeted, unapproved specs will be
reviewed in weekly nova meetings. If it is agreed they are indeed
trivial in the meeting, they will be approved.

Other proposals
===============

For other proposals, the process is the same as Juno... Propose a spec
review against the specs/kilo/approved directory and we'll review it
from there.


After a week I'm seeing something interesting. In Juno the specs process was new, and we saw a pause in the development cycle while people actually wrote down their designs before sending the code. This time around people know what to expect, and there are leftover specs from Juno lying around. We're therefore seeing specs approved much faster than in Juno. This should reduce the effect of the "pipeline flush" that we saw in Juno.

So far we have five approved specs after only a week.

Tags for this post: openstack kilo blueprints spec
Related posts: Blueprints to land in Nova during Juno; On layers; My candidacy for Kilo Compute PTL; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler


Syndicated 2014-10-12 16:39:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

12 Oct 2014 bagder   » (Master)

What a removed search from Google looks like

Back in the days when I participated in the starting of the Subversion project, I found the mailing list archive we had really dysfunctional and hard to use, so I set up a separate archive for the benefit of everyone who wanted an alternative way to find Subversion related posts.

This archive is still alive and it recently surpassed 370,000 archived emails, all related to Subversion, for seven different mailing lists.

Today I received a notice from Google (shown in its entirety below) that one of the mails received in 2009 is now apparently removed from searches for a certain name – if done within the European Union at least. It is hard to take this seriously when you look at the page in question, and as there aren’t very many names involved on that page, the possibilities for which name it is aren’t that many. As there are several different mail archives for Subversion mails, I can only assume that the alternative search results have also been removed.

This is the first removal I’ve got for any of the sites and contents I host.


Notice of removal from Google Search

Hello,

Due to a request under data protection law in Europe, we are no longer able to show one or more pages from your site in our search results in response to some search queries for names or other personal identifiers. Only results on European versions of Google are affected. No action is required from you.

These pages have not been blocked entirely from our search results, and will continue to appear for queries other than those specified by individuals in the European data protection law requests we have honored. Unfortunately, due to individual privacy concerns, we are not able to disclose which queries have been affected.

Please note that in many cases, the affected queries do not relate to the name of any person mentioned prominently on the page. For example, in some cases, the name may appear only in a comment section.

If you believe Google should be aware of additional information regarding this content that might result in a reversal or other change to this removal action, you can use our form at https://www.google.com/webmasters/tools/eu-privacy-webmaster. Please note that we can’t guarantee responses to submissions to that form.

The following URLs have been affected by this action:

http://svn.haxx.se/users/archive-2009-08/0808.shtml

Regards,

The Google Team

Syndicated 2014-10-12 11:56:12 from daniel.haxx.se

10 Oct 2014 zeenix   » (Journeyer)

Life update

Like many others on planet.gnome, it seems I also don't feel like posting much on my blog any more since I post almost all major events of my life on social media (or SOME, as it's for some reason now known in Finland). To be honest, the thought usually doesn't even occur to me anymore. :( Well, anyway! Here is a brief summary of what's been up for the last many months:
  • Got divorced. Yeah, not nice at all but life goes on! At least I got to keep my lovely cat.

  • It's been almost a year (14 days short of it) since I moved to London. In a way it was good that I was in a new city at the time of the divorce, as it's an opportunity to start a new life. I made some cool new friends, mostly the GNOME gang here.

    London has its quirks, but overall I'm pretty happy to be living here. One big issue is that most of my friends are in Finland, so I miss them very much. Hopefully, in time I'll make a lot more friends in London, and my friends from Finland will visit me too.

    The best thing about London is the weather! No, I'm not joking at all. Not only is it a big improvement compared to Helsinki, the rumours that "it's always raining in London" are greatly (I can't stress this word enough) exaggerated.
  • I got my eyes Z-LASIK'ed so no more glasses!

  • Started taking:

    • Driving lessons. I failed the first driving test today. Now that I know what I did wrong, I'm sure I won't repeat the same mistakes next time and will pass.
    • Helicopter flying lessons. Yes! I'm not joking. I grew up watching Airwolf and ever since then I've been fascinated by helicopters and wanted to fly them, but never got around to doing it. It's very expensive, as you'd imagine, so I'm only taking two lessons a month. At this pace, I should have my PPL(H) by the end of 2015.

      Turns out that I'm very good at one thing that most people find very challenging to master: hovering. The rest isn't hard in practice either. Theory is the biggest challenge for me. Here is the video recording of the 15-minute trial lesson I started with.

Syndicated 2014-10-10 17:53:00 (Updated 2014-10-10 18:09:31) from zeenix

10 Oct 2014 dmarti   » (Master)

Susceptible to advertising?

Something I hear a lot in discussions of online ad blocking is something like:

Ad blocker users aren't susceptible to advertising anyway.

But advertising isn't a matter of susceptibility. It's not fly fishing. Advertising is based on an exchange of attention for signal. The audience pays attention, and the advertiser sends a signal of his or her intentions in the market and belief in product saleability.

Kevin Simler writes, We may not conform to a model of perfect economic behavior, but neither are we puppets at the mercy of every Tom, Dick, and Harry with a billboard. We aren't that easily manipulated.

Ad blocker users aren't the only ones who aren't "susceptible." Nobody is "susceptible." People pay attention to advertising more or less depending on how involved they are in that market, but it's a rational process.

If you go down the road of believing in "susceptible," then you get to the wrong answers. First, advertisers throw away their signaling ability by targeting users likely to click. Then users respond by blocking not just the targeted ads but by over-blocking the remaining signal-carrying ads.

Once you understand how advertising works (you did read that Kevin Simler essay?) you can get to the optimal blocking tool for yourself as a market participant: Privacy Badger, which blocks the ads that it's not rational to look at while letting non-targeted ads, with their signaling value, through.

More on this kind of thing: Targeted Advertising Considered Harmful

Syndicated 2014-10-10 15:43:07 from Don Marti

10 Oct 2014 bagder   » (Master)

internal timers and timeouts of libcurl

Bear with me. It is time to take a deep dive into the libcurl internals and see how it handles timeouts and timers. This is meant as useful information for libcurl users, but even more as insight for people who’d like to fiddle with libcurl internals and work on its source code and architecture.

socket activity or timeout

Everything internally in libcurl uses the multi, asynchronous, interface. We avoid blocking calls as far as we can. This means that libcurl always either waits for activity on a socket/file descriptor or for the time to come to do something. If there’s no socket activity and no timeout, there’s nothing to do and it simply returns.

It is important to remember here that the libcurl API doesn’t force the user to call it again within or at a specific time, and it also allows users to call it again “too soon” if they like. Some users will even busy-loop like crazy and keep hammering the API like a machine-gun, and we must deal with that. So, the timeouts are mostly to be considered advisory.
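
To make the advisory nature concrete, here is a minimal sketch of an application driving the public multi API (my illustration, not from the post; the URL is just a placeholder): it asks libcurl how long it may wait with curl_multi_timeout(), waits at most that long with curl_multi_wait(), then calls curl_multi_perform() again.

/* minimal multi-API loop: the returned timeout is a hint, not a contract */
#include <curl/curl.h>

int main(void)
{
  CURL *easy;
  CURLM *multi;
  int still_running = 1;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
  multi = curl_multi_init();
  curl_multi_add_handle(multi, easy);

  while(still_running) {
    long timeout_ms = -1;
    int numfds = 0;

    /* how long until the nearest internal timeout expires? */
    curl_multi_timeout(multi, &timeout_ms);
    if(timeout_ms < 0)
      timeout_ms = 1000; /* nothing pending, pick something sane */

    /* wait for socket activity or for the suggested time to pass */
    curl_multi_wait(multi, NULL, 0, (int)timeout_ms, &numfds);

    /* drive the transfers; calling "too early" is perfectly allowed */
    curl_multi_perform(multi, &still_running);
  }

  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}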

many timeouts

A single transfer can have multiple timeouts. For example, one maximum time for the entire transfer, one for the connection phase, and perhaps even more timers that handle things like speed caps (which keep libcurl from transferring data faster than a set limit) or detecting transfer speeds below a certain threshold within a given time period.

A single transfer is done with a single easy handle, which keeps all of its timeouts in a sorted list. That allows libcurl to return the time left until the nearest timeout expires without having to bother with the rest of the timeouts (yet).

Curl_expire()

… is the internal function to set a timeout to expire a certain number of milliseconds into the future. It adds a timeout entry to the list of timeouts. Expiring a timeout just means that it’ll signal the application to call libcurl again. Internally we don’t have any identifiers for the timeouts; they’re just times in the future at which we ask to be called again. If the code needs a specific time to really have passed before doing something, the code needs to make sure the time has elapsed.

Curl_expire_latest()

A newcomer to the timeout team. I figured out we need this function for the cases where we must be called again no later than a certain future time. It will not add a new timeout entry to the list if there is already a timeout that expires earlier than the specified time limit.

This function is useful, for example, when there is a state in libcurl that varies over time but has no specific time limit to check for (transfer speed limits and the like). If Curl_expire() were used in this situation instead of Curl_expire_latest(), it would mean adding a new timeout entry every time, and for the busy-loop API usage cases it could mean adding an excessive number of timeout entries. (A scary bug report of “tens of thousands of entries” is what motivated adding this function.)
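
As an illustration of the difference, here is a small self-contained sketch; the names and the data structure are hypothetical stand-ins, not libcurl’s internal code. expire() always inserts into the sorted list, while expire_latest() backs off when a sooner (or equally soon) wake-up is already pending.

#include <stddef.h>

#define MAX_TIMEOUTS 64

struct timeouts {
  long deadline_ms[MAX_TIMEOUTS]; /* kept sorted, earliest first */
  size_t count;
};

/* always add an entry, keeping the list sorted */
void expire(struct timeouts *t, long deadline_ms)
{
  size_t i;
  if(t->count == MAX_TIMEOUTS)
    return; /* a real implementation would grow the list */
  i = t->count;
  while(i > 0 && t->deadline_ms[i - 1] > deadline_ms) {
    t->deadline_ms[i] = t->deadline_ms[i - 1];
    i--;
  }
  t->deadline_ms[i] = deadline_ms;
  t->count++;
}

/* only add an entry if nothing expires at or before this deadline,
   so a busy-looping caller cannot pile up redundant entries */
void expire_latest(struct timeouts *t, long deadline_ms)
{
  if(t->count && t->deadline_ms[0] <= deadline_ms)
    return; /* an earlier wake-up is already scheduled */
  expire(t, deadline_ms);
}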

timeout removals

We don’t remove timeouts from the list until they expire. For example, if we have a condition that is timing dependent, we set a timeout with Curl_expire() and we know we should be called again at the end of that time.

If we didn’t add the timeout and there is no activity on the socket, then we might not be called again – ever.

When an internal state transitions into something else and we therefore don’t need a previously set timeout anymore, we have no handle or identifier for the timeout, so it cannot be removed. Instead we will get called again when the timeout triggers, even though we didn’t really need it any longer. As the API allows this anyway, the logic already handles it, and getting called an extra time is usually very cheap and not considered a problem worth addressing.

Timeouts are removed from the list of timers automatically when they expire: entries whose time has passed are dropped, and the timers that follow then move to the front of the queue and are used to calculate how long the next single timeout should be.

The only internal API we have for removing timeouts removes all of them at once, and is used when cleaning up a handle.

many easy handles

I’ve mentioned above how each easy handle treats its timeouts. With the multi interface, we can have any number of easy handles added to a single multi handle. This means one list of timeouts for each easy handle.

To handle many thousands of easy handles added to the same multi handle, each with its own timeout (as each easy handle only exposes its closest timeout), libcurl builds a splay tree of easy handles sorted on the timeout time. It is a splay tree rather than a sorted list to allow really fast insertions and removals.

As soon as a timeout expires for one of the easy handles and that handle moves on to the next timeout in its list, the handle is removed from the splay tree and inserted again with the new timeout timer.

Syndicated 2014-10-10 06:29:38 from daniel.haxx.se

9 Oct 2014 amits   » (Journeyer)

KVM Forum 2014 Schedule

The 2014 edition of KVM Forum is less than a week away.  The schedule of the talks is available at this location.  Use this link to add the schedule to your calendar.  A few slides have already been uploaded for some of the talks.

As with last year, we’ll live-stream and record all talks; keep an eye on the wiki page for details.

One notable observation about the schedule is that it's much more relaxed than in the last few years, and there are far fewer parallel talks this time around.  There's a lot of time for interaction / networking / socializing.  If you're in Dusseldorf next week, please come by and say ‘hello!’

Syndicated 2014-10-09 19:34:42 (Updated 2014-10-09 19:51:08) from Think. Debate. Innovate.

9 Oct 2014 bagder   » (Master)

Coverity scan defect density: 0.00

A couple of days ago I decided to stop slacking and grab this long dangling item in my TODO list: run the coverity scan on a recent curl build again.

Among the static analyzers, coverity does in fact stand out as the very best one I can use. We run clang-analyzer against curl every night and it hasn’t reported any problems at all in a while. This time I got almost 50 new issues reported by Coverity.

To put it briefly, a little less than half of them were issues present on purpose: for example, we got several reports about ignored return codes we really don’t care about, and several reports of dead code that is conditionally built for platforms other than the one I ran this on.

But there was also a whole range of legitimate issues. Nothing really major popped up, but plenty of tiny flaws that were good to polish away and smooth out. Clearly this is an exercise worth repeating every now and then.

End result

21 new curl commits that mention Coverity. Coverity now says “defect density: 0.00” for curl and libcurl since it doesn’t report any more flaws. (That’s the number of flaws found per thousand lines of source code.)

Want to see?

I can’t seem to make all the issues publicly accessible, but if you do want to check them out in person just click over to the curl project page at coverity and “request more access” and I’ll grant you view access, no questions asked.

Syndicated 2014-10-09 07:14:13 from daniel.haxx.se

8 Oct 2014 Stevey   » (Master)

Writing your own e-books is useful

Before our recent trip to Poland I took the time to create my own e-book, containing the names/addresses of people to whom we wanted to send postcards.

Authoring ebooks is simple, and this was a genuinely useful use of them. (Ordinarily I'd have my contacts on my phone, but I deliberately left it at home ..)

I did mean to copy and paste some notes from wikipedia about transport, tourist destinations, etc, into a brief guide. But I forgot.

In other news the toy virtual machine I hacked together got a decent series of updates, allowing you to embed it and add your own custom opcode(s) easily. That was neat, and fell out naturally from the switch to using function-pointers for the opcode implementation.
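
I haven't looked at the actual simple-vm source, but the function-pointer approach usually boils down to something like this rough C sketch (the names and layout are my guesses): each opcode byte indexes a table of handlers, so embedding the VM and registering a custom opcode is just a matter of filling in another slot.

#include <stdio.h>
#include <stddef.h>

struct vm; /* interpreter state, defined below */
typedef void (*opcode_fn)(struct vm *vm);

struct vm {
  const unsigned char *code; /* compiled bytecodes */
  size_t code_len;
  size_t ip;                 /* instruction pointer */
  opcode_fn handlers[256];   /* one slot per opcode byte */
  int running;
};

void op_halt(struct vm *vm)
{
  vm->running = 0;
}

void vm_run(struct vm *vm)
{
  vm->running = 1;
  while(vm->running && vm->ip < vm->code_len) {
    unsigned char op = vm->code[vm->ip++];
    if(vm->handlers[op])
      vm->handlers[op](vm);
    else {
      fprintf(stderr, "unknown opcode 0x%02x\n", op);
      vm->running = 0;
    }
  }
}

/* embedding: vm.handlers[0x00] = op_halt; vm.handlers[0x42] = my_opcode; vm_run(&vm); */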

Syndicated 2014-10-08 19:03:34 from Steve Kemp's Blog

8 Oct 2014 mikal   » (Journeyer)

Lock In




ISBN: 0765375869
LibraryThing
I know I like Scalzi stuff, but each series is so different that I like them all in different ways. I don't think he's written a murder mystery before, and this book was just as good as Old Man's War, which is a pretty high bar. This book revolves around a murder being investigated by someone who can only interact with the real world via personal androids. It's different from anything else I've seen, and a unique idea is pretty rare these days.

Highly recommended.

Tags for this post: book john_scalzi robot murder mystery
Related posts: Isaac Asimov's Robot Short Stories; Prelude To Foundation ; Isaac Asimov's Foundation Series; Caves of Steel; Robots and Empire ; A Talent for War



Syndicated 2014-10-08 02:43:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

7 Oct 2014 lucasr   » (Master)

Probing with Gradle

Up until now, Probe relied on dynamic view proxies generated at runtime to intercept View calls. Although very convenient, this approach greatly affects the time to inflate your layouts—which limits the number of use cases for the library, especially in more complex apps.

This is all changing now with Probe’s brand new Gradle plugin which seamlessly generates build-time proxies for your app. This means virtually no overhead at runtime!

Using Probe’s Gradle plugin is very simple. First, add the Gradle plugin as a dependency in your build script.

buildscript {
    repositories {
        mavenCentral()
    }

    dependencies {
        classpath 'com.android.tools.build:gradle:0.13.+'
        classpath 'org.lucasr.probe:gradle-plugin:0.1.2'
    }
}

Then add Probe’s library as a dependency in your app.

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.lucasr.probe:probe:0.1.2'
}

Next, apply the plugin to your app’s build.gradle.

apply plugin: 'probe'

Probe’s proxy generation is disabled by default and needs to be explicitly enabled on specific build variants (build type + product flavour). For example, this is how you enable Probe proxies in debug builds.

probe {
    buildVariants {
        debug {
            enabled = true
        }
    }
}

And that’s all! You should now be able to deploy interceptors on any part of your UI. Here’s how you could deploy an OvermeasureInterceptor in an activity.

public final class MainActivity extends Activity {
   @Override
   protected void onCreate(Bundle savedInstanceState) {
       Probe.deploy(this, new OvermeasureInterceptor());
       super.onCreate(savedInstanceState);
       setContentView(R.layout.main_activity);
   }
}

While working on this feature, I have changed DexMaker to be an optional dependency, i.e. you have to explicitly add DexMaker as a build dependency in your app in order to use it.

This is my first Gradle plugin. There’s definitely a lot of room for improvement here. These features are available in the 0.1.2 release in Maven Central.

As usual, feedback, bug reports, and fixes are very welcome. Enjoy!

Syndicated 2014-10-07 23:12:03 from Lucas Rocha

7 Oct 2014 Pizza   » (Master)

Further adventures with printers: The Citizen CW-01

A few days ago, someone with a Citizen CW-01 popped up on the Gutenprint mailing list. Due to its lineage, I'd assumed it (and its brethren, the OP900) was related to the newer CW and CY families, and would work with the DS40 backend once the USB PID was known.

It turns out that the printer operates at 334dpi natively, so some additional work was needed. I'm not sure how I'd missed that. So, after some decoding of the WinXP print jobs, I discovered the spool format is quite simple, and looks nothing like the newer CX/CY series.

So I asked the user to obtain some sniffs of the printer comms, and he delivered two dumps that looked quite similar to the CX/CY, differing only in a couple of parameters.

So, it was pretty easy to whip up a new backend. It's out for testing now, and with luck, in a few days I'll be able to declare the CW-01 as officially supported by Gutenprint, so it'll work under Linux.

It'll be a bit more work to figure out how much of the CX/CY's status/info command set works with the CW-01, and I suspect the 600dpi support needs some more work, but for now, it's out of my hands.

In other news, another Mitsubishi CP-D70DW user popped up, sent me some detailed sniffs, and let me remote into his system for some interactive debugging; many, many bugfixes to the backend later, and it seems to handle everything I know how to throw at it. With luck it'll also fix the CP-K60DW functionality as well.

Unfortunately, the CP-D70/D707/K60 employ a seriously screwy nonlinear tone curve/smoothing approach that I haven't been able to model, so Gutenprint's output is pretty lousy. Such is the fate of reverse-engineering efforts..

Syndicated 2014-10-07 03:13:17 from Solomon Peachy

6 Oct 2014 crhodes   » (Master)

interesting pretty-printer bug

One of SBCL’s Google Summer of Code students, Krzysztof Drewniak (no relation) just got to merge in his development efforts, giving SBCL a far more complete set of Unicode operations.

Given that this was the merge of three months’ out-of-tree work, it’s not entirely surprising that there were some hiccups, and indeed we spent some time diagnosing and fixing a 1000-fold slowdown in char-downcase. Touch wood, all seems mostly well, except that Jan Moringen reported that, when building without the :sb-unicode feature (and hence having a Lisp with 8-bit characters), one of the printer consistency tests was resulting in an error.

Tracking this down was fun; it in fact had nothing in particular to do with the commit that first showed the symptom, but had been lying latent for a while and had simply never shown up in automated testing. I’ve expressed my admiration for the Common Lisp standard before, and I’ll do it again: both as a user of the language and as an implementor, I think the Common Lisp standard is a well-executed document. But that doesn’t stop it from having problems, and this is a neat one:

When a line break is inserted by any type of conditional newline, any blanks that immediately precede the conditional newline are omitted from the output and indentation is introduced at the beginning of the next line.

(from pprint-newline)

For the graphic standard characters, the character itself is always used for printing in #\ notation---even if the character also has a name[5].

(from CLHS 22.1.3.2)

Space is defined to be graphic.

(from CLHS glossary entry for ‘graphic’)

What do these three requirements together imply? Imagine printing the list (#\a #\b #\c #\Space #\d #\e #\f) with a right-margin of 17:

  (write-to-string '(#\a #\b #\c #\Space #\d #\e #\f) :pretty t :right-margin 17)
; => "(#\\a #\\b #\\c #\\
; #\\d #\\e #\\f)"

The #\Space character is defined to be graphic; therefore, it must print as #\ rather than #\Space; if it happens to be printed just before a conditional newline (such as, for example, generated by using pprint-fill to print a list), the pretty-printer will helpfully remove the space character that has just been printed before inserting the newline. This means that a #\Space character, printed at or near the right margin, will be read back as a #\Newline character.

It’s interesting to see what other implementations do. CLISP 2.49 in its default mode always prints #\Space; in -ansi mode it prints #\ but preserves the space even before a conditional newline. CCL 1.10 similarly preserves the space; there’s an explicit check in output-line-and-setup-for-next for an “escaped” space (and a comment that acknowledges that this is a heuristic that can be wrong in the other direction). I’m not sure what the best fix for this is; it’s fairly clear that the requirements on the printer aren’t totally consistent. For SBCL, I have merged a one-line change that makes the printer print using character names even for graphic characters, if the *print-readably* printer control variable is true; it may not be ideal that print/read round-tripping was broken in the normal case, but in the case where it’s explicitly been asked for it is clearly wrong.

Syndicated 2014-10-06 20:48:14 from notes

6 Oct 2014 crhodes   » (Master)

settings for gnome-shell extension

A long, long time ago, I configured my window manager. What can I say? I was a student, with too much free time; obviously hours (no, days) spent learning some configuration file format and tweaking some aspect of behaviour would be repaid many times over the course of a working life. I suppose one thing it led to was my current career, so it’s probably not all a loss.

However, the direct productivity benefits almost certainly were a chimera; unfortunately, systems (hardware and software) changed too often for the productivity benefit (if any) to amortize the fixed set-up time, and so as a reaction I stopped configuring my window manager. In fact, I went the other way, becoming extremely conservative about upgrades of anything at all; I had made my peace with GNOME 2, accepting that there was maybe enough configurability and robustness in the 2.8 era or so for me not to have to change too much, and then not changing anything.

Along comes GNOME 3, and there are howls all over the Internet about the lack of friendly behaviour – “friendly” to people like me, who like lots of terminals and lots of editor buffers; there wasn’t much of an outcry from other people with more “normal” computing needs; who knows why? In any case, I stuck with GNOME 2 for a long time, eventually succumbing at the point of an inadvisable apt-get upgrade and not quite hitting the big red ABORT button in time.

So, GNOME 3. I found that I shared a certain amount of frustration with the vocal crowd: dynamic, vertically-arranged workspaces didn’t fit my model; I felt that clicking icons should generate new instances of applications rather than switch to existing instances, and so on. But, in the same timeframe, I adopted a more emacs-centric workflow, and the improvements of the emacs daemon meant that I was less dependent on particular behaviours of window manager and shell, so I gave it another try, and, with the right extensions, it stuck.

The right extensions? “What are those?” I hear you cry. Well, in common with illustrious Debian Project Leaders past, I found that a tiling extension made many of the old focus issues less pressing. My laptop is big enough, and I have enough (fixed) workspaces, that dividing up each screen between applications mostly works. I also have a bottom panel, customized to a height of 0 pixels, purely to give me the fixed number of workspaces; the overview shows them in a vertical arrangement, but the actual workspace arrangement is the 2x4 that I’m used to.

One issue with a tiled window arrangement, though, is an obvious cue to which window has focus. I have also removed all window decorations, so the titlebar or border don’t help with this; instead, a further extension to shade inactive windows helps to minimize visual distraction. And that’s where the technical part of this blog entry starts...

One of the things I do for my employer is deliver a module on Perception and Multimedia Computing. In the course of developing that module, I learnt a lot about how we see what we see, and also how digital displays work. And one of the things I learnt to be more conscious about was attention: in particular, how my attention can be drawn (for example, I cannot concentrate on anything when there are animated displays around, as are often present in semi-public spaces such as bars or London City airport.)

The shade inactive windows extension adds a brightness-reducing effect to windows without focus. So, that was definitely useful, but after a while I noticed that emacs windows with some text in error-face (bold, bright red) in them were still diverting my attention, even when they were unfocussed and therefore substantially dimmed.

So I worked on a patch to the extension to add a saturation-reducing effect in addition to reducing the brightness. And that was all very well – a classic example of taking code that almost does what you want it to do, and then maintenance-programming it into what you really want it to do – until I also found that the hard-wired time over which the effect took hold (300ms) was a bit too long for my taste, and I started investigating what it would take to make these things configurable.

Some time later, after exploring the world with my most finely-crafted google queries, I came to the conclusion that there was in fact no documentation for this at all. The tutorials that I found were clearly out-dated, and there were answers to questions on various forums whose applicability was unclear. This is an attempt to document the approach that worked for me; I make no claims that this is ‘good’ or even acceptable, but maybe there’s some chance that it will amortize the cost of the time I spent flailing about over other people wanting to customize their GNOME shell.

The first thing that something, anything with a preference needs, is a schema for that preference. In this instance, we’re starting with the shade-inactive-windows in the hepaajan.iki.fi namespace, so our schema will have a path that begins “/fi/iki/hepaajan/shade-inactive-windows”, and we're defining preferences, so let’s add “/preferences” to that.

  <?xml version="1.0" encoding="UTF-8"?>
<schemalist>
  <schema path="/fi/iki/hepaajan/shade-inactive-windows/preferences/"

a schema also needs an id, which should probably resemble the path

            id="fi.iki.hepaajan.shade-inactive-windows.preferences"

except that there are different conventions for hierarchy (. vs /).

            gettext-domain="gsettings-desktop-schemas">

and then here’s a cargo-culted gettext thing, which is probably relevant if the rest of the schema will ever be translated into any non-English language.

In this instance, I am interested in a preference that can be used to change the time over which the shading of inactive windows happens. It's probably easiest to define this as an integer (the "i" here; other GVariant types are possible):

      <key type="i" name="shade-time">

which corresponds to the number of milliseconds

        <summary>Time in milliseconds over which shading occurs</summary>
      <description>
        The time over which the shading effect is applied, in milliseconds.
      </description>

which we will constrain to be between 0 and 1000, so that the actual time is between 0s and 1s, with a default of 0.3s:

        <range min="0" max="1000"/>
      <default>300</default>

and then there's some XML noise

      </key>
  </schema>
</schemalist>

and that completes our schema. For reasons that will become obvious later, we need to store that schema in a directory data/glib-2.0/schemas relative to our base extension directory; giving it a name that corresponds to the schema id (so fi.iki.hepaajan.shade-inactive-windows.preferences.gschema.xml in this case) is wise, but probably not essential. In order for the schema to be useful later, we also need to compile it: that’s as simple as executing glib-compile-schemas . within the schemas directory, which should produce a gschemas.compiled file in the same directory.

Then, we also need to adapt the extension in question to lookup a preference value when needed, rather than hard-coding the default value. I have no mental model of the namespacing, or other aspects of the environment, applied to GNOME shell extensions’ javascript code, so being simultaneously conservative and a javascript novice I added a top-level variable unlikely to collide with anything:

  var ShadeInactiveWindowsSettings = {};
function init() {

The extension previously didn’t need to do anything on init(); now, however, we need to initialize the settings object, including looking up our schema to discover what settings there are. But where is our schema? Well, if we’re running this extension in-place, or as part of a user installation, we want to look in data/glib-2.0/schemas/ relative to our own path; if we have performed a global installation, the schema will presumably be in a path that is already searched for by the default schema finding methods. So...

      var schemaDir = ExtensionUtils.getCurrentExtension().dir.get_child('data').get_child('glib-2.0').get_child('schemas');
    var schemaSource = Gio.SettingsSchemaSource.get_default();

    if(schemaDir.query_exists(null)) {
        schemaSource = Gio.SettingsSchemaSource.new_from_directory(schemaDir.get_path(), schemaSource, false);
    }

... we distinguish between those two cases by checking to see if we can find a data/glib-2.0/schemas/ directory relative to the extension’s own directory; if we can, we prepend that directory to the schema source search path. Then, we lookup our schema using the id we gave it, and initialize a new imports.gi.Gio.Settings object with that schema.

      var schemaObj = schemaSource.lookup('fi.iki.hepaajan.shade-inactive-windows.preferences', true);
    if(!schemaObj) {
        throw new Error('failure to look up schema');
    }
    ShadeInactiveWindowsSettings = new Gio.Settings({ settings_schema: schemaObj });
}

Then, whenever we use the shade time in the extension, we must make sure to look it up afresh:

  var shade_time = ShadeInactiveWindowsSettings.get_int('shade-time') / 1000;

in order that any changes made by the user take effect immediately. And that's it. There's an additional minor wrinkle, in that altering that configuration variable is not totally straightforward; dconf and gsettings also need to be told where to look for their schema; that's done using the XDG_DATA_DIRS environment variable. For example, once the extension is installed locally, you should be able to run

  XDG_DATA_DIRS=$HOME/.local/gnome-shell/extensions/shade-inactive-windows@hepaajan.iki.fi/data:$XDG_DATA_DIRS dconf

and then navigate to the fi/iki/hepaajan/shade-inactive-windows/preferences schema and alter the shade-time preference entry.

Hooray! After doing all of that, we have wrestled things into being configurable: we can use the normal user preferences user interface to change the time over which the shading animation happens. I’m not going to confess how many times I had to restart my gnome session, insert logging code, look at log files that are only readable by root, and otherwise leave my environment; I will merely note that we are quite a long way away from the “scriptable user interface” – and that if you want to do something similar (not identical, but similar) in an all-emacs world, it might be as simple as evaluating these forms in your *scratch* buffer...

  (set-face-attribute 'default nil :background "#eeeeee")
(defvar my/current-buffer-background-overlay nil)

(defun my/lighten-current-buffer-background ()
  (unless (window-minibuffer-p (selected-window))
    (unless my/current-buffer-background-overlay
      (setq my/current-buffer-background-overlay (make-overlay 1 1))
      (overlay-put my/current-buffer-background-overlay
       'face '(:background "white")))
    (overlay-put my/current-buffer-background-overlay 'window
                 (selected-window))
    (move-overlay my/current-buffer-background-overlay
                  (point-min) (point-max))))
(defun my/unlighten-current-buffer-background ()
  (when my/current-buffer-background-overlay
    (delete-overlay my/current-buffer-background-overlay)))

(add-hook 'pre-command-hook #'my/unlighten-current-buffer-background) 
(add-hook 'post-command-hook #'my/lighten-current-buffer-background)

Syndicated 2014-10-06 19:55:13 from notes

6 Oct 2014 shlomif   » (Master)

Emma Watson’s Visit to Israel&Gaza ; “So, Who the Hell is Qoheleth?”

Here are the recent updates for Shlomi Fish’s Homepage.

  1. “Emma Watson’s Visit to Israel and Gaza” is a work-in-progress Real Person fiction screenplay which aims to bring Shalom to the turbulent Gaza Strip/Israel border:

    Waitress: I hope you’re having a good time, ah…

    EmWatson: Emma… Emma Watson!

    Waitress: Oh! I heard about you, naturally. Are you gonna threaten me with a wand? Heh!

    EmWatson: A wand… yes, the bane of my existence. I’m thinking of collecting money for a public campaign to convert the weapon most associated with me to something more menacing.

    Waitress: Don’t you have enough money for that?

    EmWatson: No, not enough! Heh. And money isn’t everything.

    Waitress: So you’re not playing in films for money?

    EmWatson: Playing in films for money? Of course not! What a preposterous idea.

    Waitress: Ah, nice.

    EmWatson: I’m playing in films for a shitload of money!

  2. “So, who the Hell is Qoheleth?” is a new illustrated screenplay that tells what I imagine happened to the author of the Biblical book of Ecclesiastes / Qoheleth shortly after he wrote it. The timing is appropriate because Ecclesiastes is read during the upcoming Jewish holiday of Sukkot.

    Josephus: Anyway, can you share some details about your trip? I never ventured a long way past Damascus.

    Athena: Sure! It was very interesting. Most interesting.

    Athena: We travelled with our own people and some Greek merchants, all the way to Athens, and there we hitchhiked a ride with some Assyrian merchants, hoping it will get us closer to Alexandria. There were some guards escorting us, and at one point they disarmed us and threatened us at sword’s point to have sex with them or else they'll kill us and take all our possessions.

    Josephus: Wow! Rape. So what did you do?

    Athena: Well, we consulted between ourselves and after a long while of being really scared, we calmed down a little, and decided that if we are forced to have sex, we might as well cooperate and try to enjoy it. So we told them that we’ll do it willingly and they agreed.

    Josephus: How clever of you! And then what happened.

    Athena: Well, the three of us and her lover each found their own part of the woods, and we had sex. Then, after one or two times, the three men all lost stamina, while we were not completely satisfied and cried for more!

    [ Josephus laughs. ]

    Alexis: Yes! Then we heard each other’s cries and we gathered at one place together still naked with our clothes as cover, and we bitched about the whole situation - in Greek - and the men stood there ashamed.

    Athena: Yes! Anyway, we continued as couples throughout the trip and the men got better in love making as time went by, and they also taught us a little Aramaic. Then we arrived at the junction - they wanted to go to Assyria, and we wanted to head more south, and then all the 6 of us were completely emotional and offered each other to escort them on the way, so we won’t part, but we eventually cared enough about the others to let them go on their own way.

    Josephus: Wow! That sounds like love.

    Athena: Love! Yes! That’s the word. Eros in action.

  3. A new essay, A #SummerNSA’s Reading, has been added, summarising the concentrated “#SummerNSA” / Summerschool at the NSA effort during the summer of 2014.

  4. There are new factoids in the Facts Collection:

    “Talk Like a Pirate Day” is the only day of the year when Chuck Norris only talks like a pirate, and does not actually act like one.

    On Yom Kippur (= the Jewish Day of Atonement), Chuck Norris forgives God for his sins.

    Chuck Norris once refactored a 10 million lines C++ program and was done by lunch time. It then took Summer Glau 5 minutes to write the equivalent Perl 10-liner.

  5. There are some new captioned images and aphorisms:

    Every Mighty Klingon warrior has watched Sesame Street!

  6. The screenplay Buffy: A Few Good Slayers has some new scenes:

    [ Faith is teaching Becky and the rest of the class how to throw knives. ]

    Faith: Becky, it’s nice that you hit the mark three times in succession, but you’re not always holding the knife correctly.

    Becky: OK, Ms. Harris. Can you show me how to do that again? [She prepares her phone.]

    Faith: OK, here goes.

    [ Cut to the bullseye - three knives hit it quickly. ]

    Faith: How’s that?

    Becky: That’s very nice, but as my mobile’s video demonstrates, you didn’t hold the knife “correctly” (in quotes) once.

    Faith: Let me see. [She watches the video.] Oh crap.

    Faith: Becky, Becky… you have a lot of potential. You’re more than a pretty face.

    Becky: Heh, I knew that I have potential, but do you really think I have a pretty face?

    Faith: If my opinion as a straight, married, woman, matters, I think you do.

    Becky: Thanks, Ms. Harris.

    Faith: OK, class dismissed. Please try to practise at your free time, we’re going to have a test soon.

    [ The students rise up and leave. ]

Syndicated 2014-10-05 12:35:17 from shlomif

5 Oct 2014 titus   » (Journeyer)

Putting together an online presence for a diffuse academic community - how?

I would like to build a community site. Or, more precisely, I would like to recognize, collect, and collate information from an already existing but rather diffuse community.

The focus of the community will be academic data science, or "data driven discovery". This is spurred largely by the recent selection of the Moore Data Driven Discovery Investigators, as well as the earlier Moore and Sloan Data Science Environments, and more broadly by the recognition that academia is broken when it comes to data science.

So, where to start?

For a variety of reasons -- including the main practical one, that most academics are not terribly social media integrated and we don't want to try to force them to learn new habits -- I am focusing on aggregating blog posts and Twitter.

So, the main question is... how can we most easily collect and broadcast blog posts and articles via a Web site? And how well can we integrate with Twitter?

First steps and initial thoughts

Following Noam Ross's suggestions in the above storify, I put together a WordPress blog that uses the RSS Multi Importer to aggregate RSS feeds as blog posts (hosted on NFSN). I'd like to set this up for the DDD Investigators who have blogs; those who don't can be given accounts if they want to post something. This site also uses a Twitter feed plugin to pull in tweets from the list of DDD Investigators.

The resulting RSS feed from the DDDI can be pulled into a River of News site that aggregates a much larger group of feeds.

The WordPress setup was fairly easy and I'm going to see how stable it is (I assume it will be very stable, but shrug time will tell :). I'm upgrading my own hosting setup and once that's done, I'll try out River4.

Next steps and longer-term thoughts

Ultimately a data-driven-discovery site that has a bit more information would be nice; I could set up a mostly static site, post it on github, authorize a few people to merge, and then solicit pull requests when people want to add their info or feeds.

One thing to make sure we do is track only a portion of feeds for prolific bloggers, so that I, for example, have to tag a post specifically with 'ddd' to make it show up on the group site. This will avoid post overload.

I'd particularly like to get a posting set up that integrates well with how I consume content. In particular, I read a lot of things via my phone and tablet, and the ability to post directly from there -- probably via e-mail? -- would be really handy. Right now I mainly post to Twitter (and largely by RTing) which is too ephemeral, or I post to Facebook, which is a different audience. (Is there a good e-mail-to-RSS feed? Or should I just do this as a WordPress blog with the postie plug-in?)

The same overall setup could potentially work for a Software Carpentry Instructor community site, a Data Carpentry Instructor community site, trainee info sites for SWC/DC trainees, and maybe also a bioinformatics trainee info site. But I'd like to avoid anything that involves a lot of administration.

Things I want to avoid

Public forums.

Private forums that I have to administer or that aren't integrated with my e-mail (which is where I get most notifications, in the end).

Overly centralized solutions; I'm much more comfortable with light moderation ("what feeds do I track?") than anything else.


Thoughts?

--titus

Syndicated 2014-10-04 22:00:00 from Living in an Ivory Basement

5 Oct 2014 Stevey   » (Master)

Before I forget, a simple virtual machine

Before I forget, I had meant to write about a toy virtual machine which I've been playing with.

It is register-based with ten registers, each of which can hold either a string or int, and there are enough instructions to make it fun to use.

I didn't go overboard and write a complete grammar, or a real compiler, but I did do enough that you can compile and execute obvious programs.

First compile from the source to the bytecodes:

$ ./compiler examples/loop.in

Mmm bytecodes are fun:

$ xxd  ./examples/loop.raw
0000000: 3001 1943 6f75 6e74 696e 6720 6672 6f6d  0..Counting from
0000010: 2074 656e 2074 6f20 7a65 726f 3101 0101   ten to zero1...
0000020: 0a00 0102 0100 2201 0102 0201 1226 0030  ......"......&.0
0000030: 0104 446f 6e65 3101 00                   ..Done1..

Now the compiled program can be executed:

$ ./simple-vm ./examples/loop.raw
[stdout] register R01 = Counting from ten to zero
[stdout] register R01 = 9 [Hex:0009]
[stdout] register R01 = 8 [Hex:0008]
[stdout] register R01 = 7 [Hex:0007]
[stdout] register R01 = 6 [Hex:0006]
[stdout] register R01 = 5 [Hex:0005]
[stdout] register R01 = 4 [Hex:0004]
[stdout] register R01 = 3 [Hex:0003]
[stdout] register R01 = 2 [Hex:0002]
[stdout] register R01 = 1 [Hex:0001]
[stdout] register R01 = 0 [Hex:0000]
[stdout] register R01 = Done

There could be more operations added, but I'm pleased with the general behaviour, and embedding is trivial. The only two things that make this even remotely interesting are:

  • Most toy virtual machines don't cope with labels and jumps. This does.
    • Even though it was a real pain to go patching up the offsets.
    • Having labels be callable before they're defined is pretty mandatory in practice.
  • Most toy virtual machines don't allow integers and strings to be stored in registers.
    • Now I've done that I'm not 100% sure it's a good idea. (One way such registers might be modelled is sketched below.)
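
Here is a rough guess (not the actual simple-vm code) at how a register holding either a string or an int might be modelled in C: a tagged union, where the fiddly part is remembering to free the old string before overwriting the register.

#include <stdlib.h>
#include <string.h>

enum reg_type { REG_EMPTY, REG_INT, REG_STRING };

struct reg {
  enum reg_type type;
  union {
    int i;
    char *s; /* heap-allocated copy, owned by the register */
  } u;
};

/* drop whatever the register currently holds */
void reg_clear(struct reg *r)
{
  if(r->type == REG_STRING)
    free(r->u.s);
  r->type = REG_EMPTY;
}

void reg_set_int(struct reg *r, int value)
{
  reg_clear(r);
  r->type = REG_INT;
  r->u.i = value;
}

void reg_set_string(struct reg *r, const char *value)
{
  reg_clear(r);
  r->type = REG_STRING;
  r->u.s = strdup(value);
}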

Anyway, that concludes today's computer-fun.

Syndicated 2014-10-05 08:34:30 from Steve Kemp's Blog

5 Oct 2014 amits   » (Journeyer)

OpenStack Pune Meetup

I participated in the OpenStack Meetup at the Red Hat Pune office a few weekends ago.  I have been too caught up in the lower-level KVM/QEMU layers of the virt stack, and know there aren't too many people involved in those layers in Pune (or even India); I was curious to learn more about OpenStack and also to find out more about the OpenStack community in Pune.  The event was on a Saturday, which meant sacrificing one day of rest and relaxation – but I went along because curiosity got the better of me.

This was a small, informal event where we had a few talks and several hallway discussions.  Praveen has already blogged about his experiences, here are my notes about the meetup.

There were a few scheduled talks for the day; speakers nominated themselves on the meetup page and the event organizers allotted slots for them.  The proceedings started off with configuring and setting up OpenStack via DevStack.  I wished, for the sake of the audience present, that there had been an introductory talk before the deep-dive into DevStack.  I could spot a few newbies in the crowd, and they would have benefitted from an intro.

In a few discussions with the organizers, I learnt one of their pain points for such meetups: there inevitably are newbies at each meetup, and they can't move on to advanced topics because they have to start from scratch every time.  I suggested they have a clear focus for each meetup: state explicitly what each meetup is about, and the expertise level that's going to be assumed.  For example, there's nothing wrong with a newbie-focused event; but some other event could focus on the networking part of OpenStack and assume people are familiar with configuring and deploying OpenStack and with basic networking principles.  This suggestion is based on the Pune FADs we want to conduct and have in the pipeline, and it was welcomed by the organizers.

Other talks followed; and I noticed a trend: not many people understood, or even knew about, the lower layers that make up the infrastructure beneath OpenStack.  I asked the organizers if they could spare 10 mins for me to provide a peek into the lower levels, and they agreed.  Right after a short working-lunch break, I took the stage.

I spoke about Linux, KVM and QEMU; dove into details of how each of them co-operate and how libvirt drives the interactions between the upper layers and the lower layers.  I also spoke a little about the alternative hypervisor support that libvirt has, and about the advantages the default hypervisor, QEMU/KVM, has over the others.  I then spoke about how improvements in Linux in general (e.g. the memory management layer) benefit the thousands of people running Linux and the thousands of people running the KVM hypervisor, and in effect benefit all the OpenStack deployments.  I then mentioned a bit about how features flow from upstream into distributions, and how all the advantages trickle down naturally, without anyone having to bother about particular parts of the infrastructure.

The short talk was well received, and judging by the questions I got asked, it was apparent that some people didn’t know the dynamics involved, and the way I presented it was very helpful to them and they wanted to learn more.  I also got asked a few hypervisor comparison questions.  I had to cut the interaction because I easily overflowed the 15 mins allotted to me, and asked people to follow up with me later, which several did.

One of the results of all those conversations was that I got volunteered to do more in-depth talks on the topic at future meetups.  The organizers lamented there’s a dearth of such talks and subject-matter experts; and many meetups generally end up being just talks from people who have read or heard about things rather than real users or implementers of the technology.  They said they would like to have more people from Red Hat talking about the work we do upstream and all the contributions we make.  I’m just glad our contributions are noticed :-)

Another related topic that came up during discussions with the organizers are hackathons, and getting people to contribute and actually do stuff.  I expect a hackathon to be proposed soon.

I had a very interesting conversation with Sajid, one of the organizers.  He mentioned Reliance Jio are setting up data centres across India, and are going to launch cloud computing services with their 4G rollout.  Their entire infrastructure is based on OpenStack.

There were other conversations as well, but I’ll perhaps talk about them in other posts.

Internally at Red Hat, we had a few discussions on how to improve our organization of such events (even though they're community events, we should be geared up to serve the attendees better).  This mostly included stuff around making it easier to get people in (i.e. working with security), getting the AV equipment in place, etc.  All of this worked fine during this event, but the idea is to make sure the things that did go right stay on the list of things to look at while organizing events, so we don't slip up.

Syndicated 2014-10-05 07:09:10 from Think. Debate. Innovate.

4 Oct 2014 marnanel   » (Journeyer)

multipart

Today I received an email from someone who said they'd attached a file I needed, but I couldn't see the attachment. After some digging, I found that the message was structured like this:

multipart/alternative: (i.e. "these are alternative versions of the same thing")
-- text/plain (a version of the message in plain text)
-- multipart/related: (i.e. "these parts belong together")
-- -- text/html (a version of the message in HTML)
-- -- the attachment

So if your email program shows HTML for preference, you would see the attachment, but if it shows plain text for preference (as mine does), you wouldn't. Of course it *should* have been structured like this:

multipart/related: (i.e. "these parts belong together")
-- multipart/alternative: (i.e. "these are alternative versions of the same thing")
-- -- text/plain (a version of the message in plain text)
-- -- text/html (a version of the message in HTML)
-- the attachment

This entry was originally posted at http://marnanel.dreamwidth.org/313678.html. Please comment there using OpenID.

Syndicated 2014-10-04 21:39:27 (Updated 2014-10-04 22:02:50) from Monument

4 Oct 2014 benad   » (Apprentice)

Eventual Consistency, Squared

Looking back at my article "The Syncing Problem", implementing a generic DVCS seems like a relatively straightforward solution. Actually, if the "data to sync" was simplified to plain text, existing DVCS like git or Mercurial may be sufficient. But there is a fundamental problem I glossed over that has huge ramifications on the design of the DVCS that make existing DVCS implementations dangerous to use.

In modern "Internet-connected" appliances, there are two storage solutions: on-device, and "in the cloud". It is the "cloud" storage that is going to be used for the devices to communicate with each other indirectly when performing data synchronization. There is though a huge behavioural difference between on-device and cloud storage: the cloud storage is "eventually consistent". Beneath its API, the cloud storage itself may also be distributed across machines, and modified data can take a little while to propagate to other machines. Essentially, if you upload a file from one device, it may take a little while for another device to see the change.

Sadly, whatever conflict resolution a cloud storage provider uses is unreliable, because its behaviour is either undocumented or inconsistent. Locking files on such storage may not be possible either. Worse, internal synchronization issues at the cloud provider may make its internal synchronization speed so inconsistent as to make it unreliable as a means to communicate information between devices quickly. Almost all VCS (distributed or not) assume a reliable storage area for the version repository. Hosted VCS guarantee ACID. No DVCS was made to push revision information to an unreliable storage and use that as the primary means to exchange information.

The easiest solution for this is to design a DVCS that supports "write-only" repositories. If the storage key (file name) contains the checksum of the data it holds, it may be possible to have multiple clients writing changes to the shared repository in the same storage area. Even if listing the available "data blocks" is inconsistent on the shared storage, all it can do is augment the "knowledge" of what exists in the repository compared to the repository stored on local storage. The atomicity of the storage blocks should be as close as possible to the atomicity of a transactional version control delta, especially since the cloud storage may make information appear on other devices out of the order in which it was written. That could make those "patch files" larger than a well-optimized VCS would produce, but on cloud storage we may not have any other option.
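
As a minimal sketch of that idea (my own illustration using OpenSSL's SHA256, not code from any particular DVCS), the name of each uploaded block is derived from its own contents, so concurrent writers on an eventually-consistent store can only ever add data, never overwrite each other:

/* build with: cc block_key.c -lcrypto */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

/* derive a hex storage key from the block's own bytes */
void block_key(const unsigned char *data, size_t len,
               char out[2 * SHA256_DIGEST_LENGTH + 1])
{
  unsigned char digest[SHA256_DIGEST_LENGTH];
  size_t i;

  SHA256(data, len, digest);
  for(i = 0; i < SHA256_DIGEST_LENGTH; i++)
    sprintf(out + 2 * i, "%02x", digest[i]);
}

int main(void)
{
  const char *patch = "example patch contents";
  char key[2 * SHA256_DIGEST_LENGTH + 1];

  block_key((const unsigned char *)patch, strlen(patch), key);
  /* use the key as the object name in the shared cloud storage area */
  printf("upload block as: %s\n", key);
  return 0;
}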

Sure, a write-only repository may be a big issue if the files under version control are too large or if storage is limited, but then most VCS tend to avoid deleting historical data anyway, and when they support "cleaning up a repository", the solutions are clumsy and error-prone. In our case, if a device prematurely deletes older historical data in the shared storage, unaware that other devices were synced at older versions and may branch from there, this would be tantamount to using the shared storage to host only the latest version and nothing else. All this to say, deleting historical data in a shared, eventually-consistent storage is a difficult problem that may involve a lot of tuning based on how long devices can stay unsynchronized before considering them "lost", compared to how fast the cloud storage is expected to become consistent.

Syndicated 2014-10-04 15:05:19 from Benad's Blog

4 Oct 2014 Stevey   » (Master)

Kraków was nice

We returned safely from Kraków, despite a somewhat turbulent flight home.

There were many pictures taken, but thus far I've only posted a random night-time shot. Perhaps more will appear in the future.

In other news I've just made a new release of the chronicle blog compiler, so 5.0.7 should shortly appear on CPAN.

The release contains a bunch of minor fixes, and some new facilities relating to templates.

It seems likely that in the future there will be the ability to create "static pages" along with the blog-entries, tag-clouds & etc. The suggestion was raised on the github issue tracker and as a proof of concept I hacked up a solution which works entirely via the chronicle plugin-system, proving that the new development work wasn't a waste of time - especially when combined with the significant speedups in the new codebase.

(ObRandom: Mailed the Debian package maintainer to see if there was interest in changing. Also mailed a couple of people I know who are using the old code to see if they had comments on the new code, or had any compatibility issues. No replies from either, yet. *shrugs*)

Syndicated 2014-10-04 12:20:45 from Steve Kemp's Blog
