Recent blog entries

28 Oct 2016 mjg59   » (Master)

Of course smart homes are targets for hackers

The Wirecutter, an in-depth comparative review site for various electrical and electronic devices, just published an opinion piece on whether users should be worried about security issues in IoT devices. The summary: avoid devices that don't require passwords (or don't force you to change a default one), avoid devices that want you to disable security, follow general network security best practices, but otherwise don't worry - criminals aren't likely to target you.

This is terrible, irresponsible advice. It's true that most users aren't likely to be individually targeted by random criminals, but that's a poor threat model. As I've mentioned before, you need to worry about people with an interest in you. Making purchasing decisions based on the assumption that you'll never end up dating someone with enough knowledge to compromise a cheap IoT device (or even meeting an especially creepy one in a bar) is not safe, and giving advice that doesn't take that into account is a huge disservice to many potentially vulnerable users.

Of course, there's also the larger question raised by last week's problems. Insecure IoT devices still pose a threat to the wider internet, even if the owner's data isn't at risk. I may not be optimistic about the ease of fixing this problem, but that doesn't mean we should just give up. It is important that we improve the security of devices, and many vendors are just bad at that.

So, here's a few things that should be a minimum when considering an IoT device:
  • Does the vendor publish a security contact? (If not, they don't care about security)
  • Does the vendor provide frequent software updates, even for devices that are several years old? (If not, they don't care about security)
  • Has the vendor ever denied a security issue that turned out to be real? (If so, they care more about PR than security)
  • Is the vendor able to provide the source code to any open source components they use? (If not, they don't know which software is in their own product and so don't care about security, and also they're probably infringing my copyright)
  • Do they mark updates as fixing security bugs? (If not, they care more about hiding security issues than fixing them)
  • Has the vendor ever threatened to prosecute a security researcher? (If so, again, they care more about PR than security)
  • Does the vendor provide a public minimum support period for the device? (If not, they don't care about security or their users)

    I've worked with big name vendors who did a brilliant job here. I've also worked with big name vendors who responded with hostility when I pointed out that they were selling a device with arbitrary remote code execution. Going with brand names is probably a good proxy for many of these requirements, but it's insufficient.

    So here's my recommendations to The Wirecutter - talk to a wide range of security experts about the issues that users should be concerned about, and figure out how to test these things yourself. Don't just ask vendors whether they care about security, ask them what their processes and procedures look like. Look at their history. And don't assume that just because nobody's interested in you, everybody else's level of risk is equal.


    Syndicated 2016-10-28 17:23:34 from Matthew Garrett

    27 Oct 2016 caolan   » (Master)

    Deckard and LibreOffice

    LibreOffice reuses the same .ui format that gtk uses, which suggests that deckard could be used to preview translations of our dialogs.

    Testing this out shows (as above) that it can be made to work. A few problems though:

    1. We have various placeholder widgets which don't work in deckard, because the widgets don't exist in gtk, so dialogs that use them can't be displayed: something falls over with e.g. "Invalid object type 'SvSimpleTableContainer'". I had hoped I'd get placeholders by default on failure.
    2. Our .po translation entries for the dialog strings all have autogenerated msgctxt fields which don't correspond to the blank default of the .ui files, so the msgctxt fields have to be removed, the result run through msguniq to remove duplicates, and then through msgfmt to create a .mo that works with deckard to show web previews (a rough sketch of the pipeline follows below).
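
    For illustration only, here is one way to run that pipeline in shell; the file names are invented, and the sed step assumes each autogenerated msgctxt entry fits on a single line:

    # 1. strip the autogenerated msgctxt lines from the .po
    $ sed '/^msgctxt /d' dialogs.po > dialogs-nocontext.po
    # 2. collapse the duplicate entries left behind
    $ msguniq dialogs-nocontext.po -o dialogs-unique.po
    # 3. compile a binary .mo that deckard can load for web previews
    $ msgfmt dialogs-unique.po -o dialogs.mo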

    Syndicated 2016-10-27 12:44:00 (Updated 2016-10-27 12:44:03) from Caolán McNamara

    27 Oct 2016 glyph   » (Master)

    What Am Container

    Perhaps you are a software developer.

    Perhaps, as a developer, you have recently become familiar with the term "containers".

    Perhaps you have heard containers described as something like "LXC, but better", "an application-level interface to cgroups" or "like virtual machines, but lightweight", or perhaps (even less usefully), a function call. You've probably heard of "docker"; do you wonder whether a container is the same as, different from, or part of Docker?

    Are you bewildered by the blisteringly fast-paced world of "containers"? Maybe you have no trouble understanding what they are - in fact you might be familiar with half a dozen orchestration systems and container runtimes already - but frustrated because this seems like a whole lot of work and you just don't see what the point of it all is?

    If so, this article is for you.

    I'd like to lay out what exactly the point of "containers" is, why people are so excited about them, and what makes the ecosystem around them so confusing. Unlike my previous writing on the topic, I'm not going to assume you know anything about the ecosystem in general; just that you have a basic understanding of how UNIX-like operating systems separate processes, files, and networks.1

    At the dawn of time, a computer was a single-tasking machine. Somehow, you'd load your program into main memory, and then you'd turn it on; it would run the program, and (if you're lucky) spit out some output onto paper tape.

    When a program running on such a computer looked around itself, it could "see" the core memory of the computer it was running on, and any attached devices, including consoles, printers, teletypes, or (later) networking equipment. This was of course very powerful - the program had full control of everything attached to the computer - but also somewhat limiting.

    This mode of addressing hardware was limiting because it meant that programs would break the instant you moved them to a new computer. They had to be re-written to accommodate new amounts and types of memory, new sizes and brands of storage, new types of networks. If the program had to contain within itself the full knowledge of every piece of hardware that it might ever interact with, it would be very expensive indeed.

    Also, if all the resources of a computer were dedicated to one program, then you couldn't run a second program without stomping all over the first one - crashing it by mangling its structures in memory, deleting its data by overwriting its data on disk.

    So, programmers cleverly devised a way of indirecting, or "virtualizing", access to hardware resources. Instead of a program simply addressing all the memory in the whole computer, it got its own little space where it could address its own memory - an address space, if you will. If a program wanted more memory, it would ask a supervising program - what we today call a "kernel" - to give it some more memory. This made programs much simpler: instead of memorizing the address offsets where a particular machine kept its memory, a program would simply begin by saying "hey operating system, give me some memory", and then it would access the memory in its own little virtual area.

    In other words: memory allocation is just virtual RAM.

    Virtualizing memory - i.e. ephemeral storage - wasn't enough; in order to save and transfer data, programs also had to virtualize disk - i.e. persistent storage. Whereas a whole-computer program would just seek to position 0 on the disk and start writing data to it however it pleased, a program writing to a virtualized disk - or, as we might call it today, a "file" - first needed to request a file from the operating system.

    In other words: file systems are just virtual disks.

    Networking was treated in a similar way. Rather than addressing the entire network connection at once, each program could allocate a little slice of the network - a "port". That way a program could, instead of consuming all network traffic destined for the entire machine, ask the operating system to just deliver it all the traffic for, say, port number seven.

    In other words: listening ports are just virtual network cards.

    Getting bored by all this obvious stuff yet? Good. One of the things that frustrates me the most about containers is that they are an incredibly obvious idea that is just a logical continuation of a trend that all programmers are intimately familiar with.

    All of these different virtual resources exist for the same reason: as I said earlier, if two programs need the same resource to function properly, and they both try to use it without coordinating, they'll both break horribly.2

    UNIX-like operating systems more or less virtualize RAM correctly. When one program grabs some RAM, nobody else - modulo super-powered administrative debugging tools - gets to use it without talking to that program. It's extremely clear which memory belongs to which process. If programs want to use shared memory, there is a very specific, opt-in protocol for doing so; it is basically impossible for it to happen by accident.

    However, the abstractions we use for disks (filesystems) and network cards (listening ports and addresses) are significantly more limited. Every program on the computer sees the same file-system. The program itself, and the data the program stores, both live on the same file-system. Every program on the computer can see the same network information, can query everything about it, and can receive arbitrary connections. Permissions can remove certain parts of the filesystem from view (i.e. programs can opt-out) but it is far less clear which program "owns" certain parts of the filesystem; access must be carefully controlled, and sometimes mediated by administrators.

    In particular, the way that UNIX manages filesystems creates an environment where "installing" a program requires manipulating state in the same place (the filesystem) where other programs might require different state. Popular package managers on UNIX-like systems (APT, RPM, and so on) rarely have a way to separate program installation even by convention, let alone by strict enforcement. If you want to do that, you have to re-compile the software with ./configure --prefix to hard-code a new location. And, fundamentally, this is why the package managers don't support installing to a different place: if the program can tell the difference between different installation locations, then it will, because its developers thought it should go in one place on the file system, and why not hard code it? It works on their machine.
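
    For instance, relocating a typical autotools-based package means rebuilding it; the prefix below is purely illustrative:

    $ ./configure --prefix=/opt/example-app
    $ make
    $ make install    # everything now lands under /opt/example-app instead of /usr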

    In order to address this shortcoming of the UNIX process model, the concept of "virtualization" became popular. The idea of virtualization is simple: you write a program which emulates an entire computer, with its own storage media and network devices, and then you install an operating system on it. This completely resolves the over-sharing of resources: a process inside a virtual machine is in a very real sense running on a different computer than programs running on a different virtual machine on the same physical device.

    However, virtualization is also an extremely heavy-weight blunt instrument. Since virtual machines are running operating systems designed for physical machines, they have tons of redundant hardware-management code; enormous amounts of operating system data which could be shared with the host, but since it's in the form of a disk image totally managed by the virtual machine's operating system, the host can't really peek inside to optimize anything. It also makes other kinds of intentional resource sharing very hard: any software to manage the host needs to be installed on the host, since if it is installed on the guest it won't have full access to the host's hardware.

    I hate using the term "heavy-weight" when I'm talking about software - it's often bandied about as a content-free criticism - but the difference in overhead between running a virtual machine and a process is the difference between gigabytes and kilobytes; somewhere between 4-6 orders of magnitude. That's a huge difference.

    This means that you need to treat virtual machines as multi-purpose, since one VM is too big to run just a single small program. Which means you often have to manage them almost as if they were physical hardware.

    When we run a program on a UNIX-like operating system, and by so running it, grant it its very own address space, we call the entity that we just created a "process".

    This is how to understand a "container".

    A "container" is what we get when we run a program and give it not just its own memory, but its own whole virtual filesystem and its own whole virtual network card.

    The metaphor to processes isn't perfect, because a container can contain multiple processes with different memory spaces that share a single filesystem. But this is also where some of the "container ecosystem" fervor begins to creep in - this is why people interested in containers will religiously exhort you to treat a container as a single application, not to run multiple things inside it, not to SSH into it, and so on. This is because the whole point of containers is that they are lightweight - far closer in overhead to the size of a process than that of a virtual machine.

    A process inside a container, if it queries the operating system, will see a computer where only it is running, where it owns the entire filesystem, and where any mounted disks were explicitly put there by the administrator who ran the container. In other words, if it wants to share data with another application, it has to be given the shared data; opt-in, not opt-out, the same way that memory-sharing is opt-in in a UNIX-like system.

    So why is this so exciting?

    In a sense, it really is just a lower-overhead way to run a virtual machine, as long as it shares the same kernel. That's not super exciting, by itself.

    The reason that containers are more exciting than processes is the same reason that using a filesystem is more exciting than having to use a whole disk: sharing state always, inevitably, leads to brokenness. Opt-in is better than opt-out.

    When you give a program a whole filesystem to itself, sharing any data explicitly, you eliminate even the possibility that some other program scribbling on a shared area of the filesystem might break it. You don't need package managers any more, only package installers; by removing the other functions of package managers (inventory, removal) they can be radically simplified, and less complexity means less brokenness.

    When you give a program an entire network address to itself, exposing any ports explicitly, you eliminate even the possibility that some rogue program will expose a security hole by listening on a port you weren't expecting. You eliminate the possibility that it might clash with other programs on the same host, hard-coding the same port numbers or auto-discovering the same addresses.
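
    To make the opt-in nature concrete, here is a hedged sketch using Docker's command-line flags; the image name, host path, and port numbers are invented for the example. The container gets nothing from the host's filesystem or network except what these flags explicitly hand it:

    # share exactly one host directory (read-only) and publish exactly one port
    $ docker run -v /srv/example/data:/data:ro -p 8080:80 example-image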

    In addition to the exciting things on the run-time side, containers - or rather, the things you run to get containers, "images"3 - present some compelling improvements to the build-time side.

    On Linux and Windows, building a software artifact for distribution to end-users can be quite challenging. It's challenging because it's not clear how to specify that you depend on certain other software being installed; it's not clear what to do if you have conflicting versions of that software that may not be the same as the versions already available on the user's computer. It's not clear where to put things on the filesystem. On Linux, this often just means getting all of your software from your operating system distributor.

    You'll notice I said "Linux and Windows"; not the usual (linux, windows, mac) big-3 desktop platforms, and I didn't say anything about mobile OSes. That's because on macOS, Android, iOS, and Windows Metro, applications already run in their own containers. The rules of macOS containers are a bit weird, and very different from Docker containers, but if you have a Mac you can check out ~/Library/Containers to see the view of the world that the applications you're running can see. iOS looks much the same.

    This is something that doesn't get discussed a lot in the container ecosystem, partially because everyone is developing technology at such a breakneck pace, but in many ways Linux server-side containerization is just a continuation of a trend that started on mainframe operating systems in the 1970s and has already been picked up in full force by mobile operating systems.

    When one builds an image, one is building a picture of the entire filesystem that the container will see, so an image is a complete artifact. By contrast, a package for a Linux package manager is just a fragment of a program, leaving out all of its dependencies, to be integrated later. If an image runs on your machine, it will (except in some extremely unusual circumstances) run on the target machine, because everything it needs to run is fully included.

    Because you build all the software an image requires into the image itself, there are some implications for server management. You no longer need to apply security updates to a machine - they get applied to one application at a time, and they get applied as a normal process of deploying new code. Since there's only one update process, which is "delete the old container, run a new one with a new image", updates can roll out much faster, because you can build an image, run tests for the image with the security updates applied, and be confident that it won't break anything. No more scheduling maintenance windows, or managing reboots (at least for security updates to applications and libraries; kernel updates are a different kettle of fish).
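
    As a sketch of that update cycle with Docker's CLI - the image names, tags, and test command are invented for illustration:

    # bake the patched libraries into a new image, test it, then swap containers
    $ docker build -t example-app:2016-10-28 .
    $ docker run --rm example-app:2016-10-28 ./run-tests
    $ docker stop example-app-prod && docker rm example-app-prod
    $ docker run -d --name example-app-prod example-app:2016-10-28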

    That's why it's exciting. So why's it all so confusing?5

    Fundamentally the confusion is caused by there just being way too many tools. Why so many tools? Once you've accepted that your software should live in images, none of the old tools work any more. Almost every administrative, monitoring, or management tool for UNIX-like OSes depends intimately upon the ability to promiscuously share the entire filesystem with every other program running on it. Containers break these assumptions, and so new tools need to be built. Nobody really agrees on how those tools should work, and a wide variety of forces ranging from competitive pressure to personality conflicts make it difficult for the panoply of container vendors to collaborate perfectly4.

    Many companies whose core business has nothing to do with infrastructure have gone through this reasoning process:

    1. Containers are so much better than processes, we need to start using them right away, even if there's some tooling pain in adopting them.
    2. The old tools don't work.
    3. The new tools from the tool vendors aren't ready.
    4. The new tools from the community don't work for our use-case.
    5. Time to write our own tool, just for our use-case and nobody else's! (Which causes problem #3 for somebody else, of course...)

    A less fundamental reason is too much focus on scale. If you're running a small-scale web application which has a stable user-base that you don't expect a lot of growth in, there are many great reasons to adopt containers as opposed to automating your operations; and in fact, if you keep things simple, the very fact that your software runs in a container might obviate the need for a system-management solution like Chef, Ansible, Puppet, or Salt. You should totally adopt them and try to ignore the more complex and involved parts of running an orchestration system.

    However, containers are even more useful at significant scale, which means that companies which have significant scaling problems invest in containers heavily and write about them prolifically. Many guides and tutorials on containers assume that you expect to be running a multi-million-node cluster with fully automated continuous deployment, blue-green zero-downtime deploys, a 1000-person operations team. It's great if you've got all that stuff, but building each of those components is a non-trivial investment.

    So, where does that leave you, my dear reader?

    You should absolutely be adopting "container technology", which is to say, you should probably at least be using Docker to build your software. But there are other, radically different container systems - like Sandstorm - which might make sense for you, depending on what kind of services you create. And of course there's a huge ecosystem of other tools you might want to use; too many to mention, although I will shout out to my own employer's docker-as-a-service Carina, which delivered this blog post, among other things, to you.

    You shouldn't feel as though you need to do containers absolutely "the right way", or that the value of containerization is derived from adopting every single tool that you can all at once. The value of containers comes from four very simple things:

    1. It reduces the overhead and increases the performance of co-locating multiple applications on the same hardware,
    2. It forces you to explicitly call out any shared state or required resources,
    3. It creates a complete build pipeline that results in a software artifact that can be run without special installation or set-up instructions (at least, on the "software installation" side; you still might require configuration, of course), and
    4. It gives you a way to test exactly what you're deploying.

    These benefits can combine and interact in surprising and interesting ways, and can be enhanced with a wide and growing variety of tools. But underneath all the hype and the buzz, the very real benefit of containerization is basically just that it is fixing a very old design flaw in UNIX.

    Containers let you share less state, and shared mutable state is the root of all evil.

    1. If you have a more sophisticated understanding of memory, disks, and networks, you'll notice that everything I'm saying here is patently false, and betrays an overly simplistic understanding of the development of UNIX and the complexities of physical hardware and driver software. Please believe that I know this; this is an alternate history of the version of UNIX that was developed on platonically ideal hardware. The messy co-evolution of UNIX, preemptive multitasking, hardware offload for networks, magnetic secondary storage, and so on, is far too large to fit into the margins of this post. 

    2. When programs break horribly like this, it's called "multithreading". I have written some software to help you avoid it. 

    3. One runs an "executable" to get a process; one runs an "image" to get a container. 

    4. Although the container ecosystem is famously acrimonious, companies in it do actually collaborate better than the tech press sometimes give them credit for; the Open Container Project is a significant extraction of common technology from multiple vendors, many of whom are also competitors, to facilitate a technical substrate that is best for the community. 

    5. If it doesn't seem confusing to you, consider this absolute gem from the hilarious folks over at CircleCI. 

    Syndicated 2016-10-27 09:23:00 from Deciphering Glyph

    26 Oct 2016 marnanel   » (Journeyer)


    One fine dark night with a fine dark sky
    And fine-sliced moon so bright,
    A Cat leapt forth with a fine black coat
    And paws of moonlit white;
    If I should ask you to say her name
    I'm sure you'd tell me that
    She's Yantantessera,
    Tessera, Tessera,
    Tessera Tessera, Cat.

    She had no humans, she had no home,
    She had no meals to eat,
    But soon, by means of a friendly purr,
    Adopted half a street,
    Where twenty humans would serve her food:
    They all had time to chat
    With Yantantessera,
    Tessera, Tessera,
    Tessera Tessera, Cat.

    The Cats' Home heard, and they swore to find
    The Cat a Home, and thus
    She started work as a Rescue Cat
    Who came to rescue us.
    And since that day, we belong to her;
    We're proud to share a flat
    With Yantantessera,
    Tessera, Tessera,
    Tessera Tessera, Cat!


    Syndicated 2016-10-26 16:35:43 from Monument

    25 Oct 2016 ade   » (Journeyer)

    Starting a new PWA directory

    A few of us in Google's Developer Relations group are building a little demo. It's open source and code-named Gulliver because it's a PWA directory in the spirit of Yahoo or DMOZ.
    That means it's not a curated gallery. Instead we recommend that people who just want to see a set of exemplary PWAs should go to the PWA directory.
    If there's already another PWA directory why are we doing this and what do we hope to achieve?
    Our primary goal is to learn in the open and share those lessons. Some of the things we hope to learn include:
    • what makes people use a PWA offline?
    • what constitutes a meaningful offline experience?
    • what percentage of our userbase actually uses it offline?
    • which PWA technologies help with acquisition, engagement, retention and re-engagement of users?
    • how do we build a good cross-platform and cross-browser experience?
    • what signals in analytics and Search Console indicate that we are on the right path?
    • what are the things we believe or assume that are wrong?
    We hope to get 1,000 thirty-day active users (30DAU) of this content-centric (lots of pages with URLs) PWA over the next few months. That level of regular usage should start to surface some of the challenges that big web apps face.
    However this isn't a big web app so our stack is relatively simple.
    The one clever technical feature is that we use Lighthouse As A Service. That means that every time someone submits a manifest (all we require is that the site provide a web manifest over HTTPS) we run Lighthouse inside a headless Chromium instance to collect metrics about the quality of the prospective PWA. If you’re already a Lighthouse user then you may spot that our scores sometimes differ from those you see in Lighthouse. It’s an open issue and we’re working on it.
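    For readers who want to reproduce a check by hand, a local Lighthouse run looks roughly like the following (the flag names here are from memory and may differ between Lighthouse releases):
    $ npm install -g lighthouse
    $ lighthouse https://example.com/ --output json --output-path ./report.json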
    Lighthouse is a big part of this web app’s value so it's going to be the subject of the first in a series of articles sharing the lessons we are learning in building Gulliver. Until then you can get in touch with us via Github if you have questions or feature requests.

    Syndicated 2016-10-25 16:44:00 (Updated 2016-10-25 16:44:10) from Ade Oshineye

    24 Oct 2016 sye   » (Journeyer)

    Writing Verse in Classic Chinese

    This poem of mine came to me through 'conversational' writing in March 2005 between myself and an unknown netizen who served in the Iraq war.

    Writing Verse in Classic Chinese
    I was smashed by a pale spirit
    out of Bath a misty winter morn
    it haunts hours away
    from my X-Window frames.
    Alas by this end of hours
    and by this end of day
    oh How i cut and paste
    in every way
    to restore a dream I've dreamt
    yet lost ever since I left.
    O blessed is the hours so ardently spent
    waiting by wings of angelic forms
    whereby beauty and grace behold
    a forgotten tongue
    a rusty art
    of making ancient music
    with silent sound
    echoing between thousands of years
    and thousands of miles around.
    [fiveshinylights: comrade, dost you speaketh the china man's tongue?]

    Brother 'cool hand luke', yes i do.
    now as i remember my mother tongue
    endured warring kingdoms hundreds of centuries strong
    it is not to speak
    but to paint, silently
    out of gentle brushes rainfalls of firm strokes
    thy worst enemy and thyself's speck of mind
    for words uttered always turning into querrels by hauling wind
    for words carved into oracles smoked by time and shifted by sand
    can mean many things to one eye kings
    thus bind thy people with one faith
    no words no names
    but mixing bloody drops from unquenchable dreams

    Syndicated 2016-10-24 15:03:00 (Updated 2016-10-24 15:03:14) from badvogato

    22 Oct 2016 glyph   » (Master)

    docker run glyph/rproxy

    Want to TLS-protect your co-located stack of vanity websites with Twisted and Let's Encrypt using HawkOwl's rproxy, but can't tolerate the bone-grinding tedium of a pip install? I built a docker image for you now, so it's now as simple as:

    $ mkdir -p conf/certificates;
    $ cat > conf/rproxy.ini << EOF;
    > [rproxy]
    > certificates=certificates
    > http_ports=80
    > https_ports=443
    > [hosts]
    > mysite.com_host=<other container host>
    > mysite.com_port=8080
    > EOF
    $ docker run --restart=always -v "$(pwd)"/conf:/conf \
        -p 80:80 -p 443:443 \
        glyph/rproxy

    There are no docs to speak of, so if you're interested in the details, see the tree on github I built it from.

    Modulo some handwaving about docker networking to get that <other container host> IP, that's pretty much it. Go forth and do likewise!
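
    To unpack the handwaving slightly: one way (of several) to find the address to use for <other container host>, assuming the backend container sits on Docker's default bridge network and is named mysite-backend (an invented name), is:

    $ docker inspect --format '{{ .NetworkSettings.IPAddress }}' mysite-backend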

    Syndicated 2016-10-22 20:12:00 from Deciphering Glyph

    22 Oct 2016 mjg59   » (Master)

    Fixing the IoT isn't going to be easy

    A large part of the internet became inaccessible today after a botnet made up of IP cameras and digital video recorders was used to DoS a major DNS provider. This highlighted a bunch of things including how maybe having all your DNS handled by a single provider is not the best of plans, but in the long run there's no real amount of diversification that can fix this - malicious actors have control of a sufficiently large number of hosts that they could easily take out multiple providers simultaneously.

    To fix this properly we need to get rid of the compromised systems. The question is how. Many of these devices are sold by resellers who have no resources to handle any kind of recall. The manufacturer may not have any kind of legal presence in many of the countries where their products are sold. There's no way anybody can compel a recall, and even if they could it probably wouldn't help. If I've paid a contractor to install a security camera in my office, and if I get a notification that my camera is being used to take down Twitter, what do I do? Pay someone to come and take the camera down again, wait for a fixed one and pay to get that put up? That's probably not going to happen. As long as the device carries on working, many users are going to ignore any voluntary request.

    We're left with more aggressive remedies. If ISPs threaten to cut off customers who host compromised devices, we might get somewhere. But, inevitably, a number of small businesses and unskilled users will get cut off. Probably a large number. The economic damage is still going to be significant. And it doesn't necessarily help that much - if the US were to compel ISPs to do this, but nobody else did, public outcry would be massive, the botnet would not be much smaller and the attacks would continue. Do we start cutting off countries that fail to police their internet?

    Ok, so maybe we just chalk this one up as a loss and have everyone build out enough infrastructure that we're able to withstand attacks from this botnet and take steps to ensure that nobody is ever able to build a bigger one. To do that, we'd need to ensure that all IoT devices are secure, all the time. So, uh, how do we do that?

    These devices had trivial vulnerabilities in the form of hardcoded passwords and open telnet. It wouldn't take terribly strong skills to identify this at import time and block a shipment, so the "obvious" answer is to set up forces in customs who do a security analysis of each device. We'll ignore the fact that this would be a pretty huge set of people to keep up with the sheer quantity of crap being developed and skip straight to the explanation for why this wouldn't work.
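
    Purely as an illustration of how low the bar is, a first-pass check for that sort of trivial exposure is a one-line scan of a local subnet for anything with telnet open (the address range is just an example):

    $ nmap -p 23 --open 192.168.0.0/24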

    Yeah, sure, this vulnerability was obvious. But what about the product from a well-known vendor that included a debug app listening on a high numbered UDP port that accepted a packet of the form "BackdoorPacketCmdLine_Req" and then executed the rest of the payload as root? A portscan's not going to show that up[1]. Finding this kind of thing involves pulling the device apart, dumping the firmware and reverse engineering the binaries. It typically takes me about a day to do that. Amazon has over 30,000 listings that match "IP camera" right now, so you're going to need 99 more of me and a year just to examine the cameras. And that's assuming nobody ships any new ones.

    Even that's insufficient. Ok, with luck we've identified all the cases where the vendor has left an explicit backdoor in the code[2]. But these devices are still running software that's going to be full of bugs and which is almost certainly still vulnerable to at least half a dozen buffer overflows[3]. Who's going to audit that? All it takes is one attacker to find one flaw in one popular device line, and that's another botnet built.

    If we can't stop the vulnerabilities getting into people's homes in the first place, can we at least fix them afterwards? From an economic perspective, demanding that vendors ship security updates whenever a vulnerability is discovered, no matter how old the device is, is just not going to work. Many of these vendors are small enough that it'd be more cost effective for them to simply fold the company and reopen under a new name than it would be to put the engineering work into fixing a decade old codebase. And how does this actually help? So far the attackers building these networks haven't been terribly competent. The first thing a competent attacker would do would be to silently disable the firmware update mechanism.

    We can't easily fix the already broken devices, we can't easily stop more broken devices from being shipped and we can't easily guarantee that we can fix future devices that end up broken. The only solution I see working at all is to require ISPs to cut people off, and that's going to involve a great deal of pain. The harsh reality is that this is almost certainly just the tip of the iceberg, and things are going to get much worse before they get any better.

    Right. I'm off to portscan another smart socket.

    [1] UDP connection refused messages are typically ratelimited to one per second, so it'll take almost a day to do a full UDP portscan, and even then you have no idea what the service actually does.

    [2] It's worth noting that this is usually leftover test or debug code, not an overtly malicious act. Vendors should have processes in place to ensure that this isn't left in release builds, but ah well.

    [3] My vacuum cleaner crashes if I send certain malformed HTTP requests to the local API endpoint, which isn't a good sign


    Syndicated 2016-10-22 05:14:28 from Matthew Garrett

    21 Oct 2016 badvogato   » (Master)

    “Auguries of Innocence.”
    William Blake (1757-1827):

    To see a world in a grain of sand
    And a heaven in a wild flower,
    Hold infinity in the palm of your hand
    And eternity in an hour.

    pdfimages -list "reframing catalog.pdf"

    Cherney's work let you feel that you are the one doing the flying ...

    I dunno. Am I flying face down or up?

    21 Oct 2016 caolan   » (Master)

    Office Binary Document RC4 CryptoAPI Encryption

    In LibreOffice we've long supported Microsoft Office's "Office Binary Document RC4 Encryption" for decrypting xls, doc and ppt. But somewhere along the line the Microsoft Office encryption scheme was replaced by a new one, "Office Binary Document RC4 CryptoAPI Encryption", which we didn't support. This is what the error dialog of...

    "The encryption method used in this document is not supported. Only Microsoft Office 97/2000 compatible password encryption is supported."

    ...from LibreOffice is telling you when you open, for example, an encrypted xls saved by a contemporary Microsoft Excel version.

    I got the newer scheme working this morning for xls, so from LibreOffice 5-3 onwards (I may backport to upstream 5-2 and Fedora 5-1) these variants can be successfully decrypted and viewed in LibreOffice.

    Syndicated 2016-10-21 10:35:00 (Updated 2016-10-21 10:35:56) from Caolán McNamara

    20 Oct 2016 sye   » (Journeyer)

    submit my resume to lulu's current job post for voice associate ( replay photo ...) after I rediscovered my projects with them in June 2006 are still there.... also changed my public profile address to-->

    19 Oct 2016 marnanel   » (Journeyer)

    Please, talk about masturbation

    [I commented this in a discussion about the "birds and the bees" talk. I think it's worth posting separately.]

    Please, talk about masturbation too, and don't wait until puberty. Here's a (very personal) story I've never told in full before.

    I discovered masturbation when I was about ten, before I started puberty. Nobody had talked about it, so I didn't know it was normal; I didn't even know there was a word for it. So I worried. About a year later I started puberty and of course I became able to ejaculate. And again, nobody had talked about that. They'd mentioned wet dreams, but never this. So I didn't know it was normal, and I worried.

    A few months later, I got what I now think was some kind of fungal skin infection. The skin where my pubic hair would soon be growing was alternately red and painful, or dry, cracked, and itchy. For all I knew, this was another weird side-effect of masturbation, like ejaculation. And since nobody had talked about the other stuff, I wasn't comfortable with asking anyone about it. So I put up with the discomfort for months. Even after my pubic hair grew, the rash was still visible and I remember deflecting questions in the changing-rooms after games lessons about whether it was a scar from an operation.

    All that worry and discomfort could have been avoided. Please, remember to talk about it.

    Syndicated 2016-10-19 18:15:13 (Updated 2016-10-19 18:16:14) from Monument

    19 Oct 2016 amits   » (Journeyer)

    Ten Years of KVM

    We recently celebrated 25 years of Linux on the 25th anniversary of the famous email Linus sent to announce the start of the Linux project.  Going by the same yardstick, today marks the 10th anniversary of the KVM project — Avi Kivity first announced the project on the 19th Oct, 2006 by this posting on LKML:

    The first patchset added support for hardware virtualization on Linux for the Intel CPUs.  Support for AMD CPUs followed soon:

    KVM was subsequently merged in the upstream kernel on the 10th December 2006 (commit 6aa8b732ca01c3d7a54e93f4d701b8aabbe60fb7).  Linux 2.6.20, released on 4 Feb 2007 was the first kernel release to include KVM.

    KVM has come a long way in these 10 years.  I’m writing a detailed post about some of the history of the KVM project — stay tuned for that.

    Till then, cheers!

    Syndicated 2016-10-19 16:35:17 from Think. Debate. Innovate.

    19 Oct 2016 glyph   » (Master)

    docker run glyph/rproxy

    Want to TLS-protect your co-located stack of vanity websites with Twisted and Let's Encrypt using HawkOwl's rproxy, but can't tolerate the bone-grinding tedium of a pip install? I built a docker image for you now, so it's now as simple as:

    $ mkdir -p conf/certificates;
    $ cat > conf/rproxy.ini << EOF;
    > [rproxy]
    > certs=certificates
    > http_ports=80
    > https_ports=443
    > [hosts]
    > mysite.com_host=<other container host>
    > mysite.com_port=8080
    > EOF
    $ docker run --restart=always -v "$(pwd)"/conf:/conf \
        -p 80:80 -p 443:443 \
        glyph/rproxy

    There are no docs to speak of, so if you're interested in the details, see the tree on github I built it from.

    Modulo some handwaving about docker networking to get that <other container host> IP, that's pretty much it. Go forth and do likewise!

    Syndicated 2016-10-19 00:32:00 from Deciphering Glyph

    18 Oct 2016 glyph   » (Master)

    As some of you may have guessed from the unintentional recent flurry of activity on my Twitter account: Twitterfeed, the service I used to use to post blog links automatically, is getting end-of-lifed. I've switched to another service for the time being, unless they send another unsolicited tweetstorm out on my behalf...

    Sorry about the noise! In the interests of putting some actual content here, maybe you would be interested to know that I was recently interviewed for PyDev of the Week?

    Syndicated 2016-10-18 20:37:00 from Deciphering Glyph

    18 Oct 2016 Pizza   » (Master)

    Call for testing on Mitsubishi printers

    In the past I've written about the particularly poor level of support for Mitsubishi printers under Linux. In the past couple of months, that has changed substantially, although not due to any action or assistance on Mitsubishi's part.

    Gutenprint 5.2.11 had usable support for the CP-9550DW and CP-9550DW-S models, including an intelligent backend that handled the printer communications. However, the rest of the CP-9xxx family wasn't supported.

    The 5.2.12 release of Gutenprint will support most of the CP-9xxx family. This includes a considerable amount of work on the backend, genericizing it so that the other models can be cleanly handled. This was a rather disruptive change, so it's possible the formerly-working CP-9550 family has regressed. Beyond that, the newly-supported models need testing to confirm everything functions as expected.

    Here is the list of all models affected by this development:

    • CP-9000DW
    • CP-9500DW
    • CP-9550DW (previously working, needs retesting)
    • CP-9550DW-S (previously working, needs retesting)
    • CP-9600DW
    • CP-9600DW-S
    • CP-9800DW
    • CP-9800DW-S (confirmed working!)
    • CP-9810DW
    • CP-9820DW-S

    Also, I still need testers for the following models:

    • DNP DS80DX
    • Sony UP-CR10L (aka DNP DS-SL10)
    • Mitsubishi CP-D70DW, CP-D707DW, CP-D80DW, and CP-D90DW (plus their -S variants!)
    • Fujifilm ASK-300
    • Sinfonia CHC-S1245/E1 and CHC-S6245/CE1
    • Kodak 7000, 7010, 7015, and 8810

    If anyone reading this has access to one or more of these printers, please drop me a line!

    Syndicated 2016-10-18 15:23:19 from I Dream of Rain (free_software)

    18 Oct 2016 slef   » (Master)

    Rinse and repeat

    Forgive me, reader, for I have sinned. It has been over a year since my last blog post. Life got busy. Paid work. Another round of challenges managing my chronic illness. Cycle campaigning. Fun bike rides. Friends. Family. Travels. Other social media to stroke. I’m still reading some of the planets where this blog post should appear and commenting on some, so I’ve not felt completely cut off, but I am surprised how many people don’t allow comments on their blogs any more (or make it too difficult for me with reCaptcha and the like).

    The main motive for this post is to test some minor upgrades, though. Hi everyone. How’s it going with you? I’ll probably keep posting short updates in the future.

    Go in peace to love and serve the web. 🙂

    Syndicated 2016-10-18 04:28:23 from mjr – Software Cooperative News

    14 Oct 2016 philiph   » (Journeyer)

    Talking To Children About Trump

    New page

    Syndicated 2016-10-14 19:52:04 from HollenbackDotNet

    14 Oct 2016 marnanel   » (Journeyer)

    fan art: Dream meets Elemental

    fan art: Dream of the Endless meets Professor Elemental

    Syndicated 2016-10-14 15:30:24 from Monument

    12 Oct 2016 wingo   » (Master)

    An incomplete history of language facilities for concurrency

    I have lately been in the market for better concurrency facilities in Guile. I want to be able to write network servers and peers that can gracefully, elegantly, and efficiently handle many tens of thousands of clients and other connections, but without blowing the complexity budget. It's a hard nut to crack.

    Part of the problem is implementation, but a large part is just figuring out what to do. I have often thought that modern musicians must be crushed under the weight of recorded music history, but it turns out in our humble field that's also the case; there are as many concurrency designs as languages, just about. In this regard, what follows is an incomplete, nuanced, somewhat opinionated history of concurrency facilities in programming languages, with an eye towards what I should "buy" for the Fibers library I have been tinkering on for Guile.

    * * *

    Modern machines have the raw capability to serve hundreds of thousands of simultaneous long-lived connections, but it’s often hard to manage this at the software level. Fibers tries to solve this problem in a nice way. Before discussing the approach taken in Fibers, it’s worth spending some time on history to see how we got here.

    One of the most dominant patterns for concurrency these days is “callbacks”, notably in the Twisted library for Python and the Node.js run-time for JavaScript. The basic observation in the callback approach to concurrency is that the efficient way to handle tens of thousands of connections at once is with low-level operating system facilities like poll or epoll. You add all of the file descriptors that you are interested in to a “poll set” and then ask the operating system which ones are readable or writable, as appropriate. Once the operating system says “yes, file descriptor 7145 is readable”, you can do something with that socket; but what? With callbacks, the answer is “call a user-supplied closure”: a callback, representing the continuation of the computation on that socket.

    Building a network service with a callback-oriented concurrency system means breaking the program into little chunks that can run without blocking. Wherever a program could block, instead of just continuing the program, you register a callback. Unfortunately this requirement permeates the program, from top to bottom: you always pay the mental cost of inverting your program’s control flow by turning it into callbacks, and you always incur the run-time cost of closure creation, even when the particular I/O could proceed without blocking. It’s a somewhat galling requirement, given that this contortion is required of the programmer, but could be done by the compiler. We Schemers demand better abstractions than manual, obligatory continuation-passing-style conversion.

    Callback-based systems also encourage unstructured concurrency, as in practice callbacks are not the only path for data and control flow in a system: usually there is mutable global state as well. Without strong patterns and conventions, callback-based systems often exhibit bugs caused by concurrent reads and writes to global state.

    Some of the problems of callbacks can be mitigated by using “promises” or other library-level abstractions; if you’re a Haskell person, you can think of this as lifting all possibly-blocking operations into a monad. If you’re not a Haskeller, that’s cool, neither am I! But if your typey spidey senses are tingling, it’s for good reason: with promises, your whole program has to be transformed to return promises-for-values instead of values anywhere it would block.

    An obvious solution to the control-flow problem of callbacks is to use threads. In the most generic sense, a thread is a language feature which denotes an independent computation. Threads are created by other threads, but fork off and run independently instead of returning to their caller. In a system with threads, there is implicitly a scheduler somewhere that multiplexes the threads so that when one suspends, another can run.

    In practice, the concept of threads is often conflated with a particular implementation, kernel threads. Kernel threads are very low-level abstractions that are provided by the operating system. The nice thing about kernel threads is that they can use any CPU that the kernel knows about. That’s an important factor in today’s computing landscape, where Moore’s law seems to be giving us more cores instead of more gigahertz.

    However, as a building block for a highly concurrent system, kernel threads have a few important problems.

    One is that kernel threads simply aren’t designed to be allocated in huge numbers, and instead are more optimized to run in a one-per-CPU-core fashion. Their memory usage is relatively high for what should be a lightweight abstraction: some 10 kilobytes at least and often some megabytes, in the form of the thread’s stack. There are ongoing efforts to reduce this for some systems but we cannot expect wide deployment in the next 5 years, if ever. Even in the best case, a hundred thousand kernel threads will take at least a gigabyte of memory, which seems a bit excessive for book-keeping overhead.

    Kernel threads can be a bit irritating to schedule, too: when one thread suspends, it’s for a reason, and it can be that user-space knows a good next thread that should run. However because kernel threads are scheduled in the kernel, it’s rarely possible for the kernel to make informed decisions. There are some “user-mode scheduling” facilities that are in development for some systems, but again only for some systems.

    The other significant problem is that building non-crashy systems on top of kernel threads is hard to do, not to mention “correct” systems. It’s an embarrassing situation. For one thing, the low-level synchronization primitives that are typically provided with kernel threads, mutexes and condition variables, are not composable. Also, as with callback-oriented concurrency, one thread can silently corrupt another via unstructured mutation of shared state. It’s worse with kernel threads, though: a kernel thread can be interrupted at any point, not just at I/O. And though callback-oriented systems can theoretically operate on multiple CPUs at once, in practice they don’t. This restriction is sometimes touted as a benefit by proponents of callback-oriented systems, because in such a system, the callback invocations have a single, sequential order. With multiple CPUs, this is not the case, as multiple threads can run at the same time, in parallel.

    Kernel threads can work. The Java virtual machine does at least manage to prevent low-level memory corruption and to do so with high performance, but still, even Java-based systems that aim for maximum concurrency avoid using a thread per connection because threads use too much memory.

    In this context it’s no wonder that there’s a third strain of concurrency: shared-nothing message-passing systems like Erlang. Erlang isolates each thread (called processes in the Erlang world), giving each its own heap and “mailbox”. Processes can spawn other processes, and the concurrency primitive is message-passing. A process that tries to receive a message from an empty mailbox will “block”, from its perspective. In the meantime the system will run other processes. Message sends never block, oddly; instead, sending to a process with many messages pending makes it more likely that Erlang will pre-empt the sending process. It’s a strange tradeoff, but it makes sense when you realize that Erlang was designed for network transparency: the same message send/receive interface can be used to send messages to processes on remote machines as well.

    No network is truly transparent, however. At the most basic level, network sends are necessarily much slower than local sends. Whereas a message sent to a remote process has to be written out byte-by-byte over the network, there is no need to copy immutable data within the same address space. The complexity of a remote message send is O(n) in the size of the message, whereas a local immutable send is O(1). This suggests that hiding the different complexities behind one operator is the wrong thing to do. And indeed, given byte read and write operators over sockets, it’s possible to implement remote message send and receive as a process that serializes and parses messages between a channel and a byte sink or source. In this way we get cheap local channels, and network shims are under the programmer’s control. This is the approach that the Go language takes, and is the one we use in Fibers.

    Structuring a concurrent program as separate threads that communicate over channels is an old idea that goes back to Tony Hoare’s work on “Communicating Sequential Processes” (CSP). CSP is an elegant tower of mathematical abstraction whose layers form a pattern language for building concurrent systems that you can still reason about. Interestingly, it does so without any concept of time at all, instead representing a thread’s behavior as a trace of instantaneous events. Threads themselves are like functions that unfold over the possible events to produce the actual event trace seen at run-time.

    This view of events as instantaneous happenings extends to communication as well. In CSP, one communication between two threads is modelled as an instantaneous event, partitioning the traces of the two threads into “before” and “after” segments.

    Practically speaking, this has ramifications in the Go language, which was heavily inspired by CSP. You might think that a channel is just an asynchronous queue that blocks when writing to a full queue, or when reading from an empty queue. That’s a bit closer to the Erlang conception of how things should work, though as we mentioned, Erlang simply slows down writes to full mailboxes rather than blocking them entirely. However, that’s not what Go and other systems in the CSP family do; sending a message on a channel will block until there is a receiver available, and vice versa. The threads are said to “rendezvous” at the event.

    Unbuffered channels have the interesting property that you can select between sending a message on channel a or channel b, and in the end only one message will be sent; nothing happens until there is a receiver ready to take the message. In this way messages are really owned by threads and never by the channels themselves. You can of course add buffering if you like, simply by making a thread that waits on either sends or receives on a channel, and which buffers sends and makes them available to receives. It’s also possible to add explicit support for buffered channels, as Go, core.async, and many other systems do, which can reduce the number of context switches as there is no explicit buffer thread.

    Whether to buffer or not to buffer is a tricky choice. It’s possible to implement singly-buffered channels in a system like Erlang via an explicit send/acknowledge protocol, though it seems difficult to implement completely unbuffered channels. As we mentioned, it’s possible to add buffering to an unbuffered system by the introduction of explicit buffer threads. In the end though in Fibers we follow CSP’s lead so that we can implement the nice select behavior that we mentioned above.

    As a final point, select is OK but is not a great language abstraction. Say you call a function and it returns some kind of asynchronous result which you then have to select on. It could return this result as a channel, and that would be fine: you can add that channel to the other channels in your select set and you are good. However, what if what the function does is receive a message on a channel, then do something with the message? In that case the function should return a channel, plus a continuation (as a closure or something). If select results in a message being received over that channel, then we call the continuation on the message. Fine. But, what if the function itself wanted to select over some channels? It could return multiple channels and continuations, but that becomes unwieldy.

    What we need is an abstraction over asynchronous operations, and that is the main idea of a CSP-derived system called “Concurrent ML” (CML). Originally implemented as a library on top of Standard ML of New Jersey by John Reppy, CML provides this abstraction, which in Fibers is called an operation1. Calling send-operation on a channel returns an operation, which is just a value. Operations are like closures in a way; a closure wraps up code in its environment, which can be later called many times or not at all. Operations likewise can be performed2 many times or not at all; performing an operation is like calling a function. The interesting part is that you can compose operations via the wrap-operation and choice-operation combinators. The former lets you bundle up an operation and a continuation. The latter lets you construct an operation that chooses over a number of operations. Calling perform-operation on a choice operation will perform one and only one of the choices. Performing an operation will call its wrap-operation continuation on the resulting values.

    While it’s possible to implement Concurrent ML in terms of Go’s channels and baked-in select statement, it’s more expressive to do it the other way around, as that also lets us implement other operation types besides channel send and receive, for example timeouts and condition variables.

    [1] CML uses the term event, but I find this to be a confusing name. In this isolated article my terminology probably looks confusing, but in the context of the library I think it can be OK. The jury is out though.

    [2] In CML, synchronized.

    * * *

    Well, that's my limited understanding of the crushing weight of history. Note that part of this article is now in the Fibers manual.

    Thanks very much to Matthew Flatt, Matthias Felleisen, and Michael Sperber for pushing me towards CML. In the beginning I thought its benefits were small and complication large, but now I see it as being the reverse. Happy hacking :)

    Syndicated 2016-10-12 13:45:12 from wingolog

    14 Oct 2016 marnanel   » (Journeyer)

    Deaf/HoH symbol in Unicode

    Question for [dD]eaf folk reading (forwarding is encouraged):

    I’m working on a proposal to add the standard deaf/HoH symbol to Unicode. I’m looking particularly for examples of its use in running text as a character, as in this mocked-up text:

    Can you help me find any? (All contributions used will be acknowledged in the submitted change request, of course.)

    This entry was originally posted at Please comment there using OpenID.

    Syndicated 2016-10-14 15:29:34 from Monument

    11 Oct 2016 philiph   » (Journeyer)

    9 Oct 2016 lloydwood   » (Journeyer)

    A couple of weeks ago I released SaVi 1.5.0 for simulating satellite constellations.

    SaVi has a few new tricks; it can load in satellite elsets from the web, and create animations of satellite movement.

    And it was pleasing to find that SaVi can be made to work on Windows 10 (Anniversary developer edition) via its Ubuntu package. I've never learned to build Windows applications, and that no longer matters!

    I've been working on and off with SaVi for twenty years, pretty much. Now, back to the occasional bugfixes.

    11 Oct 2016 marnanel   » (Journeyer)

    leaked audio

    "If audio of your private convo leaked, what would it say?"

    things I have said in private conversations recently:

    • "You know elephants? I read that they sing. I wonder what they sing about. Next time I meet one I'll play it music. Elephants are cool."
    • "So yeah, when I go to planning meetings they talk about the low-hanging fruit, the easy stuff, but it wouldn't be easy if I was a giraffe."
    • "Oh hey, there's a town called Makasar in Turkey. If I took an antimacassar there, they would both vanish. You're probably not allowed to."
    • "What if I put helium balloons in the wheelie bin? I think it would be a nice surprise for the dustmen when they opened the lid."
    • "And actually I was going to find a bin saying LITTER on it in town, and fill it with glitter, and add a G in front, but then I didn't."
    • "People have fish tanks but they never have duck tanks, why not?"
    • "It would be awesome to have a pet elk. They have beautiful antlers. I think you would need a litter tray the size of the kitchen."
    This entry was originally posted at Please comment there using OpenID.

    Syndicated 2016-10-11 10:54:24 from Monument

    6 Oct 2016 Pizza   » (Master)

    Mitsubishi CP-D70 family, working!

    Over the past few years, I've written a lot about the various members of the Mitsubishi CP-D70 family of printers, and how they were utterly worthless under Linux.

    These printers are unusual in that they required the host computer to perform gamma correction and thermal compensation on the image data in order to generate sane output. (If this sounds familiar, it's because the Sinfonia CS2 was similarly afflicted.) This relied on unknown, proprietary algorithms only implemented within Mitsubishi's drivers.
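
    For readers who haven't run into this before, here is a generic Go sketch of what host-side gamma correction looks like in the simplest case: a lookup table applied to 8-bit image data. To be clear, this is not Mitsubishi's algorithm (that is precisely the part that had to be reverse engineered); it's only meant to show the kind of work the host has to do before the data hits the printer.

    import "math"

    // Build a 256-entry lookup table for a given gamma value. Purely
    // illustrative; the real printers need vendor-specific curves plus
    // thermal compensation.
    func gammaTable(gamma float64) [256]uint8 {
      var lut [256]uint8
      for i := range lut {
        v := math.Pow(float64(i)/255.0, gamma) * 255.0
        lut[i] = uint8(v + 0.5)
      }
      return lut
    }

    // Apply the table in place to one plane of 8-bit samples.
    func applyGamma(pixels []uint8, lut [256]uint8) {
      for i, p := range pixels {
        pixels[i] = lut[p]
      }
    }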

    To make a long story short, I've been attempting to reverse engineer those algorithms for nearly three years, and I'm pleased to announce that I've finally succeeded. While I can't promise the results are identical to Mitsubishi's own code, it is now possible to generate high-quality prints with these printers using entirely Free Software.

    The library is called 'libMitsuD70ImageReProcess', released to the public under a GPLv3+ license.

    Just to be absolutely clear, Mitsubishi is not responsible for this library in any way, and will not support you if you complain that the output is somehow deficient or your printer catches fire when you try to print pictures of Margaret Thatcher posing in her skivvies.

    Here's the list of the now-functional printers:

    • Mitsubishi CP-D70DW, CP-D707DW, CP-K60DW-S, CP-D80DW
    • Kodak 305
    • Fujifilm ASK-300

    While all of these models are expected to work, only the Kodak 305 has actually been tested. Please drop me an email or comment here if you have one of these printers and would like to help test things out.

    In particular, if there's someone out there with a dual-decker CP-D707DW, there are opportunities for further enhancements that I'd like to try.

    All code except for the library is now committed into Gutenprint, but is not yet part of any [pre-]release. So if you want to test this stuff out, you'll need to grab the latest Gutenprint code and libMitsuD70ImageReProcess out of git and compile/install them manually. Once this is a little more stable I'll package the library code in a tarball for simpler distribution.

    In other news, while the Kodak 305's official Windows drivers only expose 4x6 and 8x6 prints, the printer firmware also supports the 6x6, 4x6x2, and 2x6x2 sizes that the Mitsubishi CP-K60DW-S supports. You'll need to ensure you're using the 1.04 firmware! I have also received reports that the '305 will accept the K60's print media, so in theory 5x7 and 6x9 support is possible.

    Happy printing!

    Syndicated 2016-10-06 19:08:53 from I Dream of Rain (free_software)

    5 Oct 2016 sye   » (Journeyer)

    A crow waits on me

    #2 乌鸦等我
    * A crow waits on me

    One day, my parents will depart this world,
    my siblings may travel afar
    my dear wife, sooner or later, shall desert me
    as I am hanging on
    my irresistable downtrodding.

    But if on that day,
    there still sits a crow
    crowing on top of the television broadcasting tower
    it gives out a sound
    more piercing and cold than my sneer
    Then, there is hope that people see
    this ugliest crow, takes a sip of water
    after its long flight, and
    waits on me.

    @copyright 2016 LairdUnlimited
    syndicated from

    Syndicated 2016-10-05 16:28:00 (Updated 2016-10-05 16:29:07) from badvogato

    4 Oct 2016 philiph   » (Journeyer)

    Artifactory Systemd Logging

    New page

    Syndicated 2016-10-04 05:56:34 from HollenbackDotNet

    3 Oct 2016 mjg59   » (Master)

    The importance of paying attention in building community trust

    Trust is important in any kind of interpersonal relationship. It's inevitable that there will be cases where something you do will irritate or upset others, even if only to a small degree. Handling small cases well helps build trust that you will do the right thing in more significant cases, whereas ignoring things that seem fairly insignificant (or saying that you'll do something about them and then failing to do so) suggests that you'll also fail when there's a major problem. Getting the small details right is a major part of creating the impression that you'll deal with significant challenges in a responsible and considerate way.

    This isn't limited to individual relationships. Something that distinguishes good customer service from bad customer service is getting the details right. There are many industries where significant failures happen infrequently, but minor ones happen a lot. Would you prefer to give your business to a company that handles those small details well (even if they're not overly annoying) or one that just tells you to deal with them?

    And the same is true of software communities. A strong and considerate response to minor bug reports makes it more likely that users will be patient with you when dealing with significant ones. Handling small patch contributions quickly makes it more likely that a submitter will be willing to do the work of making more significant contributions. These things are well understood, and most successful projects have actively worked to reduce barriers to entry and to be responsive to user requests in order to encourage participation and foster a feeling that they care.

    But what's often ignored is that this applies to other aspects of communities as well. Failing to use inclusive language may not seem like a big thing in itself, but it leaves people with the feeling that you're less likely to do anything about more egregious exclusionary behaviour. Allowing a baseline level of sexist humour gives the impression that you won't act if there are blatant displays of misogyny. The more examples of these "insignificant" issues people see, the more likely they are to choose to spend their time somewhere else, somewhere they can have faith that major issues will be handled appropriately.

    There's a more insidious aspect to this. Sometimes we can believe that we are handling minor issues appropriately, that we're acting in a way that handles people's concerns, while actually failing to do so. If someone raises a concern about an aspect of the community, it's important to discuss solutions with them. Putting effort into "solving" a problem without ensuring that the solution has the desired outcome is not only a waste of time, it alienates those affected even more - they're now not only left with the feeling that they can't trust you to respond appropriately, but that you will actively ignore their feelings in the process.

    It's not always possible to satisfy everybody's concerns. Sometimes you'll be left in situations where you have conflicting requests. In that case the best thing you can do is to explain the conflict and why you've made the choice you have, and demonstrate that you took this issue seriously rather than ignoring it. Depending on the issue, you may still alienate some number of participants, but it'll be fewer than if you just pretend that it's not actually a problem.

    One warning, though: while building trust in this way enhances people's willingness to join your community, it also builds expectations. If a significant issue does arise, and if you fail to handle it well, you'll burn a lot of that trust in the process. The fact that you've built that trust in the first place may be what saves your community from disintegrating completely, but people will feel even more betrayed if you don't actively work to rebuild it. And if there's a pattern of mishandling major problems, no amount of getting the details right will matter.

    Communities that ignore these issues are, long term, likely to end up weaker than communities that pay attention to them. Making sure you get this right in the first place, and setting expectations that you will pay attention to your contributors, is a vital part of building a meaningful relationship between your community and its members.

    comment count unavailable comments

    Syndicated 2016-10-03 17:14:27 from Matthew Garrett

    3 Oct 2016 StevenRainwater   » (Master)

    Walden by Henry David Thoreau

    Leaves beneath the ice of Walden Pond. Photo by Flickr user Bemep, CC BY-NC 2.0

    I usually review pulp science fiction books, science books, even the occasional graphic novel, so a review of a classic like Walden may seem a bit out of place here. But I do try to read a little of everything including the classics and Walden has been on my reading list for a long time. The edition I chose is Walden and Other Writings, 2000, Modern Library Paperback Edition; partly because I also wanted to read Thoreau’s essay, Civil Disobedience, which is in this volume, but also because I like the cover art depicting a winter scene near Walden Pond. I admit, I’ve bought more than one book based solely on the cover art.

    I had vaguely thought that Walden was a work of philosophy resulting from Thoreau spending time alone pondering Life, The Universe, and Everything. It’s really nothing like that. It’s much more modern than I expected. Imagine reading a blog by someone who decides to give up television, WiFi, social media, modern technology and civilization in general as an experiment. Imagine this person finds some land by a lake and determines to live a DIY existence. They build their own tiny house from available materials, they eat only what they can find or grow, and make their own clothes. And they write weekly updates on their progress as they do all this. That’s basically what’s going on in Walden. It’s a DIY book mixed with some appreciation of nature.

    Thoreau doesn’t completely leave the world behind. He walks to town periodically to give lectures, his writings are published, he has frequent visitors. A lot of the townsfolk think he’s a bit odd and keep their distance but he interacts with a wide range of other eccentric characters: hunters in the woods, fishermen on the pond, rail workers from the railroad that passes near his tiny house, transients who wander through. When he can, he invites these random people into his house and questions them about the nature of the human race and civilization. The bravest strangers even taste some of the weird foods Thoreau subsists on.

    Some chapters are strictly DIY stuff like lists of materials used in building his tiny house and their costs. Or what he eats and how he obtains it. Other chapters are observations about nature – what animals he runs into, the sensory experience of the pond and woods in the different seasons. And there actually is a little bit of philosophy hidden away here and there; do humans really need to eat meat or would we be better off if we were all vegetarians? Should we be more self reliant? Why do we waste so much time and energy making money for things like clothes and homes that we could make ourselves much more simply?

    The book is laid out chronologically by seasons and takes the reader through the first year at Walden Pond. The first few chapters are the most interesting as they contain the parameters of his experiment and most of the details on how he builds his shelter and gathers his supplies. Later chapters tend to be his observations of nature once things have settled into a routine. Amusingly, the descriptive part of the book ends after the first year with the sentence, “Thus was my first year’s life in the woods completed; and the second year was similar to it.” The book, while interesting and sometimes profound, is not a page-turner and you’ll probably be as glad as I was that he decides not to chronicle his second year as well.

    Thoreau doesn’t think everyone should give up on civilization and live as he did at Walden, of course. He clearly thinks of his two year adventure there as nothing more than an experiment to see what the minimum lifestyle could consist of. Just like modern writers who give up The Internet or some other modern convenience for a year, Thoreau fully intends to return to civilization when his experiment is done. Despite finding it a slow read and difficult to slog through at times, particularly in the second half, I still recommend it. There are more than enough interesting and enjoyable bits to make up for it.

    Syndicated 2016-10-02 23:53:54 from Steevithak of the Internet

    26 Sep 2016 louie   » (Master)

    Public licenses and data: So what to do instead?

    I just explained why open and copyleft licensing, which work fairly well in the software context, might not be legally workable, or practically a good idea, around data. So what to do instead? tl;dr: say no to licenses, say yes to norms.

    "Day 43-Sharing" by A. David Holloway, under CC BY 2.0.

    Partial solutions

    In this complex landscape, it should be no surprise that there are no perfect solutions. I’ll start with two behaviors that can help.

    Education and lawyering: just say no

    If you’re reading this post, odds are that, within your organization or community, you’re known as a data geek and might get pulled in when someone asks for a new data (or hardware, or culture) license. The best thing you can do is help explain why restrictive “public” licensing for data is a bad idea. To the extent there is a community of lawyers around open licensing, we also need to be comfortable saying “this is a bad idea”.

    These blog posts, to some extent, are my mea culpa for not saying “no” during the drafting of ODbL. At that time, I thought that if only we worked hard enough, and were creative enough, we could make a data license that avoided the pitfalls others had identified. It was only years later that I finally realized there were systemic reasons why we were doomed, despite lots of hard work and thoughtful lawyering. These posts lay out why, so that in the future I can say no more efficiently. Feel free to borrow them when you also need to say no :)

    Project structure: collaboration builds on itself

    When thinking about what people actually want from open licenses, it is important to remember that how people collaborate is deeply shaped by how your project is structured. (To put it another way, architecture is also law.) For example, many kernel contributors feel that the best reason to contribute your code to the Linux kernel is not because of the license, but because the high velocity of development means that your costs are much lower if you get your features upstream quickly. Similarly, if you can build a big community like Wikimedia’s around your data, the velocity of improvements is likely to reduce the desire to fork. Where possible, consider also offering services and collaboration spaces that encourage people to work in public, rather than providing the bare minimum necessary for your own use. Or more simply, spend money on community people, rather than lawyers! These kinds of tweaks can often have much more of an impact on free-riding and contribution than any license choice. Unfortunately, the details are often project specific – which makes it hard to talk about in a blog post! Especially one that is already too long.

    Solving with norms

    So if lawyers should advise against the use of data law, and structuring your project for collaboration might not apply to you, what then? Following Peter Desmet, Science Commons, and others, I think the right tool for building resilient, global communities of sharing (in data and elsewhere) is written norms, combined with a formal release of rights.

    Norms are essentially optimistic statements of what should be done, rather than formal requirements of what must be done (with the enforcement power of the state behind them). There is an extensive literature, pioneered by Nobelist Elinor Ostrom, showing that they are actually how a huge amount of humankind’s work gets done – despite the skepticism of economists and lawyers. Critically, they often work even without the enforcement power of the legal system. For example, academia’s anti-plagiarism norms (when buttressed by appropriate non-legal institutional supports) are fairly successful. While there are still plagiarism problems, they’re fairly comparable to the Linux kernel’s GPL-violation problems – even though, unlike GPL, there are no legal enforcement mechanisms!

    Norms and licenses have similar benefits

    In many key ways, norms are not actually significantly different than licenses. Norms and licenses both can help (or hurt) a community reach their goals by:

    • Educating newcomers about community expectations: Collaboration requires shared understanding of the behavior that will guide that collaboration. Written norms can create that shared expectation just as well as licenses, and often better, since they can be flexible and human-readable in ways legally-binding international documents can’t.
    • Serving as the basis for social pressure: For the vast majority of collaborative projects, praise, shame, and other social nudges, not legal threats, are the actual basis for collaboration. (If you need proof of this, consider the decades-long success of open source before any legal enforcement was attempted.) Again, norms can serve this role just as well or even better, since it is often the desire to cooperate and the fear of shaming that actually drive collaboration.
    • Similar levels of enforcement: While you can’t use the legal system to enforce a norm, most people and organizations also don’t have the option to use the legal system to enforce licenses – it is too expensive, or too time consuming, or the violator is in another country, or one of many other reasons why the legal system might not be an option (especially in data!). So instead most projects resort to tools like personal appeals or threats of publicity – tools that are still available with norms.
    • Working in practice (usually): As I mentioned above, basing collaboration on social norms, rather than legal tools, works all the time in real life. The idea that collaboration can’t occur without the threat of legal sanction is really a somewhat recent invention. (I could actually have listed this under differences – since, as Ostrom teaches us, legal mechanisms often fail where norms succeed, and I think that is the case in data too.)

    Why are norms better?

    Of course, if norms were merely “as good as” licenses in the ways I just listed, I probably wouldn’t recommend them. Here are some ways that they can be better, in ways that address some of the concerns I raised in my earlier posts in this series:

    • Global: While building global norms is not easy, social norms based on appeals to the very human desires for collaboration and partnership can be a lot more global than the current schemes for protecting database or hardware rights, which aren’t international. (You can try to fake internationalization through a license, but as I pointed out in earlier posts, that is likely to fail legally, and be ignored by exactly the largest partners who you most want to get on board.)
    • Flexible: Many of the practical problems with licenses in data space boil down to their inflexibility: if a license presumes something to be true, and it isn’t, you might not be able to do anything about it. Norms can be much more generous – well-intentioned re-users can creatively reinterpret the rules as necessary to get to a good outcome, without having to ask every contributor to change the license. (Copyright law in the US provides some flexibility through fair use, which has been critical in the development of the internet. The EU does not extend such flexibility to data, though member states can add some fair dealing provisions if they choose. In neither case are those exceptions global, so they can’t be relied on by collaborative projects that aim to be global in scope.)
    • Work against, not with, the permission culture: Lessig warned us early on about “permission culture” – the notion that we would always need to ask permission to do anything. Creative Commons was an attempt to fight it, but by being a legal obligation, rather than a normative statement, it made a key concession to the permission culture – that the legal system was the right terrain to have discussions about sharing. The digital world has pretty whole-heartedly rejected this conclusion, sharing freely and constantly. As a result, I suspect a system that appeals to ethical systems has a better chance of long-term sustainability, because it works with the “new” default behavior online rather than bringing in the heavy, and inflexible, hand of the law.

    Why you still need a (permissive) license

    Norms aren’t enough if the underlying legal system might allow an early contributor to later wield the law as a threat. That’s why the best practice in the data space is to use something like the Creative Commons public domain grant (CC-Zero) to set a clear, reliable, permissive baseline, and then use norms to add flexible requirements on top of that. This uses law to provide reliability and predictability, and then uses norms to address concerns about fairness, free-riding, and effectiveness. CC-Zero still isn’t perfect; most notably it has to try to be both a grant and a license to deal with different international rules around grants.

    What next?

    In this context, when I say “norms”, I mean not just the general term, but specifically written norms that can act as a reference point for community members. In the data space, some good examples are DPLA’s “CC0-BY” and the Canadensys biodiversity initiative. A more subtle form can be found buried in the terms for NIH’s Clinical Trials database. So, some potential next steps, depending on where your collaborative project is:

    • If your community has informal norms (“attribution good! sharing good!”) consider writing them down like the examples above. If you’re being pressed to adopt a license (hi, Wikidata!), consider writing down norms instead, and thinking creatively about how to name and shame those who violate those norms.
    • If you’re an organization that publishes licenses, consider using your drafting prowess to write some standard norms that encapsulate the same behaviors without the clunkiness of database (or hardware) law. (Open Data Commons made some moves in this direction circa 2010, and other groups could consider doing the same.)
    • If you’re an organization that keeps getting told that people won’t participate in your project because of your license, consider moving towards a more permissive license + a norm, or interpreting your license permissively and reinforcing it with norms.

    Good luck! May your data be widely re-used and contributors be excited to join your project.

    Syndicated 2016-09-26 15:00:12 from Blog – Luis Villa: Open Law and Strategy

    26 Sep 2016 olea   » (Master)

    HackLab Almería retrospective, 2012-2015 and a bit more

    This weekend I had the privilege of being invited by GDG Spain, and in particular by ALMO, to present at the Spanish GDG Summit 2016 our experience of the activity at the HackLab Almería:

    Although I arrived feeling rather insecure, because I am very critical of what I consider my own failures, once I heard about the ups and downs of the local GDG groups I realised that we are not doing so badly after all, and that we have experiences that are very interesting to others.

    Along the way it has helped me reconsider part of the work we have done, and to document our activities more clearly for our own people: I think it is a good idea for all of us to go over it again.

    There may be the odd error or omission. All opinions are strictly personal and nobody else has to share them. I am not so much interested in debating the claims as in correcting errors or inconsistencies. Keep in mind that this is not a complete record of activities, because that would be enormous; it is only a schematic retrospective.

    It is written as a mind map using Freemind 1.0.1. The format may seem cumbersome, but the constraints of time and of how the information can be presented did not allow me anything better. Sorry for the inconvenience. You can download the file containing the map from here:

    PS: this same entry has been published on the HackLab Almería forum.

    Syndicated 2016-09-25 22:00:00 from Ismael Olea

    26 Sep 2016 hacker   » (Master)

    HOWTO: Back up your Android device with native rsync

    Recently, one of my Android devices stopped reading the memory card. When I opened the device, the microSD card was so hot I couldn’t hold it in my hand. The battery on that corner of the device had started to swell slightly. I’ve used this device every day for 3+ years without any issues. Until this week. […]

    Related posts:
    1. HOWTO: Enable Docker API through firewalld on CentOS 7.x (el7) Playing more and more with Docker across multiple Linux distributions...
    2. HOWTO: Search and Destroy “Unlabeled” mail in Google Gmail I have over 14 years of email stored in Google...
    3. Tuesday Tip: rsync Command to Include Only Specific Files I find myself using rsync a lot, both for moving...

    Syndicated 2016-09-25 23:20:42 from random neuron misfires

    24 Sep 2016 LaForge   » (Master)

    (East) European motorbike tour on 20y old BMW F650ST

    For many years I've been wanting to do some motorbike riding across the Alps, but somehow never managed to do so. It seems when in Germany I've always been too busy - contrary to the many motorbike tours around and across Taiwan which I did during my frequent holidays there.

    This year I finally took the opportunity to combine visiting some friends in Hungary and Bavaria with a nice tour starting from Berlin over Prague and Brno (CZ), Bratislava (SK) to Tata and Budapest (HU), further along Lake Balaton (HU) towards Maribor (SI) and finally across the Grossglockner High Alpine Road (AT) to Salzburg and Bavaria before heading back to Berlin.

    It was eight fun (but sometimes long) days of riding. By some strange turn of luck, not a single drop of rain was encountered during all that time, travelling across six countries.

    The most interesting parts of the tour were:

    • Along the Elbe river from Pirna (DE) to Lovosice (CZ). Beautiful scenery along the river valley, with most parts of the road immediately on either side of the river. Quite touristy on the German side, much more pleasant and quiet on the Czech side.
    • From Mosonmagyarovar via Gyor to Tata (all HU). Very little traffic alongside road '1'. Beautiful scenery with lots of agriculture and forests left and right.
    • The northern coast of Lake Balaton, particularly from Tihany to Keszthely (HU). Way too many tourists and traffic for my taste, but still very impressive to realize how large/long that lake really is.
    • From Maribor to Dravograd (SI) alongside the Drau/Drav river valley.
    • Finally, of course, the Grossglockner High Alpine Road, which reminded me in many ways of the high mountain tours I did in Taiwan. Not a big surprise, given that both lead you up to about 2500 meters above sea level.

    Finally, I have to say I've been very happy with the performance of my 1996 model BMW F 650ST bike, which has coincidentally just celebrated its 20th anniversary. I know it's an odd bike design (650cc single-cylinder with two spark plugs, ignition coils and two carburetors) but consider it an acquired taste ;)

    I've also published a map with a track log of the trip

    In one month from now, I should be reporting from motorbike tours in Taiwan on the equally trusted small Yamaha TW-225 - which of course plays in a totally different league ;)

    Syndicated 2016-08-16 14:00:00 from LaForge's home page

    22 Sep 2016 pixelbeat   » (Journeyer)


    Enhance introspection at the python interactive prompt.

    Syndicated 2016-09-22 12:23:33 from

    21 Sep 2016 wingo   » (Master)

    is go an acceptable cml?

    Yesterday I tried to summarize the things I know about Concurrent ML, and I came to the tentative conclusion that Go (and any Go-like system) was an acceptable CML. Turns out I was both wrong and right.

    you were wrong when you said everything's gonna be all right

    I was wrong, in the sense that programming against the CML abstractions lets you do more things than programming against channels-and-goroutines. Thanks to Sam Tobin-Hochstadt for pointing this out. As an example, consider a little process that tries to receive a message off a channel, and times out otherwise:

    import "time"

    func withTimeout(ch chan int, timeout time.Duration) (result int) {
      timeoutChannel := make(chan int)
      // signal on timeoutChannel once the timeout has elapsed
      go func() {
        time.Sleep(timeout)
        timeoutChannel <- 0
      }()
      select {
      case msg := <-ch:
        return msg
      case <-timeoutChannel:
        return 0
      }
    }
    I think that's about the first Go I've ever written, so forgive any remaining clumsiness. Anyway, I think we see how it should work. We return the message from the channel, unless the timeout happens first.

    But, what if the message is itself a composite message somehow? For example, say we have a transformer that reads a value from a channel and adds 1 to it:

    func onePlus(in chan int) (result chan int) {
      out := make(chan int) // a nil channel would block forever
      go func() { out <- 1 + <-in }()
      return out
    }
    What if we do a withTimeout(onePlus(numbers), 0)? Assume the timeout fires first and that's the result that select chooses. There's still that onePlus goroutine out there trying to read from in and at some point probably it will succeed, but nobody will read its value. At that point the number just vanishes into the ether. Maybe that's OK in certain domains, but certainly not in general!

    What CML gives you is the ability to express an event (which is kinda like a possibility of sending or receiving a message on a channel) in such a way that we don't run into this situation. Specifically with the wrap combinator, we would make an event such that receiving on numbers would run a function on the received message and return that as the message value -- which is of course the same as what we have, except that in CML the select wouldn't actually read the message off unless it select'd that channel for input.

    Of course in Go you could just rewrite your program, so that the select statement looks like this:

    select {
    case msg := <-ch:
      return msg + 1
    case <-timeoutChannel:
      return 0
    }
    But here we're operating at a lower level of abstraction; we were forced to intertwingle our concerns of adding 1 and our concerns of timeout. CML is more expressive than Go.

    you were right when you said we're all just bricks in the wall

    However! I was right in the sense that you can build a CML system on top of Go-like systems (though possibly not Go in particular). Thanks to Vesa Karvonen for this comment and the link to their proof-of-concept CML implementation in Clojure's core.async. I understand Vesa also has an implementation in F#.

    Folks should read Vesa's code, after reading the Reppy papers of course; it's delightfully short and expressive. The basic idea is that event composition operators like choose and wrap build up data structures instead of doing things. The sync operation then grovels through those data structures to collect a list of channels to pass on to core.async's equivalent of select. When select returns, sync determines which event that chosen channel and message corresponds to, and proceeds to "activate" the event (and, as a side effect, possibly issue NACK messages to other channels).

    Provided you can map from the chosen select channel/message back to the event (something that core.async can mostly do, with a caveat; see the code), then you can build CML on top of channels and goroutines.

    o/~ yeah you were wrong o/~

    On the other hand! One advantage of CML is that its events are not limited to channel sends and receives. I understand that timeouts, thread joins, and maybe some other event types are first-class event kinds in many CML systems. Michael Sperber, current Scheme48 maintainer and functional programmer, tells me that simply wrapping events in channels+goroutines works but can incur a big performance overhead relative to supporting those event types natively, due to the need to make the new goroutine and channel and the scheduling costs. He quotes 10X as the overhead!

    So although CML and Go appear to be inter-expressible, maybe a proper solution will base the simple channel send/receive interface on CML rather than the other way around.

    Also, since these events are now second-class, it must be OK to lose these events, for the same reason that the naïve withTimeout could lose a message from numbers. This is the case for timeouts usually but maybe you have to think about this more, and possibly provide an infinite stream of the message. (Of course the wrapper goroutine would be collected if the channel becomes unreachable.)

    you were right when you said this is the end

    I've long wondered how contemporary musicians deal with the enormous, crushing weight of recorded music. I don't really pick any more but hoo am I feeling this now. I think for Guile, I will continue hacking on fibers in a separate library, and I think that things will remain that way for the next couple years and possibly more. We need more experience and more mistakes before blessing and supporting any particular formulation of highly concurrent programming. I will say though that I am delighted that we are able to actually do this experimentation on a library level and I look forward to seeing what works out :)

    Thanks again to Vesa, Michael, and Sam for sharing their time and knowledge; all errors are of course mine. Happy hacking!

    Syndicated 2016-09-21 21:29:15 from wingolog
