Older blog entries for benad (starting at number 126)

Flattening Circular Buffers

A few weeks ago I discovered TPCircularBuffer, a circular buffer implementation for Darwin-based operating systems, including Mac OS X and iOS. Now, I've implemented circular buffers before, so I thought there wasn't much need for yet another circular buffer implementation (let alone one specific to iOS), until I noticed something very interesting in the code.

A trick TPCircularBuffer uses is to map two adjacent blocks of virtual memory to the same underlying buffer. The buffer holds the actual data, and the virtual memory manager ensures that both mappings contain the exact same data, since both virtual memory blocks effectively map to the same physical memory. This makes things a lot easier than my naive implementations: rather than dealing with convoluted pointer arithmetic each time the producer or consumer reads or writes a sequence of values that crosses the end of the buffer, a simple linear read or write works. In fact, pointers into that doubly-mapped memory can be safely given to any normal function that accepts a pointer, removing the need to make memory copies before each use of the buffer by an external function.

In fact, this optimization is so common that a previous version of the Wikipedia page for circular buffers had some sample code using common POSIX functions. There's even a 10-year-old VRB - Virtual Ring Buffer library for Linux systems. As for Windows, I've yet to see good sample code, but you can do the equivalent with CreateFileMapping and MapViewOfFile.

Both Wikipedia's and VRB's implementations can be misleading, though, and not very portable. On Darwin, and I suspect BSD and many other systems, the mapped memory must be fully aligned to the size of a memory page ("allocation granularity" in Windows terms). On POSIX, that means using the value of sysconf(_SC_PAGESIZE). Since the page size is almost always a power of 2, that could explain the otherwise strange buffer->count_bytes = 1UL << order in Wikipedia's sample code.
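
To make the trick concrete, here is a minimal sketch of the double mapping using only POSIX calls (shm_open, ftruncate and mmap with MAP_FIXED). It is illustrative rather than production code: the shared memory object name is arbitrary, error handling is simplified, some systems spell MAP_ANONYMOUS as MAP_ANON, and on Linux you may need to link with -lrt.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Map the same POSIX shared memory object twice, back to back, so that
     * reads and writes that wrap around the end of the buffer become plain
     * linear accesses in the second mapping. */
    static void *double_map(size_t *size_inout)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t size = ((*size_inout + page - 1) / page) * page;  /* round up to a page */

        int fd = shm_open("/ringbuf-example", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd < 0) return NULL;
        shm_unlink("/ringbuf-example");                   /* keep the object anonymous */
        if (ftruncate(fd, (off_t)size) != 0) { close(fd); return NULL; }

        /* Reserve twice the address space, then overlay both halves with the
         * same object using MAP_FIXED. */
        void *base = mmap(NULL, size * 2, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED) { close(fd); return NULL; }
        void *lo = mmap(base, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_FIXED, fd, 0);
        void *hi = mmap((char *)base + size, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_FIXED, fd, 0);
        close(fd);
        if (lo == MAP_FAILED || hi == MAP_FAILED) { munmap(base, size * 2); return NULL; }

        *size_inout = size;
        return base;
    }

    int main(void)
    {
        size_t size = 4096;
        char *buf = double_map(&size);
        if (buf == NULL) return 1;

        /* A write that crosses the end of the buffer wraps transparently:
         * the last bytes show up again at offset 0 through the second mapping. */
        strcpy(buf + size - 3, "wrapped");
        printf("%.4s\n", buf);                            /* prints "pped" */

        munmap(buf, size * 2);
        return 0;
    }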

By the way, I'd like to reiterate how poor the built-in Mac OS X documentation is for POSIX and UNIX-like functions. Though it does warn pretty well about page size alignment and the risks involved with mmap's MAP_FIXED flag, the rest of the documentation fails to mention how to set the permissions of the memory map. Thankfully, the latest Linux man pages for the same functions are far better documented.

Syndicated 2016-05-23 18:01:52 from Benad's Blog

The Static Blog

A quick note to mention that I added The Static Blog to my main web site, discussing the relocation of the blog you’re reading right now to this site.

Syndicated 2016-04-27 01:11:57 from Benad's Blog

Final Blog Move?

This is a short post to mention that my blog, originally hosted on Squarespace as blog.benad.me, has now moved alongside my main web site under benad.me/blog.

All links to each post and the RSS feed should now automatically redirect to the new location. This may have created some duplicates in your feed reader.

There is still some clean up to do to make the older posts look better, and I need to post a longer article to explain the rationale behind this, but for now everything should be working fine. The blog.benad.me domain will remain active as an option if something were to happen, but for the time being my blog should remain where it is.

Syndicated 2016-03-29 19:51:33 from Benad's Blog

Amazon Cloud Drive Backend for Duplicity

To follow up on my previous post, I wrote acdbackend, which adds Amazon Cloud Drive support to duplicity by wrapping around the acd_cli tool.

Actually, adding support for an online storage service to duplicity is pleasantly easy. If you have some command-line tool for your service that supports download and upload using stdin/stdout, file listing, and deleting files, then in an hour you can have it working in duplicity.

Syndicated 2016-03-03 02:16:19 from Benad's Blog

DIY Backup

While in the past I did recommend CrashPlan as an online backup solution, I stopped using it in December. At first I used it because their multi-year, unlimited plans had reasonable prices, and it was the only online backup service (back in 2011) with client-side encryption support. But over the years I ran into multiple major issues. In 2013, through a transparent background update, they started excluding iOS device backups made in iTunes from being backed up, even though they would publicly say otherwise. In fact, that hidden file exclusion list was pushed from their "enterprise" version, which has since moved to a much nicer version 5 while they abandoned their home users to version 4. In November, their Linux client started requiring Java 1.7, so my older client running on 1.6 kept downloading updates and failing to install them until the hard drive was full. Their pricing just kept increasing over time, making it difficult for me to keep renewing.

I moved to iDrive, which is half the cost of CrashPlan and works pretty well, though I'm still a bit worried that I have to trust the client-side encryption of some closed-source software. Also, if you back up anything beyond 1 TB, their pricing becomes punitive.

The main issue that I have with all those backup services is that your backups become locked into online storage plans that are more expensive than competing generic cloud storage providers, and your valuable backups are held hostage if they increase their pricing. Even tarsnap, with its open-source client for the paranoid, locks you into an expensive storage plan, since the client requires some closed-source server software that only they host. I miss the days of older backup software like MobileMe Backup, where the backup software was somewhat separate from the actual storage solution.

Arq Backup looks more like the traditional backup software I was looking for. It can back up to a handful of cloud storage providers, with different pricing models, many of them with free initial storage plans if you have small backups. The software is $40 per machine, and then you're free to pick any supported cloud storage. The software isn't open-source, but the recovery software is open-source and documented, so you can vouch for its encryption to some extent.

But what if you're on Linux, or insist on an open-source solution (especially for the encryption part)? If you simply want to back up some files once, with no history, you can combine encfs, in "reverse" mode to get an encrypted view of your existing files, with Rclone. Note that with this approach extended file information may be lost in the transfer. If you want a more thorough versioned backup solution, Duplicity should work fine. It encrypts the files with GPG, and does file-level binary deltas to make backup files as small as possible. If Duplicity doesn't support your cloud storage directly, you can store the backups to disk and sync them with Rclone. To make using Duplicity easier, you can also use the wrapper tool duply.

As for which cloud storage provider to use, it depends on your needs. If you can fit your backups in less than about 15 GB, you can use the free tier of Google Drive. If you want flexible pricing and good performance at the lowest cost, Google Nearline looks like a great deal at $0.01 per GB per month. If you already have Office 365, then you already have 1 TB of OneDrive, though downloads can be a bit slow. The Unlimited plan of Amazon Cloud Drive has good transfer speeds and is worry-free, though Duplicity doesn't support it.

Syndicated 2016-02-25 03:56:28 from Benad's Blog

KeePass: Password Management Apps

Like many others, I'm a bit worried about the LogMeIn acquisition of LastPass. While they haven't drastically increased the pricing of LastPass (yet), it would be a good idea to look at other options.

A recommended option for open-source password management that keeps being mentioned is KeePass, a .NET application that manages encrypted passwords and secure notes. While it's mostly made for Windows, it does work, though clumsily, on the Mac using Mono. Even with the stable version of Mono, the experience is clunky: most keyboard shortcuts don't work, double-clicking on an item crashes the software half the time, and it generally looks horrible. Still, once you learn to avoid those Mono bugs, or you simply use that Windows virtual machine you have hanging around in your copy of VirtualBox, KeePass is a great tool.

There is a more "native" port of KeePass called KeePassX (as in, made for X.org). This one works much better on Macs, but has far fewer features than the .NET version.

As for portable versions, there are of course a dozen or so different options for Android, so I haven't explored those yet. For iOS, the best free option seems to be MiniKeePass. It doesn't sync automatically to any online storage, but transferring password database files in and out is simple enough that it should be acceptable if you only sparingly create new secure items on iOS.

Speaking of syncing, KeePass is server-less, as it only deals with database files. What the desktop KeePass can do, though, is easily synchronize two password database files with each other. The databases keep track of the history of changes for each item, so offline file synchronization is quite safe.

Scripting options seem to be limited. I found a Perl module, File::KeePass, but it has a rather large bug that needs to be patched with a proper implementation of Salsa20.

There is also a new, 20-day-old KeePass-compatible app written entirely in HTML and JavaScript, called KeeWeb. It can be served up as a single static HTML page on any HTTPS server, with no server-side code needed. It can also work as a standalone desktop application. It is too new for me to recommend it (a new release was done as I was typing this), but in my limited tests it worked amazingly well. For example, I was able to load and save my test KeePass file from OneDrive using Safari on my iPhone 6. Once it matures, it may even replace MiniKeePass as my recommended iOS KeePass app.

The fact that the original KeePass code was clean and documented enough to allow for so many different implementations means that using KeePass is pretty much "future proof", unlike any online password service. Sure, browser plugin options are limited and there's no automatic synchronization, but I would fully trust it.

Syndicated 2015-11-10 01:01:10 from Benad's Blog

The Twilight Zone: Top 10 Episodes (Spoilers-Free)

Last week, I discovered that I now have (legal) access to the 1960s series of The Twilight Zone, just in time for this "Halloween month". Unwilling to watch all 5 seasons, I looked online for the "best 10 episodes". Doing so was problematic, and risky for those like me who are new to the series.

First, there are 156 episodes, so it isn't likely that you'll get a good consensus on the best ten. The IMDB episode ratings may be the closest thing to a consensus, but it's unlikely that everybody who rated episodes watched the full series. Looking at individual top-10 lists, each author's personal taste in the kind of episode they prefer also creates a bias.

Second, though it's not that big of a problem, most of those top-10 lists mention only the episode titles, not their numbers (season and episode within the season). I don't want to scan through the full 156-episode list to find a matching title each time I want to watch an episode.

Finally, and this is the biggest issue, most of those lists include not only the episode titles, but also a description of what happens in the episode, and sometimes even a screenshot that spoils the whole twist.

So I went through ten of those lists, giving more weight to the IMDB one, and here's the result. No spoilers, just the title and episode numbers.

  1. The Eye of the Beholder S02E06
  2. Time Enough at Last S01E08
  3. It's a Good Life S03E08
  4. The Monsters Are Due on Maple Street S01E22
  5. Nightmare at 20,000 Feet S05E03
  6. To Serve Man S03E24
  7. Walking Distance S01E05
  8. Living Doll S05E06
  9. The Invaders S02E15
  10. Will the Real Martian Please Stand Up? S02E28

Notice that none of these are from season 4, which had hour-long episodes rather than 30-minute ones.

Also, a few honourable mentions that ranked high enough in many of the lists.

  1. A Stop at Willoughby S01E30
  2. The Hitch-Hiker S01E16
  3. Five Characters in Search of an Exit S03E14
  4. Twenty-Two S02E17
  5. Long-Distance Call S02E22
  6. Nick of Time S02E07
  7. The Obsolete Man S02E29
  8. The Masks S05E25

Syndicated 2015-10-11 01:40:03 from Benad's Blog

Exploring Reactive Programming

A few months ago, I discovered the oddly-named JavaScript library "bacon.js". Essentially, it lets you declare and compose event channels. While it seems overly abstract, the sample code intrigued me, as it introduced me to what is called "reactive programming".

Let's put this in the context of typical UI programming. Let's say you want to write a GUI with a button that initiates a file download. You can't simply make the download synchronous, as it would "freeze" the entire GUI. The classical way to handle that is to create a new background thread that executes the download and also sends appropriate GUI events to display the state of the download.

The problem with that approach is that if you change the GUI, you now have to change not only the code that gets called when the download button is clicked, but also all the GUI updates done by the background thread. The GUI logic is intermingled with the logic to start the thread, which is itself intermingled with the download logic proper.

In more modern concurrency interfaces, the GUI code can spawn a new "Future", and describe what should happen when it completes outside of the code the Future will execute. This works well as long as the GUI doesn't need a download progress bar of some kind, and it keeps the download logic free of GUI logic. Still, this is risky: if for any reason the GUI vanishes (window closed, etc.), there is no easy way for the code related to the button click to describe how and when to cancel the background download.

This is where "event channels" come into play. The best-known implementation is the UNIX shell, where you "pipe" one process' output to another's input. If the reading process terminates, the writing process receives a SIGPIPE signal the next time it writes to the pipe, which by default terminates it as well; conversely, when the writer exits, the reader sees end-of-file. This is an easy way to create process groups, without having to explicitly tell the kernel about it.
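
As a small, hypothetical illustration of that behaviour, the following C sketch closes the read end of a pipe to stand in for a terminated downstream process, and ignores SIGPIPE so the resulting EPIPE error can be printed instead of the default disposition, which would simply terminate the writer.

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0) return 1;

        /* With the default disposition, SIGPIPE would terminate this process,
         * which is exactly how a shell pipeline tears itself down. Ignore it
         * here so we can observe the error code instead. */
        signal(SIGPIPE, SIG_IGN);

        close(fds[0]);                      /* pretend the reading process exited */

        if (write(fds[1], "data\n", 5) < 0 && errno == EPIPE)
            fprintf(stderr, "writer got EPIPE: the reader is gone\n");

        close(fds[1]);
        return 0;
    }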

Similar "process group" patterns exist in programming languages that support event communication between pseudo-processes or threads, for example the OTP Supervisor in Erlang.

Even with all of this (Futures, process pipes, supervisors), there are still a few things missing to make the implementation of a GUI download button simpler. First, there is no easy way to connect changes to mutable values (whether the download button is active or not, for example) to an event channel, and vice versa. Basically, we need some kind of observer pattern, but applied to event handling. This has been my main gripe about MVC since, well, a long time ago. Also, there is no easy way to compose event channels together, even for something as simple as aggregating multiple channel sources into a new channel. While none of that is terribly new in the networking world, with things like ZeroMQ and so on, in a programming environment without the unreliability inherent in networking and with no need for an interoperable packet-oriented stack, combining "networking" events together as a design pattern is quite compelling.

Hence my interest in bacon.js. It was inspired by the more comprehensive RxJS by Microsoft, and complements the React JavaScript library by Facebook. In fact, there is even a reactive programming manifesto, though it may be more the result of consultants hungry for the next wave of buzzwords than anything else. Still, it feels like what Aspect-Oriented Programming did to the Inversion of Control pattern, but applied to asynchronous event-based programming, which is to say that it brings it to a whole new level.

Syndicated 2015-09-15 23:46:57 from Benad's Blog

10s Everywhere

So recently I installed Windows 10 on my MacBook Pro alongside Mac OS X Yosemite 10.10. If you're keeping count, that's four 10s.

Upgrading Windows 8.1 to 10 was a strange experience. First, the Windows 10 notification icon never showed up. Looking at the Event Logs of GWX.exe ("Get Windows 10", I guess), it had been crashing with "data is invalid" for the past few months. Yet, the same logs showed clearly that my license was valid and ready to be upgraded to 10. Luckily, Microsoft now offers the Windows 10 ISO download, and the software used to download and "burn" a USB key also allowed for an in-place upgrade with no need for a USB key or DVD.

Yet, after the upgrade, I noticed that all network connections were disabled. Yes, the Boot Camp drivers were installed correctly, and Windows insisted the drivers were working correctly, but it was as if the entire TCP stack had been removed. I tried everything for a few hours, getting lost in regedit, so I gave up and used the option to revert back to Windows 8.1. Once back, things were even worse, with all keyboards disabled.

Before reverting back to 8.1, I had attempted to remove all 3rd-party software that could have an impact on the network, including an old copy of the Cisco VPN client and the Avast anti-virus. The Cisco VPN client refused to be uninstalled for some reason. Back on 8.1, I could easily remove the VPN client (using the on-screen keyboard), but it was as if 8.1 kept a trace of the Avast install even though Avast was not there anymore. Luckily, I found the download link to the full offline Avast 2015 installer in the user forums. After reinstalling it, both the keyboard and the network were enabled.

Having learned that VPN and anti-virus software can break things in Windows 10, I uninstalled all of these, and then upgraded to 10 again. I had to reinstall the Boot Camp drivers for my model of MacBook Pro, and this time everything was working fine. I could easily restore Avast, but the old Cisco VPN driver clearly couldn't work anymore. This isn't a big issue, since I keep a Windows 7 virtual machine for that.

What about using Boot Camp in a virtual machine? Well, there are two workarounds I had to do to make it work with Parallels Desktop. First, Article ID 122808 describes how to patch the file C:\Windows\inf\volume.inf so that Parallels can detect the Windows 10 partition. It just so happens that I already had my copy of Paragon NTFS for Mac, so changing the file when booted in the Mac partition was easy. Then, from Article ID 116582, since I'm using a 64-bit EFI installation of Windows, I had to run their strange bootcamp-enable-efi.sh script. It needs administrator privileges, so I temporarily enabled that on my user account to run it. After all of this, Windows got a bit confused about the product activation, but after a few reboots between native and virtual machine modes, it somehow picked up the activation.

So, what about Windows 10 itself? For me, it worked fine. It isn't a huge upgrade compared to Windows 8.1, but it's more usable in a desktop environment. For Windows 7 users, I would definitely recommend it, maybe after a few months once they fix the remaining bugs. As usual, backing up your files is highly recommended (even if you don't upgrade).

Syndicated 2015-08-02 19:57:31 from Benad's Blog

A Code's First Draft

Incremental software development, or evolutions of it, is now pretty much the standard approach, as we now expect requirements to change all the time. But this too easily leads to "over-engineering": since we expect change at all times, we spend too much effort maximizing the flexibility of the code over any other quality.

I admit that in the past I too fell into the trap of over-engineering my code, for the sake of "beautiful design" over functionality, making the code far more difficult to understand than necessary. From that experience, I now make incremental design changes more reactively.

Practically, it means that I always make a "first draft" of my code with minimal design, and then, based on that experience, make a second draft with a first draft of the design, all before the first wave of requirements changes. This is quite different from software prototyping, where the first iteration is expected to be deleted or completely rewritten over time. In my case, most of the code of the first draft remains, but moved and refactored to fit the first design change.

The first code draft is done primarily as a proof of concept that demonstrates feasibility, to reduce future risk as much as possible. That way, regardless of future design or functional changes, at least we have a simple functional version of the code. That first draft could even be used as some "sample pseudo-code" to document the functional mechanism of the code, outside of the design and architectural complexities that are added later on as the software grows. That implies that the first code draft should be so clear and simple that it is (almost fully) self-documented.

Secondarily, it helps in making worthwhile design decisions early. Once you have working code, it's easier to see which design patterns would be useful, and precisely where. You can see in context the costs and benefits of each design pattern, and only those that are worth it are applied as a first design iteration. Once additional features are added or existing ones changed, some new design decisions may be needed, but if by the time of the second draft you have sound code and design, it will be easier to adapt than if you had greedily made inappropriate or unnecessary design choices.

At some point, though, the extra effort of making design changes on top of purely functional coding changes may be too costly if requirements changes are chaotic or undisciplined. This may be why so many programmers invest in design upfront while they have the chance, dooming the code to over-engineering. The software engineers may be the only ones in the software development process who can present (and defend) the impact of endless changes on quality (bad code, inappropriate design, etc.), so over-design may be indicative of greater organizational issues.

Syndicated 2015-06-01 02:18:25 from Benad's Blog
