Older blog entries for benad (starting at number 122)

DIY Backup

While in the past I did recommend CrashPlan as an online backup solution, I stopped using it in December. At first I used it because their multi-year, unlimited plans had reasonable prices, and it was the only online backup service (back in 2011) that had client-side encryption support. But over the years I ran into multiple major issues. In 2013, through a transparent background update, they started excluding iOS backups made in iTunes from being backed up, even though they publicly said otherwise. In fact, that hidden file exclusion list was pushed from their "enterprise" version, which has since moved to a much nicer version 5 while they abandoned their home users on version 4. In November, their Linux client started requiring Java 1.7, so my older client running on Java 1.6 kept downloading the update and failing to install it until the hard drive was full. And their pricing just kept increasing over time, making it difficult for me to keep renewing.

I moved to iDrive, which is half the cost of CrashPlan and works pretty well, though I'm still a bit worried that I have to trust client-side encryption to some closed-source software. Also, if you back up anything beyond 1 TB their pricing becomes punitive.

The main issue I have with all those backup services is that your backups become locked into online storage plans that are more expensive than competing generic cloud storage providers, and your valuable backups are held hostage if they increase their pricing. Even tarsnap, with its open-source client for the paranoid, locks you into an expensive storage plan, since the client requires closed-source server software that only they host. I miss the days of older backup software like MobileMe Backup, where the backup software was somewhat separate from the actual storage solution.

Arq Backup looks more like the traditional backup software I was looking for. It can back up to a handful of cloud storage providers, with different pricing models, many of them with free initial storage plans if your backups are small. The software is $40 per machine, and then you're free to pick any supported cloud storage. The software isn't open-source, but the recovery software is open-source and documented, so you can vouch for its encryption to some extent.

But what if you're on Linux, or insist on an open-source solution (especially for the encryption part)? If you simply want to back up some files once, with no history, you can combine encfs, in "reverse" mode to get an encrypted view of your existing files, with Rclone. Note that with this approach extended file information may be lost in the transfer. If you want a more thorough versioned backup solution, Duplicity should work fine. It encrypts the files with GPG, and does file-level binary deltas to make the backup files as small as possible. If Duplicity doesn't support your cloud storage directly, you can store the backups to disk and sync them with Rclone. To make using Duplicity easier, you can also use the wrapper tool duply.
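
As a rough sketch of both approaches (the "mycloud" Rclone remote, the GPG key ID and all the paths here are placeholders to adapt to your own setup):

  # One-shot, unversioned backup: expose an encrypted view of the files
  # with encfs in reverse mode, then mirror that view to cloud storage.
  mkdir -p ~/encrypted-view
  encfs --reverse ~/Documents ~/encrypted-view
  rclone sync ~/encrypted-view mycloud:backup/documents
  fusermount -u ~/encrypted-view

  # Versioned backup: Duplicity writes GPG-encrypted, delta-based archives
  # to a local directory, which Rclone then syncs to the cloud.
  duplicity --encrypt-key MYGPGKEYID ~/Documents file:///mnt/backups/documents
  rclone sync /mnt/backups/documents mycloud:backup/archives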

As for which cloud storage provider to use, it depends on your needs. If you can fit your backups in less than about 15 GB, you can use the free tier of Google Drive. If you want flexible pricing and good performance at the lowest cost, Google Nearline looks like a great deal at $0.01 per GB per month. If you already have Office 365, then you already have 1 TB of OneDrive, though downloads can be a bit slow. Amazon Cloud Drive's Unlimited plan has good transfer speeds and is worry-free, though Duplicity doesn't support it.

Syndicated 2016-02-25 03:56:28 from Benad's Blog

KeePass: Password Management Apps

Like many others, I'm a bit worried about the LogMeIn acquisition of LastPass. While they haven't drastically increased the pricing of LastPass (yet), it would be a good idea to look at other options.

A recommended option for open-source password management that keeps being mentioned is KeePass, a .NET application that manages encrypted passwords and secure notes. While it's mostly made for Windows, it does work, though clumsily, on the Mac using Mono. Even with the stable version of Mono, the experience is clunky: most keyboard shortcuts don't work, double-clicking on an item crashes the software half the time, and it generally looks horrible. Still, once you learn to avoid those Mono bugs, or you simply use that Windows virtual machine you have hanging around in your copy of VirtualBox, KeePass is a great tool.

There is a more "native" port of KeePass called KeePassX (as in, made for X.org). This one works much better on Macs, but has far fewer features than the .NET version.

As for portable versions, there are of course a dozen or so different options for Android, so I haven't explored those yet. For iOS, the best free option seems to be MiniKeePass. It doesn't sync automatically to any online storage, but transferring password database files in and out is simple enough that it should be acceptable if you only sparingly create new secure items on iOS.

Speaking of syncing, KeePass is server-less, as it only deals with database files. What the desktop KeePass can do, though, is easily synchronize two password database files with each other. The databases keep track of the history of changes for each item, so offline file synchronization is quite safe.

Scripting options seem to be limited. I found a Perl module, File::KeePass, but it has a rather large bug that needs to be patched with a proper implementation of Salsa20.

There is also a 20-day-old KeePass-compatible app done entirely in pure HTML and JavaScript, called KeeWeb. It can be served up as a single static HTML page on any HTTPS server, and no server-side code is needed. It can also work as a standalone desktop application. It is too new for me to recommend it (a new release came out as I was typing this), but in my limited tests, it worked amazingly well. For example, I was able to load and save my test KeePass file from OneDrive using Safari on my iPhone 6. Once it matures, it may even replace MiniKeePass as my recommended iOS KeePass app.

The fact that the original KeePass code was clean and documented enough to allow for so many different implementations means that using KeePass is pretty much "future proof", unlike any online password service. Sure, browser plugin options are limited and there's no automatic synchronization, but I would fully trust it.

Syndicated 2015-11-10 01:01:10 from Benad's Blog

The Twilight Zone: Top 10 Episodes (Spoilers-Free)

Last week, I discovered that I now have access (legally) to the 1960s series of The Twilight Zone, just in time for this "Halloween month". Unwilling to watch all 5 seasons, I looked online for the "best 10 episodes". Doing so was problematic, and risky for those like me who are new to the series.

First, there are 156 episodes, so it isn't likely that you'll get a good consensus on what the best ten are. The IMDB episode ratings may be the closest thing to a consensus, but it's unlikely that everybody who rated episodes watched the full series. In individual top-10 lists, the authors' personal preferences for certain kinds of episodes also create a bias.

Second, though it's not that big a problem, most of those top-10 lists mention only the episode titles, not their numbers (season and episode within the season). I don't want to scan through the full list of 156 episodes to find a matching title each time I want to watch an episode.

Finally, and this is the biggest issue, most of those lists include not only the episode titles, but also a description of what happens in the episode, and sometimes even a screenshot that spoils the whole twist.

So I went through ten of those lists, giving more weight to the IMDB one, and here's the result. No spoilers, just the title and episode numbers.

  1. The Eye of the Beholder S02E06
  2. Time Enough at Last S01E08
  3. It's a Good Life S03E08
  4. The Monsters Are Due on Maple Street S01E22
  5. Nightmare at 20,000 Feet S05E03
  6. To Serve Man S03E24
  7. Walking Distance S01E05
  8. Living Doll S05E06
  9. The Invaders S02E15
  10. Will the Real Martian Please Stand Up? S02E28

Notice that none of these are part of season 4, which had hour-long episodes rather than 30-minute ones.

Also, here are a few honourable mentions that showed up high enough in many lists.

  1. A Stop at Willoughby S01E30
  2. The Hitch-Hiker S01E16
  3. Five Characters in Search of an Exit S03E14
  4. Twenty-Two S02E17
  5. Long-Distance Call S02E22
  6. Nick of Time S02E07
  7. The Obsolete Man S02E29
  8. The Masks S05E25

Syndicated 2015-10-11 01:40:03 from Benad's Blog

Exploring Reactive Programming

A few months ago, I discovered the oddly-named JavaScript library "bacon.js". Essentially, it lets you declare and compose event channels. While it seems overly abstract, the sample code intrigued me, as it introduced me to what is called "reactive programming".

Let's put this in the context of typical UI programming. Let's say you want to write a GUI with a button that initiates a file download. You can't simply make the download synchronous, as it would "freeze" the entire GUI. The classical way to handle that is to create a new background thread that executes the download and also sends appropriate GUI events to display the state of the download.

The problem with that approach is that if you change the GUI, you now have to change not only the code that gets called when the download button is clicked, but also all the GUI updates done by the background thread. The GUI logic is intermingled with the logic to start the thread, which is in turn intermingled with the download logic proper.

In more modern concurrency interfaces, the GUI code can spawn a new "Future", and describe what should happen when it completes outside of the code the Future will execute. This works well as long as the GUI doesn't have a download progress bar of some kind, and it keeps the download logic free of GUI logic. Still, this is risky: if for any reason the GUI vanishes (window closed, etc.), there is no easy way for the code behind the button click to describe how and when to cancel the background download.

This is where "event channels" come into play. The most known implementation are the UNIX shells, where you would "pipe" one process' output to another's input. If the first process is terminated, the second process will get an interruption event when it attempts to read from the pipe, which by default cause the second process to be terminated. This is an easy way to create process groups, without having to explicitly tell the kernel about it.

Similar "process group" patterns exist in programming languages that support event communication between pseudo-processes or threads, for example the OTP Supervisor in Erlang.

Even with all of this (Futures, process pipes, supervisors), there are still a few things missing to make the implementation of a GUI download button simpler. First, there is no easy way to connect changes to mutable values (whether the download button is active or not, for example) to an event channel, and vice-versa. Basically, we need some kind of observer pattern, but applied to event handling. This has been my main gripe about MVC since, well, a long time ago. Also, there is no easy way to compose event channels together, even for something as simple as aggregating multiple channel sources into a new channel. While none of that is terribly new in the networking world, with things like ZeroMQ and so on, in a programming environment without the unreliability inherent in networking and with no need for an interoperable packet-oriented stack, combining "networking" events together as a design pattern is quite compelling.

Hence why I was intrigued by bacon.js. It was inspired by the more comprehensive RxJS by Microsoft, and complements the React JavaScript library by Facebook. In fact, there is even a reactive programming manifesto, though it may be more the result of consultants hungry for the next wave of buzzwords than anything else. Still, it feels like what Aspect-Oriented Programming did to the Inversion of Control pattern, but applied to asynchronous event-based programming, which is to say that it brings it to a whole new level.

Syndicated 2015-09-15 23:46:57 from Benad's Blog

10s Everywhere

So recently I installed Windows 10 on my MacBook Pro alongside Mac OS X Yosemite 10.10. If you're keeping count, that's four 10s.

Upgrading Windows 8.1 to 10 was a strange experience. First, the Windows 10 notification icon never showed up. Looking at the Event Logs for GWX.exe ("Get Windows 10", I guess), it had been crashing with "data is invalid" for the past few months. Yet the same logs showed clearly that my license was valid and ready to be upgraded to 10. Luckily, Microsoft now offers the Windows 10 ISO download, and the software used to download and "burn" it to a USB key also allows for an in-place upgrade, with no need for a USB key or DVD.

Yet, after the upgrade, I noticed that all network connections were disabled. Yes, the Boot Camp drivers were installed correctly, and Windows insisted the drivers were working correctly, but it was as if the entire TCP stack had been removed. I tried everything for a few hours, getting lost in regedit, so I gave up and used the option to revert back to Windows 8.1. Once back, things were even worse, with all keyboards disabled.

Before reverting back to 8.1, I had attempted to remove all third-party software that could have an impact on the network, including an old copy of the Cisco VPN client and the Avast anti-virus. The Cisco VPN client refused to be uninstalled for some reason. Back on 8.1, I could easily remove the VPN client (using the on-screen keyboard), but it was as if 8.1 kept a trace of the Avast install even though Avast was not there anymore. Luckily, I found the download link to the full offline Avast 2015 installer in the user forums. After reinstalling it, both the keyboard and the network were enabled again.

Having learned that VPN and anti-virus software can break things in Windows 10, I uninstalled all of these, and then upgraded to 10 again. I had to reinstall the Boot Camp drivers for my model of MacBook Pro, and this time everything worked fine. I could easily restore Avast, but the old Cisco VPN driver clearly couldn't work anymore. This isn't a big issue, since I keep a Windows 7 virtual machine for that.

What about using Boot Camp in a virtual machine? Well, there are two workarounds I had to apply to make it work with Parallels Desktop. First, Article ID 122808 describes how to patch the file C:\Windows\inf\volume.inf so that Parallels can detect the Windows 10 partition. It just so happens that I already had my copy of Paragon NTFS for Mac, so changing the file while booted into the Mac partition was easy. Then, from Article ID 116582, since I'm using a 64-bit EFI installation of Windows, I had to run their strange bootcamp-enable-efi.sh script. It needs administrator privileges, so I temporarily enabled those on my user account to run it. After all of this, Windows got a bit confused about the product activation, but after a few reboots between native and virtual machine modes, it somehow picked up the activation.

So, what about Windows 10 itself? For me, it worked fine. It isn't a huge upgrade compared to Windows 8.1, but it's more usable in a desktop environment. For Windows 7 users, I would definitely recommend it, though maybe after a few months, once they fix the remaining bugs. As usual, backing up your files is highly recommended (even if you don't upgrade).

Syndicated 2015-08-02 19:57:31 from Benad's Blog

A Code's First Draft

Incremental software development, or evolutions of it, is now pretty much the standard approach, as we now expect requirements to change all the time. But this too easily leads to "over-engineering": since we expect change at all times, we spend too much effort maximizing the flexibility of the code over any other quality.

I admit that in the past I too fell into the trap of over-engineering my code, for the sake of "beautiful design" over functionality, making the code unnecessarily difficult to understand. From that experience, I now make incremental design changes more reactively.

Practically, it means that I always make a "first draft" of my code with minimal design, and then, based on that experience, make a second draft with a first draft of the design, all of that before the first wave of requirements changes. This is quite different from software prototyping, where the first iteration is expected to be deleted or completely rewritten over time. In my case, most of the code of the first draft remains, but moved and refactored to fit the first design change.

The first code draft is done primarily as a proof of concept that demonstrates feasibility, to reduce future risk as much as possible. That way, regardless of future design or functional changes, at least we have a simple functional version of the code. That first draft could even be used as some "sample pseudo-code" to document the functional mechanism of the code, outside of the design and architectural complexities that are added later on as the software grows. That implies that the first code draft should be so clear and simple that it is (almost fully) self-documented.

Secondarily, it helps in making worthwhile design decisions early. Once you have working code, it's easier to see what design patterns would be useful, and precisely where. You can see in context the costs and benefits of each design pattern, and only those that are worth it are applied as a first design iteration. Once additional features are added or existing ones changed, some new design decisions may be needed, but if by the time of the second draft you have sound code and design, it will be easier to adapt than if you had greedily made inappropriate or unnecessary design choices.

At some point, though, the extra effort of making design changes on top of purely functional coding changes may be too costly if requirements changes are chaotic or undisciplined. This may be why so many programmers invest in design upfront while they have the chance, dooming the code to over-engineering. The software engineers may be the only ones in the software development process that can present (and defend) the impact of endless changes on quality (bad code, inappropriate design, etc.), so over-design may be indicative of greater organizational issues.

Syndicated 2015-06-01 02:18:25 from Benad's Blog

The Mystery of Logitech Wireless Interferences

As I mentioned before, I got a new gaming PC a few months ago. Since it sits below my TV, I also bought with it a new wireless keyboard and mouse, the Logitech K360 and M510, respectively. I'm used to Bluetooth mice and keyboards, but it seems that in the PC world Bluetooth is not as commonplace as on Macs, so the standard is to use some dongle. Luckily, Logitech uses a "Unifying Receiver", so both the keyboard and mouse can share a single USB receiver, freeing up an additional port. In addition, the Alienware Alpha has a hidden USB 2.0 port underneath it, which seemed to be the ideal place for the dongle, freeing up all the external ports.

My luck stopped there, though. Playing some first-person shooters, I noticed that the mouse was quite imprecise, and from time to time the keyboard would lag for a second or so. Is that why "PC gaming purists" swear by wired mice and keyboards? I moved the dongle to the back and front USB ports, and the issue remained. As a test, I plugged in my wired Logitech G500 mouse with the help of a ridiculously long 3-meter USB cable, and that seemed to solve the problem. But I was left with a half-working wireless keyboard and, with that USB cable, an annoying setup.

I couldn't figure out what was wrong, and was willing to absorb the cost, until I found this post on the Logitech forums. Essentially, the receiver doesn't play well with USB 3.0. I'm not talking about issues when you plug the receiver into a USB 3.0 port, since that would have been a non-issue with the USB 2.0 port I was using underneath the Alpha. Nope. The mere presence of a USB 3.0 port in the proximity of the receiver creates a "significant amount of RF noise in the 2.4GHz band" used by Logitech. To be fair (and they insist on mentioning it), this seems to be a systemic issue with all 2.4GHz devices, and not just Logitech's.

So I did a test. I took this really long USB cable and connected the receiver to it, making the receiver sit right next to the mouse and keyboard on the opposite side of the room from the TV and the Alpha. And that solved the issue. Of course, to avoid that new "USB cable across the room" issue, I used a combination of a short half-meter USB cable and a USB hub with another half-meter cable to place the receiver on the opposite side of the TV cabinet. Again, the interference was removed.

OK, I guess all is fine and my mouse and keyboard are fully functional, but what about those new laptops with USB 3.0 on each port? Oh well, next time I'll stick to Bluetooth.

Syndicated 2015-05-03 21:48:04 from Benad's Blog

Electricity Savings: All Those Blinking Lights

As part of my "spring cleaning", and partly inspired by this "Earth Hour" thing, I did an inventory of all the connected electrical devices around my apartment.

I basically categorized them this way:

  1. Devices that are used all the time and must be connected: Lights, electrical heating, fridge, water heater and so on.
  2. Devices that are seldom used, but cannot be turned off completely or disconnected easily: Oven, washer, dryer, and so on.
  3. Devices that are on all the time, for some reason.
  4. Devices that are used enough to warrant leaving them in "low-power standby mode".
  5. Devices I should turn off completely or disconnect when not used.

While I can't do anything about the devices in categories 1 and 2, other than replacing them, my goal was to move as many devices as possible to either standby or turned off. For example, my "home server PC", a Mac mini, doesn't use much power, but do I really need to have it running all the time? So I programmed it to stay in standby, and wake up only during the afternoons on weekdays.
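
On a Mac, one way to set up that kind of schedule is with pmset; the times below are just an example of "weekday afternoons":

  # Wake (or power on) on weekday afternoons, go back to sleep in the evening.
  sudo pmset repeat wakeorpoweron MTWRF 13:00:00 sleep MTWRF 18:00:00
  # Check the resulting schedule.
  pmset -g sched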

For devices already in standby mode, are they used enough? For example, my Panasonic Blu-ray player stayed warm because it remained in standby mode, and for what? About 10 seconds of boot time? Since my TV takes that much time to "boot up" anyway, I just need to power on both at the same time, and I'll save all the electricity spent keeping the player in standby all the time.

I am generally less worried about laptops, tablets and other battery-operated mobile devices when they sit in standby. They are already quite energy-efficient, running on batteries or not, especially when not actively used. Still, unplugging them from their chargers reduces the risk if there's an electrical surge in the apartment's wiring.

Syndicated 2015-03-30 20:26:00 from Benad's Blog

Alpha: My First PC

The PC port of Final Fantasy VII that I recently completed was the first of many PC-only games I wanted to play, but had queued up because playing PC games is inconvenient. I have a 2011 Mac mini that I can dual-boot into Windows, which is what I mostly used for FF VII, but rebooting was slow, the mini was noisy, and its graphics card was simply unable to properly play games made after 2010. I have a late-2013 MacBook Pro, but I keep using it for work, it's inconvenient for playing on a TV, and its graphics card could have been better.

I insisted on using Macs, even for PC games, because "gaming PCs" are just too much trouble. Almost all small-form-factor PCs sacrifice graphics performance for size and quieter fans, including the mini. On the other end, even your average "gaming PC" is an expensive, bulky tower with neon lights that requires manual assembly. Here's the thing: I can do all of that without problem, from building a PC server to maintaining Windows Server. But that's what I do at work. It's as if there is no such thing as a "casual gaming PC for your TV". Well, at least until the Alienware Alpha, essentially a small-form-factor gaming PC.

The Alienware Alpha is presented as a kind of video game console. While it runs Windows 8.1, its default user account runs a modified version of XBMC that replaces the Windows desktop, and lets you run Steam in "Big Picture" mode. The entire setup can be done (a bit clumsily) using the provided Xbox 360 controller (oddly, with its USB dongle for wireless use). For me, though, I already had my wireless mouse and keyboard (and a USB mouse with a long USB extension for FPS games), because I want to play older PC games made for a mouse and keyboard, so I ultimately disabled that "full screen" account and set up a standard desktop Windows account.

And you have to accept that the Alienware Alpha is a PC that isn't that user-friendly and requires tweaking to play games. For example, the frame rate of "Metro: Last Light" was terrible because it was using outdated Nvidia libraries; updating the library files made the game much faster. Geometry Wars 3 had terrible lag issues, until you ran it in windowed mode or manually edited its settings file. Actually, the simple fact that the Alpha's Nvidia card is "too new" to be recognized by older games is enough to force you to tweak all the settings. I'm still curious about dual-booting into SteamOS, a Linux distribution of Steam that has a proper "console feel", though most games I want to play are PC-only or not on Steam in the first place (they're from GOG, actually).

With all that said, the Alpha is a pretty good PC. I was able to play all the games at maximum settings at at least 30 frames per second, and much more for games made before 2012. It's well optimized for 1080p, which is less than the 4K support of current-gen 3D gaming cards, but is perfect for TV use. The hard drive is slower than my MacBook Pro's SSD, but the 3D card is so much better on the Alpha that I don't mind the extra load time. You can still easily replace the hard drive in the Alpha with an SSD, and you can upgrade pretty much everything else but the motherboard and 3D chip, with detailed service manuals. It has an HDMI passthrough, digital optical audio output, many USB 2 and 3 ports (and even a hidden USB port underneath, perfect for my wireless keyboard dongle). Finally, its price is competitive, meaning absurdly cheap compared to similar specifications from Apple.

What I'm saying is that the Alienware Alpha is a good "entry-level" casual gaming PC for use on a TV, without the hassle of a typical PC tower. That, and I now have a PC. I still feel a bit weird about that.

Syndicated 2015-01-14 00:33:59 from Benad's Blog

The Last Retro Final Fantasy

Going back to my previous post, I'm a bit relieved that Final Fantasy VII didn't live up to its hype. And what hype. When released in 1997, it was backed by the unprecedented weight of Sony making it the flagship game of their first foray into video game consoles. The game was marketed everywhere as a kind of "movie as a game", placing emphasis on the FMVs (as part of a $100 million, three-month publicity campaign that included television and cinema). For many, Final Fantasy VII was their first video game experience.

Let's step back a bit and look at its predecessor, Final Fantasy VI (named "Final Fantasy III" on Nintendo platforms). Its setting sits exactly halfway between "Dungeons and Dragons"-style fantasy and present-day Shinto-style fantasy. It does so by placing the game in a world where magic vanished for a thousand years while the world evolved into a "steampunk" style. It successfully explains, through its story, the source of magic in this world, including deep ethical considerations about its use.

The game presents the story through a large group of characters, without a clear, single "hero", and this is done deliberately, becoming an important theme later in the game. The dramatic elements are at times mature and dark, yet presented subtly (as if to evade Nintendo's sensibilities), dealing with themes of death and suicide unseen on a kid-friendly game platform before. For years I found the game too dark for my liking, the same way I disliked Zelda: Majora's Mask. The themes in Final Fantasy VI are perfectly integrated with the gameplay, visual art and music. Speaking of which, the game's graphic design and music are masterpieces from their authors, Amano and Uematsu.

But Final Fantasy VI was too weird. Being overly focused on its artistic statements, it doesn't please either Western or Japanese sensibilities enough. A cross between steampunk and Dungeons and Dragons, with multiple narratives and realism like Game of Thrones? That's not what kids want? And so with VII they started pandering to their audience, with anime-like, effeminate "Japanese Boy Band" characters, over-the-top drama presented with in-your-face imagery that makes Evangelion look subtle, lots of FMVs and cool characters, and, since the kids won't really like RPGs anyway, as many mini-games thrown in as possible.

Over time, they became a niche for their own captive market anyway. But mass-market appeal pretty much died out with Final Fantasy: The Spirits Within, meaning that people who never played any Final Fantasy are unlikely to even try the latest instalments. Still, the damage was done. A new generation of video game players didn't really care about gameplay, but more about the over-pretentious, low-quality movie experience that surrounds it. It's style over substance, and even if you focused on the art, it was superficial crap made for teenagers who didn't know any better. The latest Final Fantasy XV trailer looks like an expensive car ad. Magical realism can only go so far before it becomes ridiculous (Zoolander, the game?).

Essentially, Final Fantasy VII and Sony started a movement that, by the mid-2000s, nearly destroyed the video game industry, temporarily saved by the Wii and morally questionable free-to-play games. Only with the recent rise of retro and indie gaming are we starting to see the market grow again.

All this to say that I now hate Final Fantasy VII with a passion. Its predecessor is a timeless masterpiece, and I'm not saying this out of nostalgia or because I was influenced by marketing as a teenager. Final Fantasy VI is the best RPG I can recommend, and it is now out on iOS and Android, 50% off at $8 (Canadian dollars) until January 5, 2015.

Syndicated 2014-12-31 01:49:20 from Benad's Blog

