Older blog entries for benad (starting at number 102)

iOS and Android Development Tools

As I previously mentioned, I've started doing some iOS and Android development. While I haven't built an App of reasonable size on either platform, I've read a few books, namely "Android Programming: The Big Nerd Ranch Guide" and "Learn iOS 7 App Development", and developed some "beginner's software" in both. I've also delved a little bit deeper into both platforms, specifically Core Animation and the Android NDK.

And, being opinionated about everything, I quickly formed an opinion of the development tools for iOS, namely Xcode, and for Android, namely Eclipse ADT and Android Studio.

Xcode for iOS

Xcode represents Apple's software aesthetics well: an over-simplified, limited GUI, but if you can live within those limits it is highly efficient. The GUI is hit-and-miss, and it lacks a ton of features you now expect from modern IDEs, refactoring for example. If you want a more advanced IDE for Objective-C, you may want to look at AppCode from JetBrains, the makers of IntelliJ IDEA.

Its build system is similar to Microsoft Visual Studio's, in the sense that it has projects with various settings, and it compiles your code with its own proprietary system. You can use xcodebuild to build an Xcode project from the command line. If you have to use Makefiles to build some cross-platform code, you may want to look at MacPorts, or use the xcrun command to find the compiler tools specific to an SDK (xcrun --sdk iphoneos ...). Of course, you can also add custom build steps that run scripts which in turn execute your external Makefiles.
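For example, a minimal Makefile fragment along these lines (hypothetical file names, and assuming the Xcode command-line tools are installed) can pick up the iOS SDK toolchain through xcrun; this is a sketch, not a complete build setup:

```make
# Hypothetical Makefile fragment: locate the iOS SDK toolchain via xcrun.
SDK     := iphoneos
CC      := $(shell xcrun --sdk $(SDK) --find clang)
SYSROOT := $(shell xcrun --sdk $(SDK) --show-sdk-path)
CFLAGS  := -isysroot $(SYSROOT) -arch arm64

# Compile a (hypothetical) cross-platform C file against the iOS SDK.
hello.o: hello.c
	$(CC) $(CFLAGS) -c $< -o $@
```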

Still, you won't want to leave Xcode much. Everything is well integrated in there, including the "Quick Help" in the side bar, simple packaging and signing of iOS app packages, running and debugging in the simulator or on a real device plugged in over USB, and so on. It is a complete, self-contained environment, with no surprises. Considering that most of it is closed-source and that you'd have no other option if it didn't work well, it is comforting that it is quite stable.

It should be noted that Xcode includes a superbly packaged set of documentation, whose quality is very high and comes very close to MSDN's.

ADT and Android Studio

If I had been asked a few years ago to design a mobile OS, I would surely have done something quite similar to Android. Based on Java (at least its API, hence Dalvik), Linux (but with a simplified user-space API, hence Bionic), a bunch of XML files, a hacked version of Eclipse, and so on. This sounds like praise, but it isn't, considering how wrong I was and how little I knew back then.

Let's start with the Eclipse-based Android Developer Tools. Whatever was on Google's web site was broken out of the box. Its update mechanism was missing update server sources, so updating it would break its Android 4.4 support. Usability is garbage, but that's expected from Eclipse. The emulator is shockingly slow, even on my bleeding-edge Intel Haswell i7 with 16 GB of RAM. Of course, you can use the virtualized Android VM for Intel chips, but the version available a few months ago would crash Mac OS X 10.9, and once you fix it with a patch from Intel, it's still half the speed of the iPhone simulator. Oh, and rotations in the Android 4.4 emulator don't work. Go figure.

Of course, I could switch from one beta-quality IDE, ADT, to another beta-quality, IDEA-based one, Android Studio. It sucks and it's buggy, but just slightly less so than ADT. It insists on converting ADT's integrated build system into a bunch of equally obscure Gradle plugins. In theory this is more flexible, but editing a Gradle build file is about as intuitive as Maven, meaning not at all.

Oh, and if after all those dire warnings that you should not attempt to develop native, non-Java code on Android you still do so with the Native Development Kit (NDK), you'll be slapped in the face with a horrible hacked build system built atop Makefiles (ndk-build). Of course, NDK with its Makefiles hack doesn't integrate well with ADT, and can't seem to be integrated at all with Android Studio. You can also forget about your slightly faster virtualized Intel Android VM: Back to the slow ARM emulator.
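To give an idea, ndk-build expects something like this minimal jni/Android.mk (a hypothetical hello-jni module, straight out of the usual NDK conventions), which is really just a GNU Make fragment:

```make
# Hypothetical jni/Android.mk for ndk-build: one small native shared library.
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE    := hello-jni
LOCAL_SRC_FILES := hello-jni.c
include $(BUILD_SHARED_LIBRARY)
```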

Basically, the Android development tools suck. Plan on a few days of work just to set them up.

Hey, I could have been a video game programmer for game consoles, so I should stop complaining about crappy development environments.

Syndicated 2014-05-27 00:27:14 from Benad's Blog

OpenSSL: My Heart is Bleeding

After a week, I think I can comfortably explain what happened with this "heartbleed" OpenSSL bug. Now, everybody makes mistakes. Especially programmers. Especially me. But at least my errors didn't create a major security hole in 20% of the Internet. Let's review some basic tenets of Software Engineering:

  1. All code (beyond a minimal size and complexity) has bugs. Less code and complexity (and functionality) means fewer bugs.
  2. Software should be made resilient against errors. If it can't be, it should at least halt (crash).
  3. Software should be designed for humans, both its code and its user interface.

Out of hubris, excess and arrogance, the OpenSSL developers managed to do the opposite of all of these tenets. To quote Theo de Raadt:

OpenSSL is not developed by a responsible team.

Why? Let's do some investigation.

First, Robin Seggelmann had this idea to add a completely unnecessary "heartbeat" feature to TLS. Looking at the protocol design alone, the simple fact that the size of the payload exists in two different places (TLS itself and the Heartbeat message) is pretty bad and practically begs for a security hole. That's tenet one.

Still, Seggelmann went ahead and submitted working code a year later, on December 31st at 11:59 PM, the best possible time for a code review. Of course, the code was filled with non-descriptive variable names that hid the error in plain sight during the ineffective code review, but given the poor quality of the rest of the OpenSSL code, the reviewers found this acceptable. That's tenet three.
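In essence, and simplifying a lot (this is a sketch of the class of bug, not the actual OpenSSL code), the heartbeat handler trusted the length field inside the request instead of checking it against the size of the record it actually received:

```cpp
#include <cstdint>
#include <cstring>

// Simplified sketch of the Heartbleed class of bug; not the real OpenSSL code.
// 'record' is attacker-controlled: [1-byte type][2-byte claimed length][payload...]
void handle_heartbeat(const uint8_t *record, size_t record_len, uint8_t *reply)
{
    uint16_t claimed_len = (record[1] << 8) | record[2]; // length taken from the message itself

    // BUG: nothing verifies that claimed_len actually fits inside record_len.
    // A tiny record claiming a 64 KB payload makes memcpy read far past the
    // end of 'record', echoing back whatever memory happens to follow it.
    std::memcpy(reply, record + 3, claimed_len);

    // FIX (roughly what the actual patch checks, with extra room for padding):
    // if (3 + claimed_len + 16 > record_len) return;  // silently drop the request
}
```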

At this point, you may ask: "Shouldn't most modern malloc implementations minimally protect software against buffer overflows and overreads?" If you did, you are correct. But then, years ago, OpenSSL implemented its own memory allocation scheme. If you try to revert it back to plain malloc, OpenSSL doesn't work anymore, because its code has bugs that depend on memory blocks being recycled in LIFO fashion. That's tenet two.
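For illustration only, here is a rough sketch of what such a LIFO buffer cache looks like (not OpenSSL's actual code, and simplified to a single buffer size): freed buffers go on top of a stack and get handed right back, old contents and all, which is exactly what hides this kind of overread from a hardened system malloc.

```cpp
#include <cstdlib>
#include <vector>

// Rough sketch of a LIFO buffer cache; not OpenSSL's actual implementation.
class BufferCache {
public:
    void *get(std::size_t size) {
        if (!freed_.empty()) {
            void *buf = freed_.back();  // hand back the most recently freed block,
            freed_.pop_back();          // previous contents still in it
            return buf;
        }
        return std::malloc(size);       // only hit the system allocator on a miss
    }
    // Freed buffers never go back to the system allocator, so guard pages,
    // heap canaries and other malloc hardening never get a chance to trigger.
    void release(void *buf) { freed_.push_back(buf); }
private:
    std::vector<void *> freed_;
};
```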

The result is bad, and very, very real. In Canada, nearly a thousand Social Insurance Numbers were leaked. And that doesn't even begin to count how many private keys and how much private information leaked this way over the past two years.

By the way, this kind of mess has been typical of my experience with cryptographic software. The usability problem with cryptography isn't just for end users, but also for the code itself. Using single-letter variables in a mathematical context where each variable is described at length may be acceptable, but meaningless variable letters without comments in code aren't. While I don't mind such "math code" much in data compression, in security software it makes the code less likely to be secure. Basically, everybody thinks that being smart is sufficient for writing good code, so of course they would be offended if a software engineer recommended writing the code from their specs instead of letting them do it themselves. No wonder the worst code always comes from university "careerists".

Personally, I'd stop using OpenSSL as soon as possible.

Syndicated 2014-04-15 01:29:07 from Benad's Blog

A Week with Android

Given my reputation as a "Mac Guy", one would expect me to be ignorant of other platforms. It's actually quite the opposite since, at work, I am a well-experienced Windows and Linux developer. I've only recently started programming on iOS, so I thought it would be a good idea to also learn to develop on the Android platform. While not strictly necessary for software development, I bought an Android phone, to get used to its environment and idioms. To get the best possible experience of the platform, I bought a Nexus 5, which I received by mail a little bit over a week ago. Already, I observed a few differences with iOS beyond the typical "iOS versus Android" feature comparisons.

I quickly noticed how much "Android versus iOS" parallels "PC versus Mac" in terms of the software platform. Android is highly customizable, though it's unclear whether that was done for the users or to appease the manufacturers and carriers. Had I not gotten a phone directly from Google, I suspect that buying a typical non-Google, bundled-with-a-plan Android phone would have given me something filled with crapware I couldn't uninstall. Like a PC. That, and given how much havoc rogue Apps can wreak, I immediately installed an anti-virus, a firewall and backup software. Again, like a PC.

Then, there's the screen. It's spectacular. At 445 dpi, it may be the first screen I've ever used where I can't distinguish its pixels at any distance. Cramming a full-sized 1080p screen in 5 inches is amazing. The colours are great too. Still, the screen is physically too large for casual one-hand use. It almost feels like a mini-tablet rather than a phone. Also, its large screen size has an impact on battery life.

Speaking of battery life, when the screen is off, battery life is spectacular. Sure, comparing any new device against the aging battery of a 30-month-old iPhone is unfair. But this Nexus 5 can be casually used for days before fully draining its battery. Oh, and that induction charging is really nice too, compared to fighting with those asymmetrical micro-USB cables… Of course, its battery life depends on well-behaved Apps. As an example, a runaway Songza App leaked a CPU wake lock, keeping the CPU fully powered for hours. Even in this extreme scenario, the battery life was still comparable to my iPhone's.

Speaking of music Apps, audio on the Nexus 5 sucks. Using the exact same high-quality headphones, I can tell that the headphone jack (or whatever else in the audio chain) is significantly worse than the iPhone's. There are many Apps that add equalizers and bass boosters, but even then it still doesn't sound as good. Also, the volume controls on my headsets don't work. Well, for now all of my music is in the Apple ecosystem, so I don't mind using my SIM-card-less iPhone as an iPod.

I'll be comparing iOS and Android development once I'm more experienced in both. For now, I'll start getting used to it, both from a user and developer perspective.

Syndicated 2014-04-03 00:25:41 from Benad's Blog

Going Root

So, I've just changed the host for my web site. Not because I was unhappy with the excellent Fused, but because, well, I've outgrown shared hosting. If my web site is supposed to project some kind of professional image, then at the very least I should own the entire software stack, and not be restricted to whatever versions of Apache or PHP the host decides to rent me.

A decade ago, having a web site meant choosing between shared hosting and "bare metal", but since then VPS (virtual private servers) have become a viable option that sits in between. For the past few years I was put off VPSes because of the cost, using Amazon EC2 as a reference. Considering that I have to maintain a complete server, paying $50 a month for the privilege of doing so was just too much for me.

Luckily, VPS pricing has become quite competitive, especially for a web site as simple as mine. So I tried out RamNode, and after a few weeks of use I trusted it enough to move my web site there, saving a hundred dollars per year in the process.

Migrating to a Debian 7 environment was a bit of a challenge. My background, due to my workplace experience, is mostly with RedHat environments, so it took a little while to adapt to Debian. I really like how lean and fast a well-configured Linux server can be, and having automated security updates on a locked-down stable Debian environment is comforting. I still don't host any dynamic content on my site, but if I ever do, it will be my code, not some 3rd-party PHP script that will become another attack vector.

In the end, nobody will notice, but I really like the idea that I now fully own "my web site", from the IP address to the domain name, down to the version of the web server and operating system, short of the hardware. In a world where most people's online presence exists entirely through social networks that treat your content at their whim, owning a web site is the closest thing to complete free speech one can have. That, and it's a cool thing to add to my résumé.

Syndicated 2014-03-30 20:06:56 from Benad's Blog

The Syncing Problem

Conflicts Everywhere

For the past few months I've been using a simple iOS app to track the episodes I've seen in the TV shows I follow. One of its advertised features is its ability to synchronize itself across all of your iOS devices, in my case my iPhone and iPad. Well, that feature barely works. Series that I removed from my watch list would show up again the next time I launched the app. Or I would mark an episode as watched and within a few seconds it would show up again.

I can't blame the developer: Apple promised that the iOS Core Data integration with iCloud would make synchronization seamless, and yet until iOS 7 it would regularly corrupt your data silently. Worse, its over-simplistic API hides the countless pitfalls of its use, and I suspect the developer of my simple TV show tracker fell into every single one of them.

Synchronizing stuff across devices isn't a particularly new problem. Personally, I've never experienced synchronizing a PDA with a dock and some proprietary software, but I've experienced a bit of the nightmare of synchronizing my old Motorola RAZR phone with iSync and various online address book services. It wasn't just that the various programs interpreted the address book structure differently, causing countless duplicated entries, but also that if I dared to modify my address book from multiple devices before a sync, it would surely create conflicts that I would have to fix everywhere.

Nowadays, synchronization issues still loom over every "cloud-based" service, be it for note-taking like Evernote or seamless file synchronization like Dropbox and countless others. In every case, synchronization conflicts are relegated to an "exceptional scenario" for which the usability is horrendous: if you're lucky, elements get renamed; if not, data gets deleted. In almost every case, you'll have to dig up previous versions of your data, while keeping older versions of your data is considered a "premium" feature.

This is frustrating. Data synchronization across electronic devices is not something new, and it is now commonplace. Even with a single client device, the server could use a highly-scalable "NoSQL" database that can make the storage service out-of-sync with itself. Imagine how surprised I was when I learned that the database developer is responsible for handling document conflicts in CouchDB, and that those conflicts will "naturally" happen if the database is replicated across multiple machines! The more distributed the data, be it on client-side devices or on the server side, the more synchronization conflicts will inevitably happen.

And yet, for several years, programmers have been daily using tools that solved this problem...

DVCS

Going back to the example of the TV show tracker, one of the problems with most synchronization systems is that they synchronize documents rather than the user actions themselves. If I marked a TV show episode as "watched", then, unless for some reason I marked it as "unwatched" on another device, the show tracker should never come up with the action of "unwatching" an episode.

Similarly, if on one device I mark an episode as "watched" and then revert it back to "unwatched", yet on another device that is unaware of those actions I completely remove the TV series, then the synchronization should prioritize the latter. Put another way, conflicts are often resolved by taking into account the state of the data on which each action was made.

In addition to storing the data model, each device should also track the sequence of actions done on that device. Synchronization would reconcile the user actions first, and then resolve those actions back into the data model. In effect, the devices should be tracking divergent user action histories.
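As a rough sketch of that idea (hypothetical types and action names, nothing from a real library), each device would append its actions to a local log, and a sync would merge the logs before replaying them onto the data model:

```cpp
#include <algorithm>
#include <cstdint>
#include <set>
#include <string>
#include <vector>

// Hypothetical action log for the TV show tracker example.
struct Action {
    uint64_t    timestamp;   // when the user did it (assumes roughly synchronized clocks)
    std::string device;      // which device recorded it
    std::string kind;        // "watch", "unwatch", "remove_series", ...
    std::string target;      // episode or series identifier
};

// Merge two divergent device histories into one ordered history,
// skipping actions both sides already know about.
std::vector<Action> merge_histories(std::vector<Action> a, const std::vector<Action> &b)
{
    for (const Action &act : b) {
        bool known = std::any_of(a.begin(), a.end(), [&](const Action &x) {
            return x.timestamp == act.timestamp && x.device == act.device &&
                   x.kind == act.kind && x.target == act.target;
        });
        if (!known)
            a.push_back(act);
    }
    std::sort(a.begin(), a.end(), [](const Action &x, const Action &y) {
        return x.timestamp < y.timestamp;
    });
    return a;
}

// Replay the merged history onto the data model: a later "unwatch" naturally
// wins over an earlier "watch" of the same episode, and vice versa.
std::set<std::string> replay_watched(const std::vector<Action> &history)
{
    std::set<std::string> watched;
    for (const Action &act : history) {
        if (act.kind == "watch")   watched.insert(act.target);
        if (act.kind == "unwatch") watched.erase(act.target);
        // "remove_series" handling omitted for brevity
    }
    return watched;
}
```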

Yeah, that's what we call "Distributed Version Control Systems". Programmers have been using DVCS like git, Mercurial and many others for over a decade now. If the solution has been part of our programming tools for so long, why are we still having problems with automated synchronization in our software?

It's a Plain Text World

Sadly, DVCS are made almost solely for plain-text programming source code. Not only that, but marking user actions as "commits" is carefully done manually, and so is resolving synchronization conflicts. Sure, they can support binary files, but those won't be merged the way text files are.

As for how DVCS treat a group of files, they went to the complete opposite of older version control systems. In the past, each file had its own set of versions. This is an issue with source code, since files usually have interdependencies that the version control system isn't aware of. Now, with DVCS, a "version" is the state of all the files in the "repository", so changing two different files on two different devices requires user intervention, as this could cause a conflict from the point of view of those implicit interdependencies. Sure, you can make each file its own repository with clever use of "sub-repositories", but the overhead of doing so in most DVCS makes this impractical.

For individual text files, DVCS (and most VCS in general) treat them as a sequence of text lines. A line of text is considered atomic, and the deltas computed between two text files will tend to favour large chunks of sequential line modifications over individual changes. While this may work fine for most programming languages, a few cases cause issues. As an example, in C, it is common practice to indent lines with space characters to make the code more readable, and to increase the indentation for each level of nesting. Changing the code a little bit could affect the nesting level of some code block and, by good practice, its indentation level. The result, from a DVCS perspective, is that the entire code block changed because every line had space characters prepended, even though the semantics of that block are identical.

Basically, DVCS are really made only for plain text documents. And yet, with some effort, some of their underlying design could be used to build a generic and automated synchronization system.

The Long and Snowy Road

Having a version control system for data structures made by software, rather than plain text files made by humans, isn't particularly new. Microsoft Word does it internally. Photoshop does too, through its "infinite undo" system.

Computing the difference between two file structures is surely something that was solved ages ago by computer scientists. In the end, if each file is just a combination of well-known structures like lists, sets, keyed sets and so on, then one could easily make a generic file difference engine. Plain text files, viewed as simple lists of lines, or even to a certain extent binary files, viewed as lists of bytes, could easily fit in such an engine.
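A toy version of such an engine (hypothetical, and limited to one kind of structure) just walks two keyed maps and classifies each key as added, removed or changed; lists of lines or of bytes could be handled the same way with a sequence diff:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical delta between two keyed structures
// (e.g. two versions of a document's fields).
struct Delta {
    std::vector<std::string> added, removed, changed;
};

Delta diff(const std::map<std::string, std::string> &before,
           const std::map<std::string, std::string> &after)
{
    Delta d;
    for (const auto &kv : before) {
        auto it = after.find(kv.first);
        if (it == after.end())
            d.removed.push_back(kv.first);   // key disappeared
        else if (it->second != kv.second)
            d.changed.push_back(kv.first);   // key kept, value differs
    }
    for (const auto &kv : after)
        if (before.find(kv.first) == before.end())
            d.added.push_back(kv.first);     // brand new key
    return d;
}

int main() {
    std::map<std::string, std::string> v1 = {{"title", "Lost"}, {"s01e01", "watched"}};
    std::map<std::string, std::string> v2 = {{"title", "Lost"}, {"s01e01", "unwatched"},
                                             {"s01e02", "watched"}};
    Delta d = diff(v1, v2);
    std::cout << "added: " << d.added.size() << ", changed: " << d.changed.size() << "\n";
}
```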

There would be a great effort required to change any existing "programmer's DVCS" into a generic synchronization engine. Everything in them is oriented towards user interaction, source code text files, deltas represented as the output of a diff command, a single history graph for all the files in the repository, and so on. But going the other way, a generic DVCS could be used as the basis for a programmer's DVCS without too much issue. It would even be quite amazing for it to version control not only the source code, but also the source annotated with compiler output, so that the conflict resolution system would be dealing with the semantics of the source code and not just plain text.

There are a few novel issues that are specific to device synchronization that would affect a generic DVCS, for example having a mechanism to safely prune old data history that is not needed anymore because all devices are up-to-date (to a known point). In fact, unlike existing DVCS, maybe not all devices need to support peer-to-peer synchronization.

But implementing a generic DVCS for automated synchronization does feel like reinventing the wheel. At minimum, things that are already implemented in today's DVCS would have to be redone in almost the same way, but with a few important changes to make them work on generic data structures.

On top of that, I have the nagging feeling somebody, somewhere already did this. Or maybe some asshole patented it and now nobody can implement it for fear of being sued into oblivion. It isn't that novel an idea (so it shouldn't be patentable anyway). Many have already considered backing up their entire computers with version control meant for source code, and while impractical, it kind of worked. The extra step of integrating that into a software's data model serialization isn't a big mental leap either. Apple could have done it with Core Data on iCloud, but for them, like everybody else, synchronization conflicts were an afterthought.

Right now, if I had to write some software that synchronizes data with other devices, there would be no generic DVCS solution I could use, so I would surely do my best to ignore the hard problem of data conflicts until enough users complain. Or maybe by then I'll get fed up and write my own generic DVCS, if nobody else does it first.

Note to self: I may want to post this as an article on my web site.

Syndicated 2014-02-17 00:00:49 from Benad's Blog

Back to Objective-C

In lieu of the usual annual resolutions, this year I made use of some of those vacation days to plan for what programming languages I will learn in the following months. In the past few years, I learned Python, Erlang and D, but sadly I couldn't come up with any excuse to use them in a programming project, be it at home or at work.

A factor that I glossed over when choosing a programming language is its usefulness in the "work market", be it at my current job, when seeking a new one, or even for self-employment. Another factor is what non-programmers expect of me. And, of course, since I'm a user of many Apple products, people wonder when they'll see my iPhone App.

I did learn a little bit of Objective-C about a decade ago, in the beginnings of Mac OS X. I did some very basic "Cocoa" desktop development, though back then the programming environment was more NeXTSTEP's than Apple's. Since then, I focused so much on UNIX, Linux and server-side development that it felt weird to be using a Mac yet to have forgotten how to write desktop software for it.

Unlike with Erlang and D, there are surely tons of books on iOS development, and a few for the Mac. I will have a shock using the latest Xcode, but I can deal with that. While Mac development is unrestricted, my greatest worry is about iOS's, with its restrictive API and closed App Store. Do I need to have a corporation to publish an App? Will it be restricted to the Canadian store? If I want to place ads in the App, how do I deal with the revenue?

Of course, I could just do stuff "in-house", be it with their method to deploy Apps for testing purposes, or by using a jailbroken device. That might be preferable in my case, since I'm not doing this to make money with a "crazy idea" but simply for practice. Still, there are some fees ($100 a year?).

But there is the greater issue that developing for iOS feels like developing for the Java virtual machine, with none of the benefits of the virtual machine. I always liked the idea of developing as close to the OS' kernel as possible, and not be at the mercy of some restrictive APIs. Over the years, I became closer to a Linux developer than Windows, Mac, Android or iOS. In an ideal world, I guess I'd be an open-source developer.

Still, I think it's a good time to jump back into Mac and iOS development. The Mac Cocoa APIs have surely matured in the past decade, without the cruft of the "Carbon" API from its Pascal days and the cross-compilation compatibility with PowerPC. As for iOS, the new visual style of iOS 7 is perfect for a drawing-inept developer like me, as all I need is good, clean spacing of tasteful fonts in hand-picked colours, and not the high-resolution pixellated Photoshop mess of the past few years.

As for what software I'll write, I don't know. Very likely it will be a wholly unoriginal idea that I will build for myself first, because the existing solutions either do it poorly or not to my taste. Then I'll release it for free and add it to a proper "software portfolio". Compared to the crazy challenges I've had in the past few years, iOS development shouldn't be that difficult…

Syndicated 2014-01-05 19:48:27 from Benad's Blog

Exceptional Exceptions

When I learned the C++ programming language in 1997, I assumed that exception handling was a completely optional language feature, since it was presented in the later chapters of C++ books, and up to those chapters none of the code examples used exceptions. I was wrong, of course, but this mentality is partly what caused my biggest programming mistake of 2013.

Exception handling is a critical language feature in programming that you simply can't ignore, for at least two reasons.

First, in languages compiled for a run-time virtual machine, exceptions are part of the execution environment. In "system programming" compiled languages like C, C++ or D, many operations, like dereferencing a NULL pointer or dividing an integer by 0, cause the operating system to send a signal to the process that makes it crash. In VMs, they instead cause an exception to be thrown at that point in the process. But then, in all VM-based languages I've seen, simply not attempting to catch any of these VM-level exceptions causes the default exception handler to crash the process, which is equivalent to what system languages do.

Second, exceptions are "implicit", meaning they can be thrown from anywhere without warning. Sure, in Java "normal" (checked) exceptions are part of function signatures, forcing them to be handled properly at compilation time, but run-time exceptions can still happen. In C++, C# and most other modern programming languages, function signatures don't carry exception types as part of their interfaces.

This would be fine if all exceptions were "bad" to the extent that they would warrant crashing the entire process. But that's not what happened over the years. In Object-Oriented design, object constructors are often used to perform both the object's creation and its initialization to a fully working state. While the language guarantees object creation under all circumstances (actually, before the constructor function is called), initialization can fail. Since the only functional interface of a constructor is to return an object instance, the only obvious way to report an initialization error is to throw an exception. (Of course, the object could remain in an "uninitialized" state and return an "uninitialized" error for every other function call, but most developers are too lazy to do that.)

For objects that wrap system resources, initialization can fail under normal execution. For example, an object that wraps a file could produce an error "file not found". Personally, I don't think the whole process should crash for something like that. But then, if opening a file is part of the object's constructor, then throwing an exception is the de-facto way of reporting that system error.
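To make the point concrete, here is a small C++ sketch (hypothetical classes, not from any particular library): the throwing constructor turns a perfectly ordinary "file not found" into something every caller must remember to catch, while a factory function makes the failure part of the interface.

```cpp
#include <cstdio>
#include <memory>
#include <stdexcept>
#include <string>

// Style 1: the constructor does the initialization, so its only way to report
// "file not found" is to throw; forget the try/catch and the process dies.
class ThrowingFile {
public:
    explicit ThrowingFile(const std::string &path)
        : handle_(std::fopen(path.c_str(), "rb")) {
        if (!handle_)
            throw std::runtime_error("file not found: " + path);
    }
    ~ThrowingFile() { if (handle_) std::fclose(handle_); }
private:
    std::FILE *handle_;
};

// Style 2: creation and initialization are separated; an ordinary failure
// is reported as an ordinary return value instead of an exception.
class File {
public:
    static std::unique_ptr<File> open(const std::string &path) {
        std::FILE *handle = std::fopen(path.c_str(), "rb");
        if (!handle)
            return nullptr;              // normal, recoverable error
        return std::unique_ptr<File>(new File(handle));
    }
    ~File() { std::fclose(handle_); }
private:
    explicit File(std::FILE *handle) : handle_(handle) {}
    std::FILE *handle_;
};
```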

The lack of "finally" blocks in C++ kind of forces "Resource Acquisition Is Initialization" design for objects that wrap system resources, which in turn forces reference counting, and so on. Typically, in most C++ software and libraries, exceptions are avoided as much as possible. Personally, I just avoided C++ altogether.

But for C# and .NET, the result is nearly catastrophic: Anything can and will throw exceptions, without any compilation warning, and not handling those exceptions will crash the entire process. And not just "exceptional exceptions" that ought to crash the process, but for simple things like attempting to open a file that doesn't exist.

Sure, you can add "try / catch" blocks everywhere, but things get complicated if your code doesn't own all the execution threads. Combine COM objects, callbacks from foreign DLLs and delegates with WPF, and there is no simple "catch-all" solution to exception handling. Sure, there are last-chance exception handlers like Application.DispatcherUnhandledException and AppDomain.UnhandledException, but when was the last time you saw any book or sample code using them? So, of course, out of ignorance I didn't set up those exception handlers in my code until it was too late.

My point is that exceptions should be avoided in programming interfaces as much as possible. Separating object creation and initialization would be a good step in that direction. Also, throwing exceptions should be reserved for "fatal" run-time errors that normally would be impossible to recover from and ought to crash the entire process.

As for the "other" part of my "biggest programming mistake", it's all about what various versions of the Windows kernel do when a process crashes. Coming from UNIX with its simple "fork / exec" and signal handling, what Windows does is just plain disgusting. But then, it's Windows, so even if Windows 7 is "less crap", I don't know what I expected from its programming environment when you look under its rug…

Syndicated 2013-12-29 19:30:01 from Benad's Blog

Cutting the Cord

Today, I've "cut the cord" of cable. Literally. With scissors. For some reason, they used some kind of security bracket thing on their cable modem, and they instructed me to actually cut the cable wire a foot below it. I also don't have a landline phone anymore. I now have only DSL Internet access and a cellphone. Apart from my ridiculously outdated voice plan (100 minutes, compensated by a 6 GB data plan), I'm now connected only through Internet connections.

It's not like I'm going to miss cable TV, as I previously mentioned here and here. And it's not like I'm saving that much money. It's about the principle that I shouldn't pay for something I haven't used in over half a year.

More than that, it's about not paying a cent more to this incestuous duopoly of Bell and Quebecor. Both have managed to siphon off all media in Québec so thoroughly and cheaply that they make Radio-Canada's propaganda almost acceptable compared to the duopoly's rampant corporatism. Having seen my popular culture turned into government or corporate ass-kissing, no wonder I don't bother with anything that's not "indie", be it made in Québec or anywhere else. The contrast between our artists' talent and the sheer cheapness of our local corporate media speaks volumes. At least in the past TV was kétaine (tacky) yet sincere; now they're just cheap liars.

All that is now a thing of the past. My TV is now simply a screen to the vast world of Internet video, and that's more than enough.

Syndicated 2013-12-03 03:28:38 from Benad's Blog

Your Passwords Suck

I'm sorry, but the password you remember sucks. It's not your fault, it's just the way our brains work. Our brains love finding and remembering patterns. If you can reuse the same password for multiple things and web sites, you will prefer it (and most of you end up doing exactly that). If you made up the password from scratch, then surely it contains some patterns that you (your pattern-oriented brain) like. I could bore you with how a password that's both long and full of special characters contains more entropy and is harder to "crack", but whatever: your password was chosen because it has lower entropy than a purely random one. Even if you were offered multiple computer-generated passwords, you would select the one that's "easier to remember", which is wrong. Oh, and if you do remember a truly, painfully random and long password, it took you so much effort to remember it that you will be more likely to either reuse it, or divulge it by accident or otherwise.
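To put rough numbers on it (back-of-the-envelope, assuming every character is picked uniformly at random): the entropy of a random password is its length times log2 of the alphabet size, so 8 random lowercase letters give about 8 × 4.7 ≈ 38 bits, while 12 characters drawn from the 94 printable ASCII symbols give about 12 × 6.6 ≈ 79 bits. A human-chosen password built from memorable patterns sits well below either figure.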

The best password is the one you can't remember.

So, of course, the compromise is to use a unique, randomly generated and nearly impossible to remember password for everything, except for one password to your "keychain". That "keychain" is an encrypted file containing all your passwords, protected with a supposedly strong "master password" that you must remember. That's a lot better, but it still just shifts the problem, by essentially putting all your passwords behind a single point of failure.

One dangerous thing about passwords is that, well, even a good password is not secure enough. Password databases are often stolen, and a simple faked login form from a phishing email can still be enough to impersonate you on a web site. So, two-factor authentication is the next step: your user name is validated against "something you know", your password, and "something you have", typically your cellphone. It could be a "number generator" App, or it could be as simple as getting a code by SMS.

This leads to the biggest problem with security: good security tends to be so much less usable that, in the end, users end up picking the less secure option. Users see security as an obstacle to "getting access to the thing", so if they can use "12345" as a password, they will, and if they can't, they'll be annoyed. So two-factor authentication won't appeal to many people apart from a few paranoids and security freaks.

That's why I find SQRL so interesting. The SQRL fan-made introduction page will surely explain it better than I can with text alone, but suffice it to say that logging in to a web site becomes as simple as unlocking the SQRL App and taking a photo of the web page.

A few interesting things should be noted about SQRL. First, instead of sending a password to the web site, the SQRL App authenticates using a huge 256-bit master key, which is equivalent to a password of over 60 characters. Also, this master key is safely randomly generated by a computer, for computers. This is very similar to public key cryptographic tools like PGP and SSH, which store private keys in a password-protected keychain.
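As a data-flow sketch only (my reading of the SQRL introduction, not its reference code; std::hash stands in for real HMAC-based key derivation and a plain integer stands in for a key pair, so nothing here is usable as actual cryptography), the interesting property is that each site gets its own key derived from the master key and the site's domain, so the master key itself never leaves the App:

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>

// Illustration only: std::hash is NOT a cryptographic function; it merely
// stands in for the real keyed derivation to show the shape of the data flow.
uint64_t derive_site_key(uint64_t master_key, const std::string &domain)
{
    return std::hash<std::string>{}(domain + std::to_string(master_key));
}

int main()
{
    uint64_t master_key = 0x243F6A8885A308D3ULL;  // stand-in for the 256-bit master key

    // Each site sees only its own derived key, never the master key,
    // and two sites cannot correlate their derived keys with each other.
    std::cout << std::hex
              << derive_site_key(master_key, "www.example.com") << "\n"
              << derive_site_key(master_key, "www.example.org") << "\n";
}
```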

Now, there are two potential issues. First, how do you "back up" that master key? Writing it down would be quite painful, like those old NES games that would "save" your game by having you write down a 40-character code. Your backup could be printed, but then, who prints from their cellphone? The safest bet would be to save it in picture form and back it up with your other (non-public) photos, encrypted of course with your password.

But that leads to the second issue: what if you forget your password, or if somebody steals your master key and its password? Well, it shows you need three-factor authentication: something you know, something you have, and something you are, meaning your identity. Typically, this boils down to those highly-insecure "security questions" or "password hints", which are worse than almost any bad password. But in the case of SQRL, it means making a backup of an "identity unlock" key that can be used to revoke an insecure (stolen or forgotten) master key. This time, to close the loop, the identity unlock key is encrypted with a randomly-generated password, a painful 20-or-so-character sequence that you must write down and never, ever place online or in the same place as the identity lock key. Annoying, but if you've ever dealt with FileVault on Mac OS X or entered activation codes in Windows, it's not that bad when done only once.

Hopefully, I just scared you into changing all your passwords right now. Don't worry, new things like SQRL will make it easier for you. Until then, I recommend LastPass (or KeePassX), using two-factor authentication everywhere it is offered, and filling those security questions with nonsensical garbage (that can be spoken over the phone) that you store in an encrypted file you regularly back up along with a copy of your encrypted password database. That sucks, but, hey, not as badly as your passwords.

Syndicated 2013-11-26 01:45:45 from Benad's Blog

Revision A

When, over three years ago, I bought a MacBook Pro with a 256 GB SSD and 8 GB of RAM, it was a bit difficult to justify the expense. The Windows 7 partition that I would run in VMware Fusion was only 40 GB and was mainly used for Microsoft Office. But over the years my development shifted from UNIX-style to Windows, and I ended up not only lacking disk space but also needing so much RAM that I spent weeks booted into the Windows partition. I expanded the partition to 60 GB (a painfully difficult process), but with Windows insisting on keeping every single security patch roll-backable, and after installing no fewer than three different versions of Visual Studio, I kept running out of disk space all the time. The few times I would run the Windows partition in VMware, allocating the much-needed 4 GB of RAM to Windows made the Mac side swap memory non-stop.

So, I'm now with a new Haswell-based MacBook Pro, with twice the disk and RAM. It's cheaper, and in the process I lost an optical drive, an expansion slot, a USB port, a hole for Kensington locks, a few pounds and 2" of screen size. But then I gained an HDMI port, USB 3, an SD card slot, a Thunderbolt port, the best computer screen I've ever seen in my life, and of course 8 times the overall performance.

For securing the laptop, I had to buy a lock bracket from Maclocks, and it works well. It adds a millimetre or two at the bottom back, but I don't notice it anymore, and the elevation helps cool the laptop.

Jumping on a "revision A" of Haswell-based computers from Apple could be quite a risk. Combined with the first releases of Mac OS X 10.9 and Windows 8.1, things were bound to be slightly broken for a little while. Indeed, a lot of people are having trouble installing Windows 8 or 8.1 on those new MacBook Pros, as you can see in various discussion threads. I'm not in a rush to move to Windows 8.1 and Office 2013, since I've moved my previous Windows 7 partition to a "pure" virtual machine. While I didn't have any memory issues running Windows 7 in a VM, there were issues attaching my USB 3 external hard drive to the VM, issues that were resolved with an old USB 2 extension cable.

I haven't had the occasion to play a game to test the discrete graphics card (the last one Apple sells in a laptop, actually), but this new laptop is a work machine. Already I can tell it can handle two or three VMs at the same time, and even if it were swapping memory, the SSD is so much faster that I wouldn't notice any difference. The difference in weight, the quality of the screen and the sheer speed of it will make a substantial difference in my work.

Unlike Dell laptops or Lenovo ThinkPads, MacBook Pros still feel like a luxury item. Maybe its physical construction or its price makes that a self-fulfilling prophecy. Yet I was still annoyed when I overheard rich people at the Apple Store buying a nearly $3000 laptop just for the "Apple experience". As somebody who invests a lot of money and setup time in his primary work tool, all I saw was the futility and the wasted potential of a computer bought on a whim. For me, having access to a computer as a kid was a turning point. For them, it's an expression of a "lifestyle". I compile and debug large amounts of code on multiple operating systems. They prefer using Facebook on a high-resolution display. And Apple revels in that luxury consumerism.

I shouldn't complain. I don't buy electronics to brag about them. If consumerism ultimately brought down the prices of the now-shrinking PC market, then I ended up paying less and being more productive. And for something that was cheaper than my previous laptop, damn, it's an amazing machine.

Syndicated 2013-11-11 00:59:22 from Benad's Blog
