Older blog entries for benad (starting at number 115)

Electricity Savings: All Those Blinking Lights

As part of my "spring cleaning", and partly inspired by this "Earth Hour" thing, I did an inventory of all the connected electrical devices around my apartment.

I basically categorized them this way:

  1. Devices that are used all the time and must be connected: Lights, electrical heating, fridge, water heater and so on.
  2. Devices that are seldom used, but cannot be turned off completely or disconnected easily: Oven, washer, dryer, and so on.
  3. Devices that are on all the time, for some reason.
  4. Devices that are used enough to warrant leaving them in "low-power standby mode".
  5. Devices I should turn off completely or disconnect when not used.

While I can't do anything about the devices in categories 1 and 2, other than replacing them, my goal was to move as many devices as possible to either standby or fully off. For example, my "home server PC", a Mac mini, doesn't use much power, but do I really need to have it running all the time? So I programmed it to stay in standby and wake up only during weekday afternoons.

For devices already in standby mode, are they used enough? For example, my Panasonic Blu-ray player stayed warm because it remained in standby mode, and for what? To save about 10 seconds of boot time? Since my TV takes that long to "boot up" anyway, I just need to power on both at the same time, and I'll save all the electricity it used to waste sitting in standby.

I am generally less worried about laptops, tablets and other battery-operated mobile devices sitting in standby. They are already quite energy-efficient, whether running on batteries or not, especially when not actively used. Still, unplugging them from their chargers reduces the risk of damage if there's a power surge in the apartment's wiring.

Syndicated 2015-03-30 20:26:00 from Benad's Blog

Alpha: My First PC

The PC port of Final Fantasy VII that I recently completed was the first of many PC-only games I wanted to play but kept queued up, because playing PC games is inconvenient for me. I have a 2011 Mac mini that I can dual-boot into Windows, which is what I mostly used for FF VII, but rebooting was slow, the mini was noisy, and its graphics card was simply unable to properly play games made after 2010. I have a late-2013 MacBook Pro, but I keep using it for work, it's inconvenient for playing on a TV, and its graphics card could be better.

I insisted on using Macs, even for PC games, because "gaming PCs" are just too much trouble. Almost all small-form-factor PCs sacrifice graphics performance for size and quieter fans, including the mini. On the other end, your average "gaming PC" is an expensive, bulky tower with neon lights that requires manual assembly. Here's the thing: I can do all of that without a problem, from building a PC server to maintaining Windows Server. But that's what I do at work. It's as if there were no such thing as a "casual gaming PC for your TV". Well, at least until the Alienware Alpha, essentially a small-form-factor gaming PC.

The Alienware Alpha is presented as a kind of video game console. While it runs Windows 8.1, its default user account runs a modified version of XBMC that replaces the Windows desktop and lets you run Steam in "Big Picture" mode. The entire setup can be done (a bit clumsily) using the provided Xbox 360 controller (oddly, with a USB dongle for wireless use). I already had my wireless mouse and keyboard (and a USB mouse on a long USB extension for FPS games), because I want to play older PC games made for a mouse and keyboard, so I ultimately disabled that "full screen" account and set up a standard desktop Windows account.

And you have to accept that the Alienware Alpha is a PC that isn't that user-friendly and requires tweaking to play games. For example, the frame rate of "Metro: Last Light" was terrible because it was using outdated NVIDIA libraries; updating the library files made the game much faster. Geometry Wars 3 had terrible lag issues until I ran it in windowed mode and manually edited its settings file. Actually, the simple fact that the Alpha's NVIDIA card is "too new" to be recognized by older games is enough to force you to tweak their settings. I'm still curious about dual-booting into SteamOS, Valve's Linux distribution built around Steam that has a proper "console feel", though most games I want to play are either PC-only or not on Steam in the first place (they're from GOG, actually).

With all that said, the Alpha is a pretty good PC. I was able to play all my games at maximum settings at a minimum of 30 frames per second, and much faster for games made before 2012. It's well optimized for 1080p, short of the 4K supported by current-gen graphics cards, but perfect for TV use. The hard drive is slower than my MacBook Pro's SSD, but the graphics card is so much better on the Alpha that I don't mind the extra load time. You can still easily replace the Alpha's hard drive with an SSD, and you can upgrade pretty much everything else but the motherboard and graphics chip, with detailed service manuals. It has HDMI passthrough, digital optical audio output, many USB 2 and 3 ports (and even a hidden USB port underneath, perfect for my wireless keyboard dongle). Finally, its price is competitive, meaning absurdly cheap compared to similar specifications from Apple.

What I'm saying is that the Alienware Alpha is a good "entry-level" casual gaming PC for use on a TV, without the hassle of a typical PC tower. That, and I now have a PC. I still feel a bit weird about that.

Syndicated 2015-01-14 00:33:59 from Benad's Blog

The Last Retro Final Fantasy

Going back to my previous post, I'm a bit relieved that Final Fantasy VII didn't live up to its hype. And what hype. When released in 1997, it was backed by the unprecedented weight of Sony making it the flagship game of their first foray into video game consoles. The game was marketed everywhere as a kind of "movie as a game", placing emphasis on its FMVs (part of a $100 million publicity campaign spanning television and cinema over 3 months). For many, Final Fantasy VII was their first video game experience.

Let's step back a bit and look at its predecessor, Final Fantasy VI (released as "Final Fantasy III" in North America). Its setting sits exactly halfway between "Dungeons and Dragons" style fantasy and present-day, Shinto-style fantasy: a world where magic vanished for a thousand years and technology evolved into a "steampunk" style. Through its story, the game successfully explains the source of magic in this world, including deep ethical considerations about its use.

The game presents the story through a large group of characters, without a clear, single "hero", and this is done deliberately, becoming an important theme later in the game. The dramatic elements are at times mature and dark, yet presented subtly (as if to evade Nintendo's sensibilities), dealing with themes of death and suicide unseen on a kid-friendly game platform before. For years I found the game too dark for my liking, the same way I disliked Zelda: Majora's Mask. The themes in Final Fantasy VI are perfectly integrated with the gameplay, visual art and music. Speaking of which, the game's graphic design and music, by Amano and Uematsu respectively, are masterpieces.

But Final Fantasy VI was too weird. Being overly focused on its artistic statements, it fully pleased neither Western nor Japanese sensibilities. A cross between steampunk and Dungeons and Dragons, with multiple narratives and Game of Thrones-style realism? That's not what kids want. And so with VII they started pandering to their audience: anime-like, effeminate "Japanese Boy Band" characters, over-the-top drama presented with in-your-face imagery that makes Evangelion look subtle, lots of FMVs and cool characters, and, since the kids won't really like RPGs anyway, as many mini-games thrown in as possible.

Over time, the series became a niche with its own captive market anyway. But mass-market appeal pretty much died out with Final Fantasy: The Spirits Within, meaning that people who never played any Final Fantasy are unlikely to even try the latest instalments. Still, the damage was done. A new generation of video game players didn't really care about gameplay, but rather about the over-pretentious, low-quality movie experience that surrounds it. It's style over substance, and even if you focused on the art, it was superficial crap made for teenagers who didn't know any better. The latest Final Fantasy XV trailer looks like an expensive car ad. Magical realism can only go so far before it becomes ridiculous (Zoolander, the game?).

Essentially, Final Fantasy VII and Sony started a movement that, by the mid-2000s, nearly destroyed the video game industry, which was temporarily saved by the Wii and by morally questionable free-to-play games. Only with the recent rise of retro and indie gaming are we starting to see the market grow again.

All to say that I now hate Final Fantasy VII with a passion. Its predecessor is a timeless masterpiece, and I'm not saying this out of nostalgia or because I was influenced by marketing as a teenager. Final Fantasy VI is the best RPG I can recommend, and it's now out on iOS and Android, currently 50% off at $8 (Canadian dollars) until January 5, 2015.

Syndicated 2014-12-31 01:49:20 from Benad's Blog

Final Fantasy VII, a Late Review

As I mentioned in a previous post, I started playing the game Final Fantasy VII so that I could review it as objectively as possible. I reserved my judgement until the end, and 6 months and about 60 hours of game play later, I finished it.

To be as fair as possible, I won't compare it to any of its predecessors in the series or to its contemporaries in the genre, to see if it can stand on its own merits. I'll be lenient about anything that could have been caused by the technical limitations of the time (the original PlayStation), and even about the problems introduced in the PC port of the game.

Story

Since the game makes its story front and centre of the experience, I'll start here.

It is difficult to summarize the story succinctly. On the one hand, it is a story of "eco-terrorists" who attempt to prevent an "evil corporation" from siphoning the "life energy" of the planet into power plants, for evil reasons. This is the same energy that is, in concentrated jewel form, the source of magical powers in the game, called "materia". Of course, ecology + modern electrical technology + power plants + Japan = Godzilla. Also, for some evil reason, the "bad guy" attempts to crash a comet into the planet so that he can harness all of that "life energy" to destroy the world or something.

On the other hand, it's the story of the immature man-children who make up that team of "rebels". It mostly focuses on the placeholder hero, Cloud, who has a bad case of amnesia about everything other than his hatred for the "bad guy".

It's bad. The characters are wholly unlikable, or laughably generic ("Aerith the flower girl" is an actual main character name in the game). The "amnesia" thing, which lasts almost the entire game, feels like a desperate means to fill in plot holes in an otherwise uninteresting story. Character motivations are paper-thin and selfish, which is surprising since the whole "comet will soon destroy all life" setup would have implied that the motivation could have been as simple as "saving the world"; but no, it's all about revenge and selfish personal reasons. Twists and turns in the story are caused either by the characters' complete emotional immaturity, or by Deus Ex Machina that makes the plot of "Lost" look well planned in comparison.

In any medium other than anime, this story would be considered bad. But then, maybe this game is just some kind of anime with RPG elements slapped on top of it.

Design

This game tries its hardest to mesh modern-day technology with fantasy elements, and they simply don't mix well. It also doesn't make sense how such a world could have modern warfare weaponry (automatic rifles, tanks, helicopters, planes) while magical items that let anyone perform magic are so commonplace. It's like Blade Runner with Japanese mystical elements of spirits and magic. It sounds really cool, yet this game manages to make it not work at all.

Oh, and the game never attempts to explain the impractically oversized swords of the hero, especially in a world with guns and magical powers.

Music and Sound

The sound elements are pretty bad in general.

The music has a few tracks that are quite good and memorable, but the rest is mostly repetitive filler.

Gameplay

Let's start with the controls. In a battle, the camera's spinning makes it a challenge to properly target enemies. On the map, it feels like you're in a maze of invisible walls, with some that "slide along", and others that stop you in your tracks. Camera angles frequently change from one area to another, with little consistency in the controller directions, making simple movement difficult.

Before looking at the RPG elements proper, let's look at the other "games": the puzzles, the "quick time events", and the mini-games. The puzzles are either too obtuse or too easy. The "quick time events" are never good, in this game or anywhere else. The mini-games are completely different genres than RPG, mostly racing, inserted into this game for some reason. You're forced to play each mini-game at least once; each is horrible, would not stand on its own, and should be avoided as much as possible.

The RPG proper is fine, but not great. It is based around a "materia" system: items that enable their wearer to perform magic. Materia items are placed into sockets in each character's weapon and armour. Each materia has its own experience points; as it reaches higher levels, it allows its wearer to perform more powerful attacks, and at the highest level it "spawns" a copy of itself at level 1. Some materia can be paired with others to modify their effects.

Having only 3 characters in battle seems limited, and everything becomes unbalanced if you lose a single one. This makes the game either a highly defensive one, or one where you want to defeat the opponent as quickly and safely as possible. You get a roster of up to 9 characters, but most of the time one of the 3 slots is locked to the "hero" Cloud, and there is little incentive to level up all the characters. At any rate, the characters are mostly interchangeable, since their base stats and unique weapons are easily overwhelmed by the effects of the materia items. Given the emphasis on materia, setting up sets of materia on your characters takes a lot of time.

Speaking of time, the battles are too slow. Each attack takes several seconds to execute, and special attacks can take up to a minute. I avoided using the "summon materia" special attacks for that reason. Real-world time became an important resource while playing this game, making "grinding" to level up characters take far too long.

Generally, the game has a lot of depth, but not enough to justify the 60+ hours of game time. Most of that time was stretched out by unnecessarily long battle animations and unskippable cut scenes.

Conclusion

The game is fine, but not great. There are too many flaws and annoyances to merit playing it to the end. Its large budget is quite visible on screen, and it has a lot of depth, but what it lacked was fun. The story was bad, the characters cliché and unlikeable, the battles slow, the controls poor, the mini-games horrendous, all on top of an average RPG.

Syndicated 2014-12-29 16:40:04 from Benad's Blog

Modern MVC in the Browser

A few months ago, I started learning the AngularJS framework for web development. It is quite a clever framework that lets you "augment" your HTML with markup that maps placeholders to elements in your object model. It reminded me of JSP and the markup-based Swing frameworks I used a decade ago. Since then, though, I realized that this template-based approach has quite a few limitations that bother me. First, since it is based on templates, it doesn't handle well highly dynamic elements that generate markup based on context. Second, complex, many-to-many mappings between the view and the model are difficult to express. For example, it is difficult to express a value placeholder that is the sum of multiple elements in the model without resorting to a custom function, or to have a change in the view trigger the corresponding change in multiple locations in the model without, again, resorting to a custom function. Third, it is a framework that needs to keep track of the model/view mapping, so it is all-encompassing and heavy on configuration.

So, let's break down the problem and look individually at the view, model and controller as separate modules, and see if there could be a better, more "modern" approach.

The biggest problem with the "model" in JavaScript is that it lacks a safe way to hide properties behind functions, as is done natively in C# for example, or manually through "getter" and "setter" functions in JavaBeans. Because of that, it is difficult to take any normal object and add a layer on top of it that implements an observer pattern automatically. Instead, you can use something like the model implementation of Backbone.js, which fully hides the model behind a get/set interface that can automatically trigger change events to listener objects. It may not be elegant, but it works well.
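
To make that concrete, here is a minimal sketch of that pattern with Backbone.js; the "Account" model and its "balance" attribute are made up for illustration, and it assumes Backbone (and its Underscore dependency) is already loaded on the page.

    // Hypothetical model; only the get/set/change-event mechanism matters here.
    var Account = Backbone.Model.extend({
      defaults: { balance: 0 }
    });

    var account = new Account();

    // Listeners are notified automatically whenever "set" changes the attribute.
    account.on("change:balance", function (model, value) {
      console.log("Balance is now " + value);
    });

    account.set("balance", 100); // triggers the "change:balance" event
    account.get("balance");      // returns 100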

For the "view", HTML development doesn't support well the concept of a "custom control" or "custom widget" found in other GUIs. The view is mostly static, and JavaScript can change the DOM only to some extent, and at a high cost. In contrast, other GUI systems are based around rendering controls ("GUI controls", not to be confused with the controller in MVC) onto a canvas, and the compositing engine takes care of rendering on screen only the visible and changed elements. A major advantage of that approach is that each control can render itself any way it pleases without having to be aware of the lower-level rendering engine, be it pixels, vectors or HTML. You could simulate a full refresh of the DOM each time something (model or controller) changes the GUI in HTML, but the performance would be abysmal. Hence the React library from Facebook and Instagram, which supports render-based custom controls but uses a "virtual DOM", akin to "bitmasks" in traditional GUIs, so that only the effective DOM changes are applied.
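
As a rough sketch of what such a render-based control looks like, assuming the React API of that era (createClass and React.render, no JSX), with a made-up "Counter" component mounted into a hypothetical "app" element:

    // The component only declares what the DOM should look like for its state;
    // React's virtual DOM computes and applies the minimal set of DOM changes.
    var Counter = React.createClass({
      getInitialState: function () {
        return { count: 0 };
      },
      increment: function () {
        this.setState({ count: this.state.count + 1 });
      },
      render: function () {
        return React.createElement(
          "button",
          { onClick: this.increment },
          "Clicked " + this.state.count + " times"
        );
      }
    });

    React.render(React.createElement(Counter), document.getElementById("app"));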

Finally, for the "controller", the biggest issue is how to have both the model and the view update each other automatically, on top of the usual business logic in the controller, without creating cyclic loops. React's approach, named Flux, is a design pattern where you always let events flow from the model to the view, and never (directly) in the other direction. My gripe with it is that it is merely a design pattern that cannot be enforced. This reminds me of the dangers of using the WPF threading model as a means to avoid concurrency issues: forget to use the event dispatcher a single time, and you will create crashes that are highly difficult to debug. A newer approach, called functional reactive programming, is kind of like what dependency injection did for module integration, but for events. Essentially, it is a functional way of manipulating channels of events between producers and consumers, outside of the code of each producer and consumer. This may sound like quite an overhead compared to the inline and prescriptive approach of Flux, but as soon as you build a web page heavy on asynchronous callbacks and events coming from outside the page, having all of that "glue" in a single location is a great benefit. An implementation of FRP for JavaScript, Bacon.js, has a great example of how FRP greatly reduces the countless nested callbacks that are commonplace in web and Node development.
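
As a small, hypothetical sketch of that "glue in one place" idea with Bacon.js (the element ID, search URL and the promise-returning fetch call are all assumptions): keystrokes become a stream of queries that is debounced, mapped to asynchronous responses, and consumed, with no nested callbacks.

    var queries = Bacon.fromEvent(document.getElementById("search"), "input")
      .map(function (e) { return e.target.value.trim(); })
      .filter(function (q) { return q.length > 2; })
      .debounce(300); // wait until the user stops typing

    // flatMapLatest keeps only the response to the latest query; stale ones are dropped.
    var results = queries.flatMapLatest(function (q) {
      return Bacon.fromPromise(
        fetch("/search?q=" + encodeURIComponent(q)).then(function (r) { return r.json(); })
      );
    });

    results.onValue(function (items) {
      console.log("Got " + items.length + " results");
    });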

Combined, Backbone.js, React and Bacon.js offer a compelling alternative to control- and template-heavy browser MVC frameworks, and at a minimum prevent you from being "locked into" a complex and difficult-to-replace framework.

Syndicated 2014-11-26 01:44:49 from Benad's Blog

A Tale of Two Shells

Last year, I finished "properly" learning the bash ("Bash"?) shell, using a combination of the book "Learning the bash Shell" and a start-to-finish read of its gigantic "man" page. And that was enough to convince me that, regardless of its ubiquity, I don't like it much, be it as a scripting language or as a command-line shell.

Having already learned tcsh, because it was the default shell on Mac OS X and is still popular, I was ready to try out more modern shells, rather than ones stuck in the 80s.

First, on Windows, the DOS-style command prompt is quite archaic and annoying. Simple things like sleeping for a second require unintuitive workarounds. DOS batch scripting is painfully difficult, so I was eager to find something better. And since Windows 7, this strange "PowerShell" comes installed by default, heralded as a revolutionary step in command-line shells. Is it?

After reading a few tutorials, including this "free ebook" on powershell.com, things became clear. Windows PowerShell is essentially a shell built atop .NET that manipulates streams of objects rather than plain text across pipes, though it thankfully formats the objects as plain text on the console screen by default. It provides many "cmdlets" that can manipulate that stream in a more convenient way than grep and awk. For example, listing all files in a directory greater than a megabyte is trivial in PowerShell, while on a UNIX shell it requires an awkward (pun intended) combination of extracting character positions and arithmetic. The "PowerShell ISE", also provided with PowerShell, can even perform tab-completion of the fields of the objects coming out of the current pipe, making it easy to extract their attributes. Because so much of the Windows internals are accessible using COM and .NET, it is easy to perform system administration with it, for example installing system services and querying their status. The only major issue I've found with PowerShell is that executing PowerShell scripts is completely disabled by default until unlocked by an administrator. Also, as a minor gripe, the version of PowerShell that came out of the box with Windows 7 is quite outdated. Overall, if there ever was a "grand vision" for ".NET" in Windows, it is best represented by PowerShell.

Back on UNIX-like systems, it seems like users are quite happy with old shells, or at least with incremental evolutions of the old "Bourne shell" and Berkeley's "C Shell". Looking around, I found "fish", the "Friendly Interactive SHell". I liked its ironic tagline of "Finally, a command line shell for the 90s", since it was initially released in 2005. It is, indeed, friendly: it has a deliberately limited set of features, and its out-of-the-box defaults make interactive use enjoyable. It was built around a comprehensive design document that explicitly favours usability over compatibility with older or popular shells. The results are spectacular: everything has colour; TAB and arrow-key completion comes with a type-ahead preview in light grey; inline argument completion for most commands (including "man") interactively presents all the options and their meanings, automatically extracted from the "man" pages; the configuration can be edited through a built-in web interface; configuration changes apply to all running shells instantly; I could go on. Its scripting is quite limited, but that may be a good thing considering the Shellshock bug (not that "fish" has no security holes, but at least they're not "as designed").

Personally, I am ready to move to both PowerShell and "fish" for day-to-day use. While neither has much in common with older shells, they are far more usable. I highly recommend them to all command-line users.

Syndicated 2014-10-28 01:37:41 from Benad's Blog

Moniker, the Security Weak Point

By the time I heard about the "Shellshock" security hole on the morning of September 25, the small Debian Linux server I set up on Ramnode to host my web site had already patched the hole by itself. At any rate, my web site hosts only static pages, so it was never impacted.

While I am in control of the security of my web site, from the Linux kernel to the web server up to the web pages it serves, I'm still dependent on its hosting service (Ramnode) being secure. Another potential danger would be for someone to hijack the "benad.me" domain name and make it point to a version of the site filled with malware and viruses. Sadly, this almost happened.

Back in 2008 I registered my domain through the registrar Moniker, which used to be recommended partly for its security. They implemented additional paid features to "lock" the domain and prevent unauthorized transfers by someone who stole your user name and password. Since then, though, the company was bought by another company, and what is now called Moniker is Moniker in name only, both in terms of staffing and software.

I did notice a difference in tone in email communications from the new Moniker. They seemed to be highly focused on domain name auctions, and would automatically auction off expired domains. This felt like a conflict of interest, as Moniker would derive higher profit from auctioning off your domain than from helping you renew it. Of course, they would never do that to valuable customers who do "domain speculation" and own a large number of (unused) domains, but it still raises the suspicion that the company was sold based on the number of domains it had and on how much money could be extracted from large speculators, rather than on providing valuable customer service.

The "new" Moniker had a security hole in 2013, and to fix that Moniker forced users to change their password the next time they logged in. Note, though, that this happened with the old version of the web site. This summer, the parent company that bought Moniker (and its name) scrapped the old site's code and replaced it with a broken, buggy new interface. The new interface also brought worse security with it, and made the domain-locking feature completely ineffective.

In early October, Moniker sent an email to all its users saying that, for unspecified security reasons, all account passwords would be reset. The shock was that the email contained both the user names and the passwords of all of the user's accounts. I was surprised to find that my old Moniker account, identified by a standard-looking user name, had been placed under a parent, numerically identified user name I had never seen, along with another numerical sub-account created without my knowledge. It should be noted that I could never access the numerical sub-account, even when using the password provided in the email. Also, the email said that new passwords must fit security requirements, including at least one "special character", even though the passwords provided in the email didn't contain any special character, and the site would refuse most special characters when changing passwords.

OK, I'm not a security expert, but sending user names and passwords in an email, refusing special characters (which would indicate that they don't use bcrypt), and resetting the passwords of all users may indicate that they were hacked. Badly. Moniker cited the Shellshock bug, but as reports of stolen domains started to appear, a user came forth saying that the security hole predated Shellshock by a month.

So, I was convinced that Moniker has a pattern of not taking security seriously, at least until they experience a mass exodus of their customers. I started the domain transfer process the day after they announced the password reset, and I would recommend everybody else do the same. I transferred to Namecheap. Despite its name, in my case the price was the same. As a test, I created a new, empty account before the transfer, and I can already attest that they take security seriously, with emails for account activity (sent to secondary email addresses in rotation) and 2-factor authentication (using SMS for now). I completed the transfer yesterday, which would explain the little bit of downtime when resolving my domain name.

Syndicated 2014-10-15 01:21:21 from Benad's Blog

Eventual Consistency, Squared

Looking back at my article "The Syncing Problem", implementing a generic DVCS seems like a relatively straightforward solution. Actually, if the "data to sync" were simplified to plain text, an existing DVCS like git or Mercurial might be sufficient. But there is a fundamental problem I glossed over that has huge ramifications for the design of the DVCS, and that makes existing DVCS implementations dangerous to use.

In modern "Internet-connected" appliances, there are two storage solutions: on-device, and "in the cloud". It is the "cloud" storage that devices use to communicate with each other indirectly when performing data synchronization. There is, though, a huge behavioural difference between on-device and cloud storage: cloud storage is "eventually consistent". Beneath its API, the cloud storage itself may be distributed across machines, and modified data can take a little while to propagate to other machines. Essentially, if you upload a file from one device, it may take a little while for another device to see the change.

Sadly, whatever conflict resolution a cloud storage provider uses is unreliable, because its behaviour is either undocumented or inconsistent. Locking files on such storage may not be possible either. Worse, internal synchronization issues at the cloud provider may make its propagation speed so inconsistent as to make it unreliable as a means to communicate information between devices quickly. Almost all VCSs (distributed or not) assume a reliable storage area for the version repository. Hosted VCSs guarantee ACID properties. No DVCS was made to push revision information to unreliable storage and use that as the primary means to exchange information.

The easiest solution for this is to design a DVCS that supports "write-only" repositories. If the storage key (file name) contains the checksum of the data it holds, it may be possible to have multiple clients writing changes to the same shared storage area. Even if listing the available "data blocks" on the shared storage is inconsistent, all that listing can do is augment the "knowledge" of what exists in the repository compared to the repository stored locally. The atomicity of the storage blocks should be as close as possible to the atomicity of a transactional version-control delta, especially since the cloud storage may make information appear on other devices out of order relative to how it was written. That could make those "patch files" larger than in a well-optimized VCS, but on cloud storage we may not have any other option.
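
As a minimal sketch of that content-addressed naming, in Node-style JavaScript (the putObject and listKeys calls stand in for a hypothetical cloud storage client; only the idea of keying each block by its own checksum matters):

    var crypto = require("crypto");

    // The key is derived from the block's own content, so two devices writing the
    // same delta produce the same key, and no block is ever overwritten with
    // different data.
    function blockKey(data) {
      return crypto.createHash("sha256").update(data).digest("hex");
    }

    function pushBlock(storage, data) {
      var key = blockKey(data);
      // Idempotent: re-uploading an identical block is harmless.
      return storage.putObject(key, data).then(function () { return key; });
    }

    // Listing may be stale or out of order; it can only add to what is already
    // known locally, never contradict it, since blocks are immutable.
    function syncKnownBlocks(storage, localIndex) {
      return storage.listKeys().then(function (keys) {
        keys.forEach(function (k) { localIndex[k] = true; });
        return localIndex;
      });
    }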

Sure, a write-only repository may be a big issue if the files under version control are too large or if storage is limited, but then most VCSs tend to avoid deleting historical data anyway, and when they do support "cleaning up a repository", the solutions are clumsy and error-prone. In our case, if a device prematurely deletes older historical data in the shared storage, unaware that other devices were synced at older versions and may branch from there, then this would be tantamount to using the shared storage to host only the latest version and nothing else. All to say, deleting historical data in shared, eventually consistent storage is a difficult problem that may involve a lot of tuning, based on how long devices can stay unsynchronized before being considered "lost" compared to how quickly the cloud storage is expected to become consistent.

Syndicated 2014-10-04 15:05:19 from Benad's Blog

Client-Side JavaScript Modules

I dislike the JavaScript programming language. I despise Node.js for server-side programming, and people with far more experience than me in both agree, for example Ted Dziuba in 2011 and, more recently, Eric Jiang. And while I can easily avoid using Node as a server-side solution, the same cannot be said about avoiding JavaScript altogether.

I recently discovered Atom, a text editor based on a custom build of WebKit, essentially running on HTML, CSS and JavaScript. Though it is far from the fastest text editor out there, it feels like a spiritual successor to my favourite editor, jEdit, but based on modern web technologies rather than Java. The net effect is that Atom seems to be the fastest-growing text editor, and its deep integration with Git (it was made by GitHub) makes it a breeze to change code.

I noticed a few interesting things that were used to make JavaScript more tolerable in Atom. First, it supports CoffeeScript. Second, it uses Node-like modules.

CoffeeScript was a huge discovery for me. It is essentially a programming language that compiles into JavaScript and makes JavaScript development more bearable. Its syntax reminds me a bit of the difference between Java and Groovy. There's also the very interesting JavaScript-to-CoffeeScript converter called js2coffee. I used js2coffee on one of my JavaScript modules, and the result was far more readable and manageable.

The problem with CoffeeScript is that you need to integrate its compilation to JavaScript somewhere. It just so happens that its compiler is a command-line JavaScript tool made for Node. A JavaScript equivalent to Makefiles (actually, more like Maven) is called Grunt, and from it you can call the CoffeeScript compiler directly, along with UglifyJS to make the generated output smaller. All of these tools end up under node_modules/.bin when installed locally using npm, the Node package manager.
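
For example, here is a sketch of a Gruntfile wiring those two steps together, assuming the grunt-contrib-coffee and grunt-contrib-uglify plugins are installed locally with npm (the file paths are made up):

    module.exports = function (grunt) {
      grunt.initConfig({
        coffee: {
          compile: {
            files: { "build/app.js": ["src/*.coffee"] } // compile all CoffeeScript to one file
          }
        },
        uglify: {
          dist: {
            files: { "dist/app.min.js": ["build/app.js"] } // then minify the result
          }
        }
      });

      grunt.loadNpmTasks("grunt-contrib-coffee");
      grunt.loadNpmTasks("grunt-contrib-uglify");

      // Running "grunt" with no arguments performs both steps in order.
      grunt.registerTask("default", ["coffee", "uglify"]);
    };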

Also, by writing my module as a Node module (actually, CommonJS), I could use some dependency management and still deploy it to a web browser environment using Browserify. I could go even further and integrate it with Jasmine for unit tests, running them in a GUI-less, full-stack browser like PhantomJS, but that's going too far for now, and you're better off reading the Browserify article by Bastian Krol for more information.
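
A tiny, hypothetical example of what that looks like: a CommonJS module and its entry point, which Browserify bundles into a single browser-ready file.

    // greeter.js — a plain CommonJS module
    module.exports.greet = function (name) {
      return "Hello, " + name + "!";
    };

    // main.js — the entry point, using require() as in Node
    var greeter = require("./greeter");
    document.title = greeter.greet("world");

    // Bundling for the browser (shell command shown as a comment):
    //   browserify main.js -o bundle.js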

It remains that Browserify is kind of a hack that isn't ideal for running JavaScript modules in a browser, as it has to include browser equivalents of functionality that is unique to Node, and it isn't optimized for high-latency asynchronous loading. A better solution for browser-side JavaScript modules is RequireJS, using the AMD module format. While not all Node modules have an AMD equivalent, the major ones are easily accessible with bower. Interestingly, you can create a module that can be loaded as AMD, as a Node module, or natively in a browser, using the templates called UMD (as in "Universal Module Definition"). Also, RequireJS can load Node modules (that don't use Node-specific functionality) and any other JavaScript library made for browsers, so that you gain asynchronous loading.
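
The classic UMD wrapper looks like the sketch below: the same factory is registered with AMD (RequireJS), exported as a CommonJS module, or attached as a browser global (the "greeter" name is made up for illustration).

    (function (root, factory) {
      if (typeof define === "function" && define.amd) {
        define([], factory);                // AMD / RequireJS
      } else if (typeof module === "object" && module.exports) {
        module.exports = factory();         // CommonJS / Node / Browserify
      } else {
        root.greeter = factory();           // plain browser global
      }
    }(this, function () {
      return {
        greet: function (name) { return "Hello, " + name + "!"; }
      };
    }));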

It should be noted that bower, grunt and many other command-line JavaScript tools are made for Node and installed locally using npm. So, even if "Node as a JavaScript web server" fails (and it should), using Node as an environment for local JavaScript command-line tools works quite well and could have a great future.

After all is said and done, I now have something that is kind of like Maven, but for JavaScript: Grunt, RequireJS, bower and Jasmine together download, compile (CoffeeScript), inject and optimize JavaScript modules for deployment. Or you can use something like CodeKit if you prefer a nice GUI. Either way, JavaScript development, whether for client-side software like Atom, for command-line scripts or for the browser, is finally starting to feel reasonable.

Syndicated 2014-08-15 03:14:41 from Benad's Blog

The Case for Complexity

Like clockwork, there is a point in a programmer's career where one realizes that most programming tools suck: not only do they hinder the programmer's productivity, but worse, they may affect the quality of the product for end users. And so there are cries about the absurdity of it all; some posit that complex software development tools exist because some programmers value complexity above productivity, while others long for the days when programming was easier.

I find these reactions amusing. Kind of a mid-life crisis for programmers. Trying to rationalize their careers, most just end up resigning themselves to a professional life of mediocrity, using dumber tools and hoping to avoid the main reason why programming can be challenging. I went through that "programmer's existential crisis" in my third year as a programmer, just before deciding to make it a career, but I came out of it with a conclusion that seems seldom shared by my fellow programmers. To some extent, this is why I don't really consider myself a programmer but rather a software designer.

The fundamental issue isn't the fact that software is (seemingly) unnecessarily complex, but rather understanding the source of that complexity. Too many programmers assume that programming is based on applied mathematics. Well, it ought to be, but programming as practiced in the industry is quite far from its computer science roots. That deviation isn't due only to programming mistakes, but also to more irrational external constraints and requirements. Even existing bugs become part of the external constraints if they are in things you cannot fix but must "work around".

Those absurdities can come from two directions: top-down, based on human needs and mental models, or bottom-up, based on faulty mathematical or software design models. Productive and efficient software development tools, by themselves, add complexity on top of the programming language. Absurd business requirements, including cost-saving measures and dealing with buggy legacy systems, not only bring complexity, but the workarounds they require bring even more absurd code.

Now, you may argue that abstractions make things simpler, and to some extent they do. But abstractions only tend to mask complexity, and when things break or don't work as expected, that complexity resurfaces. From the point of view of a typical user, if it's broken, you ask somebody else to fix or replace it. But being a programmer means being that "somebody else" who takes responsibility for understanding, to some extent, that complexity.

You could argue that software should always put usability first. And yet usable software can be far more difficult to implement than software that is more "native" to its computing environment. All those manual pages, flexible command-line parameters, adaptive GUIs, pseudo-AIs, Clippy, and so on bring enormous challenges to the implementation of any software, because humans don't think like machines, and vice versa. As long as users are involved, software cannot be fully "intuitive" for both users and computers at the same time. Computers are not "computing machines", but rather sophisticated state machines made to run useful software for users. Gone are the days when room-sized computers just did "math stuff" for banks and user interaction was limited to numbers and programmers. The moment there were personal computers, people didn't write "math-based software", but rather text-based games with code of dubious quality.

The complexity of software will always increase, because it can. Higher-level programming languages become more and more removed from the hardware execution model. Users keep asking for more features that don't necessarily "fit well", so either you add more buttons to that toolbar, or you create a brand new piece of software with its own interfaces. Even if, for some reason, computers stopped getting so much faster over time, it wouldn't stop users from asking for "more", or programmers from asking for "productivity".

My realization was that there has to be a balance between ever-increasing complexity and our ability to understand it. Sure, fifty years ago it was reasonable for a single person to spend a few years fully understanding a complete computer system, but nowadays we have to specialize. Still, specialization is possible only because we can understand a higher-level conceptual design of the other components, rather than just an inconsistent mash-up of absurdity. Design is the solution. Yes, things in software will always get bigger, but we can make it more reasonable to attempt to understand it all if, from afar, it was designed soundly rather than just accidentally "became". With design, complexity becomes a bit smaller and more manageable, and even though only the programmers have to deal with most of that complexity, good design produces qualities that become visible all the way up to the end users. Good design also makes tighter "vertical integration" easier, since making sense of the whole system is easier.

Ultimately, making a better software product for the end users requires the programmer to take responsibility for the complexity not only of the software's code, but also of its environment. That means using sound design for any new code introduced, and accepting the potential absurdity of the rest. If you can't do that, then you'll never be more than a "code monkey".

Notes

  1. Many programmers tend to assume that their code is logically sound, and that their errors are mostly due to menial mistakes. In my experience, it's the other way around: The buggiest code is produced when code isn't logically sound, and this is what happens most of the time, especially in scripting languages that have weak or implicit typing.
  2. I use the term "complexity" more as the total number of module connections than as the average module coupling. I find "complexity as a sum" more intuitive from the point of view of somebody who has to be aware of the complete system: adding an abstraction layer still adds a new integration point between the old and new code, adding more things that could break. This is why I normally consider programming tools to be added complexity, even though their code completion and generation can make programmers more productive.

Syndicated 2014-07-31 02:14:53 from Benad's Blog

