Older blog entries for benad (starting at number 112)

Final Fantasy VII, a Late Review

As I mentioned in a previous post, I started playing the game Final Fantasy VII so that I could, as objectively as possible, review it. I reserved my judgement until I completed the game, and 6 months and roughly 60 hours of game play later, I finished it.

To be as fair as possible, I won't compare it to any of its predecessors in the series or its contemporaries in the genre, so that it can stand on its own merits. I'll also be lenient about anything that could have been caused by the technical limitations of the time (the PlayStation 1), and even about the problems introduced in the PC port of the game.

Story

Since the game makes its story front and centre of the experience, I'll start here.

It is difficult to summarize the story succinctly. On the one hand, it is a story of "eco-terrorists" that attempt to prevent an "evil corporation" from siphoning the "life energy" of the planet into power plants, for evil reasons. This is the same energy that is the source, in concentrated jewel form, of magical powers in the game, called "materia". Of course, ecology + modern electrical technology + power plants + Japan = Godzilla. Also, for some other evil reason, the "bad guy" attempts to crash a comet on the planet so that he can harness all of that "life energy" to destroy the world or something.

On the other hand, it's the story of the immature man-children that compose that team of "rebels". It is mostly focused on the placeholder hero, Cloud, with a bad case of amnesia for anything other than his hate for the "bad guy".

It's bad. The characters are wholly unlikeable, or laughably generic ("Aerith the flower girl" is an actual main character name in the game). The "amnesia" thing, which lasts almost the entire game, feels like a desperate means to fill in plot holes in an otherwise uninteresting story. Character motivations are paper thin and selfish, which is surprising since the whole "comet will soon destroy all life" premise would imply that the motivation could have been as simple as "saving the world"; but no, it's only about revenge and selfish personal reasons. Twists and turns in the story are either caused by the characters' complete emotional immaturity, or by Deus Ex Machina that makes the plot of "Lost" look well planned in comparison.

In any medium other than anime, this story would be considered bad. But then, maybe this game is just some kind of anime with some RPG elements slapped on top of it.

Design

This game tries its hardest to mesh modern-day technology with fantasy elements, and it simply doesn't mix well. It also doesn't make sense that such a world could have modern warfare weaponry (automatic rifles, tanks, helicopters, planes) while magical items that allow anyone to perform magic are so commonplace. It's like Blade Runner with Japanese mystical elements of spirits and magic. It sounds really cool, yet this game manages to make it not work at all.

Oh, and the game never attempts to explain the impractically oversized swords of the hero, especially in a world with guns and magical powers.

Music and Sound

The sound elements are pretty bad in general.

The music has a few tracks that are quite good and memorable, but the rest is mostly filler, repeated often.

Gameplay

Let's start with the controls. In a battle, the camera's spinning makes it a challenge to properly target enemies. On the map, it feels like you're in a maze of invisible walls, with some that "slide along", and others that stop you in your tracks. Camera angles frequently change from one area to another, with little consistency in the controller directions, making simple movement difficult.

Before looking at the RPG elements proper, let's look at the other "games": the puzzles, the "quick time events", and the mini-games. The puzzles are either too obtuse or too easy. The "quick time events" are never good, in this game or anywhere else. The mini-games are completely different genres from the RPG, mostly racing, inserted into this game for some reason. You're forced to play each mini-game at least once, and each is horrible and would not stand on its own, so they should be avoided as much as possible.

The RPG proper is fine, but not great. It is based around the "materia" system, those items that enable their wearer to perform magic. The materia items are placed into sockets in the weapon and armour of each character. Each materia has its own experience points; as it reaches higher levels it allows its wearer to perform more powerful attacks, and at the highest level it "spawns" a copy of itself at level 1. Some materia can be paired with others to modify each other's effects.

Having only 3 characters in battles seems limited, and makes everything unbalanced if you lose a single one. This makes the game either a highly defensive one, or one where you want to defeat the opponent as quickly and safely as possible. You get a roster of up to 9 characters, but most of the time one of the 3 slots is locked to the "hero" Cloud, and there is little incentive to level up all the characters. At any rate, the characters are mostly interchangeable, since their base stats and unique weapons are easily overwhelmed by the effects of the materia items. Given the emphasis on materia, setting up sets of materia on your characters takes a lot of time.

Speaking of time, the battles are too slow. Each attack takes several seconds to execute, and special attacks can take up to a minute. I avoided using the "summon materia" special attacks for that reason. Real-world time became an important resource while playing this game, and "grinding" to level up characters simply takes too long.

Generally, the game has a lot of depth, but not enough to justify the 60+ hours of game time. Most of that time was stretched out by unnecessarily long battle animations and unskippable cut scenes.

Conclusion

The game is fine, but not great. There are too many flaws and annoyances to merit playing it to the end. Its large budget is quite visible on screen and it has a lot of depth, but what was lacking was fun. The story was bad, the characters cliché and unlikeable, the battles slow, the controls poor, the mini-games horrendous, all atop an average RPG.

Syndicated 2014-12-29 16:40:04 from Benad's Blog

Modern MVC in the Browser

A few months ago, I started learning the AngularJS framework for web development. It is quite a clever framework that allows you to "augment" your HTML with markup that maps placeholders to elements in your object model. It reminded me of JSP and the markup-based Swing frameworks I used a decade ago. Since then, though, I realized that this template-based approach has quite a few limitations that bother me. First, as it is based on templates, it doesn't handle well highly dynamic elements that generate markup based on context. Second, complex, many-to-many mappings between the view and the model are difficult to express. For example, it is difficult to express a value placeholder that is the sum of multiple elements in the model without resorting to calling a custom function, or to have a change in the view trigger the corresponding change in multiple locations in the model without, again, resorting to a custom function. Third, it is a framework that needs to keep track of the model/view mapping, so it is all-encompassing and heavy in configuration.
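To illustrate that second limitation, here is a minimal AngularJS 1.x sketch (the module, controller and property names are hypothetical): a value derived from several model objects ends up as a controller function that the template has to call, for example as {{ cartTotal() }}.

    angular.module('shop', []).controller('CartCtrl', function ($scope) {
      $scope.items = [{ price: 10 }, { price: 25 }, { price: 7 }];
      // The template cannot declaratively bind to "the sum of all item
      // prices"; it has to call this function on every digest cycle.
      $scope.cartTotal = function () {
        return $scope.items.reduce(function (sum, item) {
          return sum + item.price;
        }, 0);
      };
    });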

So, let's break down the problem and look individually at the view, model and controller as separate modules, and see if there could be a better, more "modern" approach.

The biggest problem with the "model" in JavaScript is that it lacks a way to safely hide properties behind functions, as done natively in C# for example, or manually through "getter" and "setter" functions in JavaBeans. Because of that, it becomes difficult to take any normal object and add a layer on top of it that would implement an observer pattern automatically. Instead, you can use something like the model implementation of Backbone.js, which fully hides the model behind a get/set interface that can automatically trigger change events to listener objects. It may not be elegant, but it works well.
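Here is a minimal sketch of that Backbone.js approach (the attribute names are made up, and Backbone's Underscore dependency is assumed to be loaded): by funnelling everything through get() and set(), change events fire automatically for any observer.

    var Song = Backbone.Model.extend({
      defaults: { title: '', playCount: 0 }
    });

    var song = new Song({ title: 'Aria' });

    // Any view or controller can observe changes without the model's author
    // writing explicit getters and setters.
    song.on('change:playCount', function (model, newValue) {
      console.log(model.get('title') + ' played ' + newValue + ' times');
    });

    // Going through set() is what makes the change event fire.
    song.set('playCount', song.get('playCount') + 1);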

For the "view", HTML development doesn't support well the concept of a "custom control" or "custom widget" in other GUIs. The view is mostly static, and JavaScript can change the DOM to some extent, and at high cost. In contrast, other GUI systems are based around rendering controls ("GUI controls", not to be confused with the controller in MVC) on a canvas, and the compositing engine takes care of rendering on screen only the visible and changed elements. A major advantage of that approach is that each control can render in any way it pleases without having to be aware of the lower-level rendering engine, be it pixels, vectors or HTML. You could simulate a full refresh of the DOM each time something (model or controller) changes the GUI in HTML, but the performance would be abysmal. Hence, the React library from Facebook and Instagram, which supports render-based custom controls but using a "virtual DOM", akin to "bitmasks" in traditional GUIs, so that only effective DOM changes are applied.

Finally, for the "controller", the biggest issue is how to have both the model and the view update each other automatically, on top of the usual business logic in the controller, without creating cyclic loops. React's approach, named Flux, is a design pattern where you always let events flow from the model to the view, and never (directly) in the other direction. My gripe with that is that it is merely a design pattern that cannot be enforced. This reminds me of the dangers of using the WPF threading model as a means to avoid concurrency issues: Forget to use the event dispatcher a single time, and you will create highly difficult to debug crashes. A new approach, called functional reactive programming is kind of like what dependency injection did for module integration, but for events. Essentially it is a functional way of manipulating channels of events between producers and consumers outside of the code of each producer and consumer. This may sound quite an overhead compared to the inline and prescriptive approach of Flux, but as soon as you build a web page heavy on asynchronous callbacks and events coming from outside the page, having all of that "glue" in a single location is a great benefit. An implementation of FRP for JavaScript, Bacon.js, has a great example of how FRP greatly reduces those countless nested callbacks that are commonplace in web and Node development.

Combined, Backbone.js, React and Bacon.js offer a compelling alternative to configuration- and template-heavy browser MVC frameworks, and at a minimum prevent you from being locked into a complex and difficult-to-replace framework.

Syndicated 2014-11-26 01:44:49 from Benad's Blog

A Tale of Two Shells

Last year, I completed "properly" learning the bash ("Bash"?) shell, using a combination of the book "Learning the bash Shell" and reading from start to finish the gigantic "man" page. And that was enough to convince me that, regardless of its ubiquity, I don't like it much, be it as a scripting language or as a command-line shell.

Having already learned tcsh, because it was the default shell on Mac OS X and is still popular, I was ready to try out more modern shells, rather than ones stuck in the 80s.

First, on Windows, DOS is quite archaic and annoying. Simple things like sleeping for a second require unintuitive workarounds. DOS batch scripting is painfully difficult, so I was eager to find something better. And since Windows 7, this strange "PowerShell" is now installed by default, and was heralded as a revolutionary step in command-line shells. Is it?

After reading a few tutorials, including this "free ebook" on powershell.com, things became clear. Windows PowerShell is essentially a shell built atop .NET that manipulates streams of objects rather than plain text across pipes, though thankfully it formats the objects as plain text on the console screen by default. It provides many "cmdlets" that can manipulate that stream in a more convenient way than grep and awk. For example, listing all files in a directory greater than a megabyte is trivial in PowerShell, while in a UNIX shell it requires an awkward (pun intended) combination of extracting character positions and arithmetic. The "PowerShell ISE", also provided with PowerShell, can even perform tab-completion on the fields of the objects coming out of the current pipe, making it easy to extract their attributes. Because so much of the Windows internals are accessible using COM and .NET, it is easy to perform system administration with it, for example installing system services and querying their status. The only major issue I've found with PowerShell is that executing PowerShell scripts is completely disabled by default until unlocked by an administrator. Also, as a minor gripe, the version of PowerShell that came out of the box with Windows 7 is quite outdated. Overall, if there ever was a "grand vision" of ".NET" in Windows, it is best represented by PowerShell.
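For example, the "files greater than a megabyte" case is a short object pipeline; a minimal sketch, using standard cmdlets and the file objects' Length property:

    Get-ChildItem |
      Where-Object { $_.Length -gt 1MB } |
      Sort-Object Length -Descending |
      Select-Object Name, Length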

Back on UNIX-like systems, it seems like users are quite happy with old shells, or at least with incremental evolutions of the old "Bourne shell" and Berkeley's "C Shell". Looking around, I found "fish", the "Friendly Interactive SHell". I liked its ironic tagline of "Finally, a command line shell for the 90s", since it was initially released in 2005. It is, indeed, friendly, as it has a deliberately limited set of features, and its out-of-the-box defaults make interactive use enjoyable. It was built around a comprehensive design document that explicitly favours usability over compatibility with older or popular shells. The results are spectacular: everything has colour; TAB and arrow-key completion comes with a type-ahead preview in light grey; inline argument completion for most commands (including "man") interactively presents all the options and their meanings, automatically extracted from the "man" pages; the configuration can be edited through a built-in web service; configuration changes apply to all running shells instantly; I could go on. Its scripting is quite limited, but that may be a good thing considering the Shellshock bug (not that "fish" has no security holes, but at least they're not "as designed").

Personally, I am ready to move to both PowerShell and "fish" for day-to-day use. While neither has much in common with older shells, they are far more usable. I highly recommend them to all command-line users.

Syndicated 2014-10-28 01:37:41 from Benad's Blog

Moniker, the Security Weak Point

By the time I heard about the "Shellshock bug" security hole on the morning of September 25, the small Debian Linux server that I set up at Ramnode to host my web site had already patched it by itself. At any rate, my web site hosts only static pages, so it was never impacted.

While I am in control of the security of my web site starting from the Linux kernel, to the web server, up to the web pages it serves, I'm still dependent on its hosting service (Ramnode) to be secure. Another potential danger would be for someone to hijack the "benad.me" domain name, and make it point to a version of the site filled with malware and viruses. Sadly, this almost happened.

Back in 2008 I registered my domain through the registrar Moniker, which used to be recommended partly based on its security. They offered additional paid features to "lock" the domain and prevent unauthorized transfers by someone who had stolen your user name and password. Since then, though, the company was bought by another company, and what is now called Moniker is Moniker in name only, both in terms of staffing and software.

I did notice a difference in tone in email communications from the new Moniker. They seemed to be highly focused on domain name auctions, and would automatically auction off expired domains. This felt like a conflict of interest, as Moniker would derive a higher profit from auctioning off your domain than from helping you renew it. Of course, they would never do that to valuable customers that do "domain speculation" and own a large number of (unused) domains, but it still raises the suspicion that the company was sold based on the number of domains it held and how much money could be extracted from large speculators, rather than on providing valuable customer service.

The "new" Moniker had a security hole in 2013, and to fix that Moniker forced users to change their password the next time they logged in. Note though that this happened with the old version of that web site. This summer, the parent company that bought Moniker (and its name) scrapped the old site's code and replaced it with a new broken, buggy interface. The new interface also brought with it worse security, and made the domain locking feature completely ineffective.

By early October, Moniker sent an email to all its users saying that, for unspecified security reasons, all account passwords would be reset. The shock was that the email contained both the user names and passwords of all the user's accounts. My old Moniker account, identified by a standard-looking user name, had been placed under a parent, numerically-identified user name I'd never seen, alongside another numerical sub-account created without my knowledge. It should be noted that I could never access the numerical sub-account, even when using the password provided in the email. Also, the email said that new passwords must fit security requirements, including the use of at least one "special character", even though the passwords provided in the email didn't contain any special character, and when attempting to change passwords, the site would refuse most special characters.

OK, I'm not a security expert, but sending user names and passwords in an email, refusing special characters (which would indicate that they don't use bcrypt), and resetting the passwords of all users may indicate that they were hacked. Badly. Moniker cited the Shellshock bug, but as reports of stolen domains started to appear, a user came forth saying that the security hole predated Shellshock by a month.

So, I was convinced that Moniker had a pattern of not taking security seriously, at least not until they experience a mass exodus of their customers. I started the process of domain name transfer the day after they announced the password reset, and I would recommend everybody else do the same. I transferred to Namecheap. Despite its name, in my case the price was the same, though as a test I created a new empty account before the transfer, and I could already attest that they take security seriously, including emails for account activity (using secondary email addresses in rotation) and 2-factor authentication (using SMS for now). I completed the transfer yesterday, which would explain the bit of downtime when resolving my domain name.

Syndicated 2014-10-15 01:21:21 from Benad's Blog

Eventual Consistency, Squared

Looking back at my article "The Syncing Problem", implementing a generic DVCS seems like a relatively straightforward solution. Actually, if the "data to sync" were simplified to plain text, an existing DVCS like git or Mercurial might be sufficient. But there is a fundamental problem I glossed over that has huge ramifications on the design of the DVCS, and that makes existing DVCS implementations dangerous to use.

In modern "Internet-connected" appliances, there are two storage solutions: On-device, and "in the cloud". It is the "cloud" storage that is going to be used for the devices to communicate to each other indirectly when performing data synchronization. There is though a huge behavioural difference between on-device and cloud storage: The cloud storage is "eventually consistent". Beneath its API, the cloud storage itself may also be distributed across machines, and modified data can take a little while to propagate to other machines. Essentially, if you upload a file from one device, it may take a little while for another device to see the change.

Sadly, whatever conflict resolution a cloud storage provider uses is unreliable, because its behaviour is either undocumented or inconsistent. Locking files on such storage may not be possible either. Worse, internal synchronization issues at the cloud provider may make its propagation speed so inconsistent as to be unreliable as a means to quickly communicate information between devices. Almost all VCS (distributed or not) assume a reliable storage area for the version repository. Hosted VCS guarantee ACID. No DVCS was made to push revision information to unreliable storage, and use that as the primary means to exchange information.

The easiest solution for this is to design a DVCS that supports "write-only" repositories. If the storage key (file name) contains the checksum of the data it holds, it may be possible to have multiple clients write changes to the same shared repository area. Even if listing the available "data blocks" on the shared storage is inconsistent, all a listing can do is augment the "knowledge" of what exists in the repository compared to the repository stored on local storage. The atomicity of the storage blocks should be as close as possible to the atomicity of a transactional version control delta, especially since the cloud storage may make information appear on other devices out of the order in which it was written. That could make those "patch files" larger than in a well-optimized VCS, but on cloud storage we may not have any other option.
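As a minimal Node.js sketch of such a "write-only" block (the function names and key prefix are hypothetical, and uploadToCloud stands in for whatever eventually-consistent object store the devices share): the storage key is derived from the checksum of the patch itself, so concurrent writers can never overwrite each other, and a late or partial listing can only add to a device's local knowledge.

    var crypto = require('crypto');

    // The key is the SHA-256 of the patch content, so identical content
    // always maps to the identical key and re-uploads are harmless.
    function makeBlockKey(patchBytes) {
      var digest = crypto.createHash('sha256').update(patchBytes).digest('hex');
      return 'blocks/' + digest;
    }

    // uploadToCloud(key, bytes) is assumed to return a promise.
    function publishPatch(patchBytes, uploadToCloud) {
      var key = makeBlockKey(patchBytes);
      return uploadToCloud(key, patchBytes).then(function () {
        return key;
      });
    }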

Sure, a write-only repository may be a big issue if the files under version control are too large or if storage is limited, but then most VCS tend to avoid deleting historical data, and when they do support "cleaning up a repository", the solutions are clumsy and error-prone. In our case, if a device prematurely deletes older historical data in the shared storage, unaware that other devices were synced at older versions and may branch from there, then this would be tantamount to using the shared storage to host only the latest version and nothing else. All this to say, deleting historical data in a shared, eventually-consistent storage is a difficult problem that may involve a lot of tuning based on how long devices can stay unsynchronized before they are considered "lost", compared to how fast the cloud storage is expected to become consistent.

Syndicated 2014-10-04 15:05:19 from Benad's Blog

Client-Side JavaScript Modules

I dislike the JavaScript programming language. I despise Node.js for server-side programming, and people with far more experience than me with both also agree, for example Ted Dziuba in 2011 and more recently Eric Jiang. And while I can easily avoid using Node as a server-side solution, the same cannot be said about avoiding JavaScript altogether.

I recently discovered Atom, a text editor based on a custom build of Chromium, essentially running on HTML, CSS and JavaScript. Though it is far from the fastest text editor out there, it feels like a spiritual successor to my favourite editor, jEdit, but based on modern web technologies rather than Java. The net effect is that Atom seems like the fastest-growing text editor, and with its deep integration with Git (it was made by GitHub), it makes changing code a breeze.

I noticed a few interesting things that were used to make JavaScript more tolerable in Atom. First, it supports CoffeeScript. Second, it uses Node-like modules.

CoffeeScript was a huge discovery for me. It is essentially a programming language that compiles into JavaScript, and it makes JavaScript development more bearable. Its syntax reminds me a bit of the syntax difference between Java and Groovy. There's also the very interesting JavaScript to CoffeeScript converter called js2coffee. I used js2coffee on one of my JavaScript modules, and the result was far more readable and manageable.
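As a small illustration of the syntax difference (a hypothetical one-liner, not taken from my module), the CoffeeScript in the comments compiles to roughly the JavaScript below; the real compiler also wraps the output in an anonymous function unless you pass --bare.

    // CoffeeScript source:
    //   square = (x) -> x * x
    //   console.log square 4

    // Roughly the JavaScript the CoffeeScript compiler emits:
    var square;

    square = function(x) {
      return x * x;
    };

    console.log(square(4));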

The problem with CoffeeScript is that you need to integrate its compilation to JavaScript somewhere. It just so happens that its compiler is a command-line JavaScript tool made for Node. A JavaScript equivalent to Makefiles (actually, more like Maven) is called Grunt, and from it you can call the CoffeeScript compiler directly, along with UglifyJS to make the generated output smaller. All of these tools end up under node_modules/.bin when installed locally using npm, the Node Package Manager.
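A minimal Gruntfile.js along those lines could look like this (the paths are hypothetical; grunt-contrib-coffee and grunt-contrib-uglify are installed locally with npm):

    module.exports = function (grunt) {
      grunt.initConfig({
        coffee: {
          compile: {
            files: { 'build/app.js': ['src/*.coffee'] }
          }
        },
        uglify: {
          dist: {
            files: { 'dist/app.min.js': ['build/app.js'] }
          }
        }
      });

      grunt.loadNpmTasks('grunt-contrib-coffee');
      grunt.loadNpmTasks('grunt-contrib-uglify');

      // Running "grunt" now compiles the CoffeeScript and then minifies it.
      grunt.registerTask('default', ['coffee', 'uglify']);
    };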

Also, by writing my module as a Node module (actually, CommonJS), I can use some dependency management and still deploy it in a web browser environment using Browserify. I could even go further and integrate it with Jasmine for unit tests, and run them in a GUI-less full-stack browser like PhantomJS, but that's going too far for now, and you're better off reading the Browserify article by Bastian Krol for more information.
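A minimal sketch of that CommonJS style (the module and file names are hypothetical): the same module.exports/require conventions used by Node, bundled for the browser with "browserify main.js -o bundle.js".

    // greeter.js
    exports.greet = function (name) {
      return 'Hello, ' + name;
    };

    // main.js
    var greeter = require('./greeter');
    document.title = greeter.greet('world');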

It remains that Browserify is kind of a hack that isn't ideal for running JavaScript modules in a browser, as it has to include browser equivalents of functionality that is unique to Node, and it isn't optimized for high-latency asynchronous loading. A better solution for browser-side JavaScript modules is RequireJS, using the AMD module format. While not all Node modules have an AMD equivalent, the major ones are easily accessible with bower. Interestingly, you can create a module that can load as AMD, as a Node module, or natively in a browser, using the templates called UMD (as in "Universal Module Definition"). Also, RequireJS can support Node modules (that don't use Node-specific functionality) and any other JavaScript library made for browsers, so that you can gain asynchronous loading.
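For reference, here is a minimal UMD sketch adapted from the common "returnExports" template (the global name is hypothetical): the same file loads as an AMD module under RequireJS, as a CommonJS module under Node, or as a plain browser global.

    (function (root, factory) {
      if (typeof define === 'function' && define.amd) {
        define([], factory);            // AMD (RequireJS)
      } else if (typeof module === 'object' && module.exports) {
        module.exports = factory();     // CommonJS (Node)
      } else {
        root.myLib = factory();         // Browser global
      }
    }(this, function () {
      return { version: '0.1.0' };
    }));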

It should be noted that bower, grunt and many other command-line JavaScript tools are made for Node and installed locally using npm. So, even if "Node as a JavaScript web server" fails (and it should), using Node as an environment for local JavaScript command-line tools works quite well and could have a great future.

After all is said and done, I now have something that is kind of like Maven, but for JavaScript, using Grunt, RequireJS, bower and Jasmine to download, compile (CoffeeScript), inject and optimize JavaScript modules for deployment. Or you can use something like CodeKit if you prefer a nice GUI. Either way, JavaScript development, for client-side software like Atom, for command-line scripts or for the browser, is finally starting to feel reasonable.

Syndicated 2014-08-15 03:14:41 from Benad's Blog

The Case for Complexity

Like clockwork, there is a point in a programmer's career where one realizes that most programming tools suck, that not only do they hinder the programmer's productivity, but, worse, they may have an impact on the quality of the product for end users. And so there are cries about the absurdity of it all: some posit that complex software development tools must exist because some programmers value complexity above productivity, while others long for the days when programming was easier.

I find these reactions amusing. It is kind of a mid-life crisis for programmers. Trying to rationalize their careers, most just end up admitting defeat and settling into a professional life of mediocrity, using dumber tools and hoping to avoid the main reason why programming can be challenging. I went through that "programmer's existential crisis" in my third year as a programmer, just before deciding on making it a career, but I came out of it with what seems to be a conclusion seldom shared by my fellow programmers. To some extent this is why I don't really consider myself a programmer but rather a software designer.

The fundamental issue isn't the fact that software is (seemingly) unnecessarily complex, but rather understanding the source of that complexity. Too many programmers assume that programming is based on applied mathematics. Well, it ought to be, but programming as practiced in the industry is quite far from its computer science roots. That deviation isn't due only to programming mistakes, but also to more irrational external constraints and requirements. Even existing bugs become part of the external constraints if they are in things you cannot fix but must "work around".

Those absurdities can come from two directions: top-down, based on human needs and mental models, or bottom-up, based on faulty mathematical or software design models. Productive and efficient software development tools, by themselves, bring complexity above the programming language. Absurd business requirements, including cost-saving measures and dealing with buggy legacy systems, not only bring complexity, but the workarounds they require bring even more absurd code.

Now, you may argue that abstractions make things simpler, and to some extent, they do. But abstractions only tend to mask complexity, and when things break or don't work as expected, that complexity resurfaces. From the point of view of a typical user, if it's broken, you ask somebody else to fix it or replace it. But being a programmer is being that "somebody else" that takes responsibility for understanding, to some extent, that complexity.

You could argue that software should always be more usable first. And yet, usable software can be far more difficult to implement than software that is more "native" to its computing environment. All those manual pages, flexible command-line parameters, adaptive GUIs, pseudo-AIs, Clippy, and so on, bring enormous challenges to the implementation of any software, because humans don't think like machines, and vice versa. As long as users are involved, software cannot be fully "intuitive" for both users and computers at the same time. Computers are not "computing machines", but sophisticated state machines made to run useful software for users. Gone are the days when room-sized computers just did "math stuff" for banks and user interaction was limited to numbers and programmers. The moment there were personal computers, people didn't write "math-based software", but rather text-based games with code of dubious quality.

Complexity of software will always increase, because it can. Higher-level programming languages become more and more removed from the hardware execution model. Users keep asking for more features that don't necessarily "fit well", so either you add more buttons to that toolbar, or you create a brand new piece of software with its own interfaces. Even if for some reason computers stopped getting faster over time, it wouldn't stop users from asking for "more", and programmers from asking for "productivity".

My realization was that there has to be a balance between ever-increasing complexity and our ability to understand it. Sure, fifty years ago it would have been reasonable for a single person to spend a few years to fully understand a complete computer system, but nowadays we just have to become specialized. Still, specialization is possible because we can understand a higher-level conceptual design of the other components rather than just an inconsistent mash-up of absurdity. Design is the solution. Yes, things in software will always get bigger, but we can make it more reasonable to attempt to understand it all if, from afar, it was designed soundly rather than just accidentally "became". With design, complexity becomes a bit smaller and more manageable, and even though only the programmers will have to deal with most of that complexity, good design produces qualities that become visible all the way up to the end users. Good design makes tighter "vertical integration" easier, since making sense of the whole system is easier.

Ultimately, making a better software product for the end users requires the programmer to take responsibility for the complexity of not only the software's code, but also of its environment. That means using sound design for any new code introduced, and accepting the potential absurdity of the rest. If you can't do that, then you'll never be more than a "code monkey".

Notes

  1. Many programmers tend to assume that their code is logically sound, and that their errors are mostly due to menial mistakes. In my experience, it's the other way around: The buggiest code is produced when code isn't logically sound, and this is what happens most of the time, especially in scripting languages that have weak or implicit typing.
  2. I use the term "complexity" to mean the total number of module connections rather than the average module coupling. I find "complexity as a sum" more intuitive from the point of view of somebody that has to be aware of the complete system: adding an abstraction layer still adds a new integration point between the old and new code, adding more things that could break. This is why I normally consider programming tools to be added complexity, even though their code completion and generation can make programmers more productive.

Syndicated 2014-07-31 02:14:53 from Benad's Blog

Running Final Fantasy VII (Steam) on Mac

The recent re-release of Final Fantasy VII on PC and Steam at last made the game easily accessible on modern computers. No need for MIDI drivers or the TrueMotion codec, since they were replaced with Ogg Vorbis music and On2 VP8 video files. As an added bonus, the game can sync the save files "to the cloud", and has a way of "boosting" the character stats in those save files to make the game easier if necessary.

But then, the game is made only for Windows, and I only have a Mac. Sure, I can use Bootcamp or a virtual machine, but I'd rather play it on the Mac itself than install and maintain a Windows machine. And not everybody has spare Windows licenses anyway.

So I attempted to use Wine, the "not a Windows emulator". I don't know why each time I use Wine, I'm still pleasantly surprised at how well it works. Sure, the very early versions of Wine were quite unstable, but that was a decade ago. Nowadays, at least 90% of Windows software can run quite well in Wine, without too much effort or hacking, and it just keeps getting better each time I try it.

Now, the latest version of Wine can run Steam quite well. Inside that copy of Steam, installing and running Final Fantasy VII worked flawlessly.

Setting up Wine and Steam on Macs is quite easy:

  1. Install Xcode. Go in Xcode, Preferences, Downloads and install the command-line tools.
  2. Install MacPorts.
  3. In the Terminal, run sudo port install wine-devel winetricks.
  4. Run winecfg, and close the window.
  5. Run winetricks steam.
  6. Run env WINEPREFIX="$HOME/.local/share/wineprefixes/steam" wine C:\\windows\\command\\start.exe /Unix ~/.local/share/wineprefixes/steam/dosdevices/c:/users/Public/Start\ Menu/Programs/Steam/Steam.lnk

And that's it. When installing the game I checked the option to install a shortcut in the Start menu, and I've made a small tool called run_desktop.pl to launch the game directly from the Mac. For example, I would start it with perl run_desktop.pl ~/.local/share/applications/wine/Programs/Steam/FINAL\ FANTASY\ VII.desktop.

Syndicated 2014-07-12 17:50:24 from Benad's Blog

Resuming Final Fantasy with VII

I've never played Final Fantasy VII. Why? Simply put, I've never owned a PlayStation. Considering that countless players say it is the best Final Fantasy game, if not (a more dubious claim) the best video game ever made, I have to try it. Since both FF VII and FF VIII were released on Steam (though Windows-only), they are now finally easily (and legally) accessible to me without having to own a PlayStation console.

I'm skipping too much context. FF VII was a highly influential game. Not only was it the first 3D Final Fantasy, it was also one of the first (Japanese) RPGs played by a new generation of video game players. It was one of the highest-selling video games and was hugely popular outside of Japan. So how come I avoided it for all those years?

Back then, I played pretty much all Final Fantasy games available in North America up to Final Fantasy VI, and since then I played the previously unreleased ones (FF II, III and V). I even played many of the side-franchises of Final Fantasy (Legend, Mystic Quest, and so on), and pretty much all RPGs made by Square available on Nintendo consoles.

But then, in a shrewd move, Square became Sony-exclusive. In exchange, Sony would create a huge marketing campaign to promote FF VII across all of its product lines (movies, music, publications, electronics, DVDs...). At the same time, it heralded a generation of early 3D games mixed with pre-rendered backdrops (akin to classical animation) and FMV cut scenes, making video games more accessible, movie-like and appealing to non-gamers. Basically, it became the forebear of everything I hated about high-budget video games that were more movies "for the masses" than games.

Over the years, I couldn't escape its influence, even in other media. I had the misfortune of seeing the overly pretentious Final Fantasy movies (Spirits Within and that FF VII sequel/thing), and the effect it had on Square's ports of their Nintendo-based RPGs to the PlayStation, which replaced important cut scenes with ill-conceived FMVs. Not playing anything else from the Final Fantasy series was my own kind of "rebellion".

What made it infuriating to me was that so many people considered FF VII the "best video game ever" without having played any previous game in the series, summarily dismissing them because they don't contain FMVs or because they aren't in 3D. Those that would defend FF VI as being objectively the better game would also be dismissed, using the dubious argument that people simply prefer the first Final Fantasy they played. I'm fortunate enough that the first Final Fantasy game I played was the original game in the series (the one released on the NES in North America), and I clearly don't consider it the best, far from it. At the same time, I do hold Final Fantasy VI as the best Final Fantasy game up to that point in the series, the best RPG that I've ever played, and a masterpiece. I'd be pleasantly surprised if FF VII were better than VI and lived up to its hype and marketing.

Now, 15 years later, I'm emotionally distanced enough from that teen rebellion to attempt to play this game more objectively and on its own merit, apart from the Sony hype (and my hate of Sony) and the opinions of other uneducated players. I've already played a few hours of the game and have lots to say, but I'll reserve judgment until I complete it. For now, I'll follow up on how I'm able to play the game on my Mac in the first place.

Syndicated 2014-07-12 17:23:36 from Benad's Blog

Google Play Music in Canada

One of the things I noticed when I got the Nexus 5 phone was that it pales as an audio player compared to the iPhone. What I didn't mention is how spartan the built-in Google Play Music app was. All I could do to play music was transfer audio files over a USB cable. While it is nice that it auto-scans the storage for music files, I would have preferred a cleaner approach where I could manually add specific audio files to my music collection.

Since I recycled my iPhone 4S as my main music and podcast player, I seldom looked at the Play Music app, until early June, when I casually opened it after an app update and noticed it had quite a few more options. Basically, "Google Play Music All Access" was now available in Canada.

The first thing that jumped out at me is that you can upload your music files to your Play Music library, for free. It can even automatically import your entire iTunes library. I tried it with mine, which has about 3800 songs and two dozen playlists, and it worked quite well. There was a minor issue with the file importer ignoring all music files that had accents in their file names, so I had to import those files manually, but still, compared to the non-free iTunes Match service, the experience was amazingly smooth. To compare, Play Music never, ever had any issue importing and playing back my non-matched audio files (meaning, songs not part of the Play Music store), while I constantly run into issues and general slowness with iTunes Match. Yes, the $30 / year service from iTunes is worse than the free Google service, and the Google service doesn't require you to install iTunes in the first place, for all you iTunes haters out there.

So, for the "All Access" thing, it is quite similar to Rdio. I talked about music streaming services in the past, but to give you a summary of the what happened since in Canada, not much. There's still only Rdio, Deezer, and a few small ones with even smaller libraries. So, doing the legal thing with copyright in Canada still sucks.

After using Rdio for a few years, there are still a few things that annoy me, especially given that it's quite an expensive service ($10 per month):

  • By default, Rdio is still highly "social", with everything shared and public. And by that I mean your playback history, playlists, listening habits, what you're listening to right now, and more. They compromised by introducing a "private" mode where things are shared only with your Rdio friends, but that still sucks.
  • The music collection oddly seems to be shrinking over time. Sure, I noticed that a lot of albums have exact duplicates, with the only distinction being the label that published the album. This creates the weird effect that if you add one of those albums to your collection, chances are it will get delisted and you'll have to hunt down the new "owner". Still, half the time stuff gets delisted and never comes back, almost as often as those "1-year contracts" you see on Netflix Canada.
  • Playlist management sucks. Sure, they just added the functionality to manage playlists on iOS, but still, on all platforms you cannot manipulate more than one song at a time. For example, breaking the album "Mozart: The Complete Operas" into manageable playlists took hours, moving songs one by one. Shift-click?
  • Search, and pretty much the entire GUI on desktop or iOS, is somehow slow. Looking for "that song" doesn't work well and can be frustrating.
  • If you mark songs to be synchronized to mobile devices, they are synchronized to all your devices. That doesn't make sense if you only carry one device with you, and on top of that there's the massive bandwidth usage this can generate if you have four devices downloading songs at the same time.

So, I had to try "All Access" from Google, at least to compare. Also, because the first month is free and if you sign up before June 30 (meaning, today or tomorrow), it is $8 instead of $10 per month, for life.

So, how does it stack up compared to Rdio? Let's start with the cons first, to be fair.

Cons

  • The user reviews from the Play Music store don't show up in the player view. I miss the inline user reviews from Rdio.
  • No global playback history. Sure, history of radios show up in the current queue, but once the queue is cleared, it is gone. Also, playback queues aren't shared across devices, unless you save them as playlists each time.
  • Mobile support is limited to iOS and Android.
  • The "Thumbs up" automatic playlist isn't sorted the same way across devices for some reason.

Pros

  • All your music is there. Personal, bought on Play Music, or "All Access".
  • Search. It's Google. Search is amazing and instant.
  • The GUI is amazingly fast on all platforms.
  • The album library is cleaner (no label duplicates).
  • The library is somewhat bigger than Rdio. For every one "vanishing" album on Rdio, there are three or four that exist only on All Access.
  • Playlist and metadata editing is great. Shift-click to select multiple songs works, and it is quite powerful.
  • Song caching for offline playback is done per-device, not as a global setting as on Rdio.

Overall, Google Play Music All Access, apart from its stupid name ("Rdio" has four letters), wins hands down. I think I'll transition from Rdio to All Access in the next few months. Pricing is identical, and feature-wise, All Access is much cleaner, faster and bigger than Rdio. And the (amazingly) completely free "My Library" music upload service is just icing on the cake.

As for iTunes, Apple should get their act together. iTunes Match is plain buggy and slow, searching (locally or in the store) is even slower, their podcasting support is broken beyond repair (avoid it completely), and everybody on Windows hates iTunes, for good reason. Sure, there are iTunes Radio, Beats, and so on, but they're all US-only, so why should I care if it's going to take until 2016 before we see anything of them in Canada? The same can be said of Spotify, Pandora, Amazon Music, and so on: until you show up internationally outside of that Silicon Valley bubble, put up or shut up. Google Play Music All Access is there internationally now.

Syndicated 2014-06-29 00:05:25 from Benad's Blog
