Older blog entries for wlach (starting at number 143)

NIXI Update

I’ve been working on a new, mobile-friendly version of Nixi on and off for the past year and a bit. I’m not sure when it’s ever going to be finished, so I thought I might as well post the work-in-progress, which has these noteworthy improvements:

  • Even faster than before (using the Bootstrap library behind the scenes; no longer using a slow canvas library to update the map)
  • Sexier graphics (thanks to the aforementioned Bootstrap library)
  • Now uses client-side URLs to keep track of state as you navigate through the site. This allows you to bookmark a favorite spot (e.g. your home) and then go back to it later. For example, this link will give you a list of BIXI docks near Station C, the coworking space I belong to.

If you use BIXI at all, check it out and let me know what you think!

[Image: nixi screenshot]

Syndicated 2013-08-25 21:28:53 from William Lachance's Log

Simple command-line ntp client for Android and FirefoxOS

Today I did a quick port of Larry Doolittle’s ntpclient program to Android and FirefoxOS. Basically this lets you easily synchronize your device’s time to that of a central server. Yes, there are lots and lots of Android “applications” which let you do this, but I wanted to be able to do it from the command line, because that’s how I roll. If you’re interested, source and instructions are here:

https://github.com/wlach/ntpclient-android

For those curious, no, I didn’t just do this for fun. :) For next quarter, we want to write some Eideticker-based responsiveness tests for FirefoxOS and Android. For example, how long does it take from the time you tap on an icon in the homescreen on FirefoxOS to when the application is fully loaded? Or on Android, how long does it take to see a full list of sites in the awesomebar from the time you tap on the URL field and enter your search term?

Because an Eideticker test run involves two different machines (a host machine which controls the device and captures video of it in action, as well as the device itself), we need to use timestamps to really understand when and how events are being sent to the device. To do that reliably, we really need some easy way of synchronizing time between two machines (or at least accounting for their differences, which amounts to about the same thing). NTP struck me as being the easiest, most standard way of doing this.
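
To give a flavour of what’s involved: an (S)NTP exchange is just one small UDP packet each way, plus a little arithmetic over four timestamps. Here’s a minimal Python sketch of the offset calculation (my own illustration of the general technique, not ntpclient’s actual code; it ignores the fractional parts of the timestamps for brevity):

    import socket
    import struct
    import time

    NTP_DELTA = 2208988800  # seconds between the 1900 and 1970 epochs

    def ntp_offset(server="pool.ntp.org", port=123):
        """Estimate the offset between the local clock and the server's."""
        packet = b'\x1b' + 47 * b'\0'  # LI=0, version 3, mode 3 (client)
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(5)
        t1 = time.time()                  # request leaves the client
        s.sendto(packet, (server, port))
        data, _ = s.recvfrom(48)
        t4 = time.time()                  # reply arrives at the client
        s.close()
        # server receive (t2) and transmit (t3) timestamps, seconds only
        t2 = struct.unpack('!I', data[32:36])[0] - NTP_DELTA
        t3 = struct.unpack('!I', data[40:44])[0] - NTP_DELTA
        # the standard NTP clock offset estimate
        return ((t2 - t1) + (t3 - t4)) / 2.0

    print("offset: %.3f seconds" % ntp_offset())

In Eideticker’s case it would be enough to know the offset and correct timestamps after the fact; actually setting the device clock (as ntpclient does) amounts to about the same thing.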

Syndicated 2013-07-08 23:22:49 from William Lachance's Log

Proof of concept Eideticker dashboard for FirefoxOS

[ For more information on the Eideticker software I'm referring to, see this entry ]

I just put up a proof of concept Eideticker dashboard for FirefoxOS here. Right now it has two days’ worth of data, manually sampled from an Unagi device running b2g18. There are currently two tests: one that measures the “speed” of the contacts application scrolling, another that measures the amount of time it takes for the contacts application to be fully loaded.

For those not already familiar with it, Eideticker is a benchmarking suite which captures live video data coming from a device and analyzes it to determine performance. This lets us get data which is more representative of actual user experience (as opposed to an often artificial benchmark). For example, Eideticker measures contacts startup as taking anywhere between 3.5 and 4.5 seconds, versus the 0.5 to 1 seconds that the existing datazilla benchmarks show. What accounts for the difference? If you step through an Eideticker-captured video, you can see that even though something appears very quickly, not all the contacts are displayed until the 3.5 second mark. There is a gap between an app being reported as “loaded” and it being fully available for use, which we had not been measuring until now.
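
The analysis behind the startup number is conceptually simple: step through the captured frames and find the point after which the image stops changing. Here’s a rough Python/numpy sketch of the idea (an illustration, not Eideticker’s actual code; the frame format and threshold values are assumptions):

    import numpy as np

    def time_to_stable(frames, fps, threshold=0.001, quiet_frames=30):
        """Return the capture time (in seconds) at which consecutive
        frames stop differing, i.e. the app looks fully loaded.
        `frames` is a list of numpy RGB arrays, one per video frame."""
        quiet = 0
        for i in range(1, len(frames)):
            # fraction of pixels that changed since the previous frame
            changed = np.mean(frames[i] != frames[i - 1])
            quiet = quiet + 1 if changed < threshold else 0
            if quiet >= quiet_frames:
                return float(i - quiet_frames + 1) / fps
        return None  # the capture never stabilized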

At this point, I am most interested in hearing from FirefoxOS developers about new tests that would be interesting and useful for tracking the performance of the system. I’d obviously prefer to focus on things which have been difficult to measure accurately through other means. My setup is rather fiddly right now, but hopefully we can soon start generating useful numbers on an ongoing basis, as we already do for Firefox for Android.

Syndicated 2013-05-06 22:23:16 from William Lachance's Log

Further meditative practice

[Image: biodome]

Okay, remember last time when I said I was going to continue my “sham of a human existence” and not commit to a Zen practice? Well, I came back to the idea sooner than I thought: the experience was just too compelling for me not to do some further exploration. By some strange coincidence, Hacker News had a great thread on meditation just after I wrote my last blog entry, where a few people recommended a book called Mindfulness in Plain English. I figured doing meditation at home didn’t involve any kind of huge commitment (don’t like it? just stop!), so I decided to order it online and give it a try.

Mindfulness in Plain English is really fascinating stuff. It describes how to do a type of Vipassana (insight) meditation, which is practiced with a great deal of ritual in places like Thailand, India, and Sri Lanka. The book, however, strips out most of the ritual and just gives you a set of techniques that is quite accessible for a (presumably) western audience. It seems like the goal of Vipassana is quite similar to that of Zen (enlightenment; release from attachment and dualism), though the methods and rituals around it are quite different (e.g. there are no koans). Perhaps it’s akin to the difference between GIMP and Photoshop: just as those two programs are both aimed at the manipulation of images, Vipassana and Zen are both aimed at the manipulation of the mind. There are differences in the script of how to do so, but the overarching purpose is the same.

Regardless of the ultimate differences between the two traditions, the portion of the Vipassana method that the book describes is almost exactly what I tried at the Zen workshop: sit still and pay attention to your breathing. There are a few minor differences in terms of the suggested posture (the book recommends either sitting cross-legged or in the lotus position) and the focal point (Mindfulness recommends the tip of the nostrils). But essentially it’s the same stuff. Focus on the breath — counting it if necessary, rinse, repeat.

As I mentioned before, this is actually really hard to do properly. The mind keeps wandering and wandering on all sorts of tangents: plans, daydreams, even thoughts about the meditation itself. Where I found Mindfulness in Plain English helpful was in the advice it gave for dealing with this “monkey mind” phenomenon. The subject is dealt with throughout the book (two chapters are devoted to it and nothing else), but all the advice boils down to “treat it as part of the meditation”. Don’t try to avoid it; just treat it as something to be aware of in the same way as breathing. Then, once you have acknowledged it, move your attention back to the breath.

“Mindfulness” can be described as a non-judgemental awareness of what we are doing (and what we are supposed to be doing). Every time a distraction is noticed, felt, and understood, you’ve just experienced some approximation of the end goal of the meditation. As with other things (an exercise regimen, learning to play a musical instrument), every small victory should push you further along the path to where you want to go. With enough practice, mindfulness might just become part of your day-to-day experience.

Or so I’m told. Up to now, I haven’t enjoyed any long-lasting effects aside from (possibly?) a bit more mental clarity in my day-to-day tasks. But I’ve found the meditation practice to be extremely interesting, both from the point of view of understanding my own thought and as something rather relaxing in and of itself. So while I’m curious as to what comes next, I’m happy enough with things as they are in the present. More updates as appropriate.

Syndicated 2013-04-28 20:55:18 from William Lachance's Log

Actual useful FirefoxOS Eideticker results at last

Another update on getting Eideticker working with FirefoxOS. Once again this is sort of high-level; I’m looking forward to writing something more in-depth soon, now that we have the basics working. :)

I finally got the last kinks out of the rig I was using to capture live video from FirefoxOS phones using the Point Grey devices last week. To get reasonable results I had to write some custom code to isolate the actual device screen from the rest of the capture, among a few other things. The setup looks interesting (reminds me a bit of something out of The War of the Worlds):

[Image: eideticker-pointgrey-mounted]
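
The screen-isolation code mentioned above boils down to finding the bounding box of the device screen within the camera frame. One plausible approach (an illustrative sketch, not necessarily what Eideticker does) is to look for the backlit screen’s bright pixels:

    import numpy as np

    def find_screen(frame, threshold=40):
        """Find the bounding box of the device screen in a camera frame
        (an H x W x 3 numpy array) by looking for bright pixels: the
        backlit screen stands out against a darker background."""
        bright = frame.mean(axis=2) > threshold
        rows = np.where(bright.any(axis=1))[0]
        cols = np.where(bright.any(axis=0))[0]
        # top, bottom, left, right edges of the screen region
        return rows[0], rows[-1], cols[0], cols[-1]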

Here’s some example video of a test I wrote to measure contacts scrolling performance (measured at a very respectable 44 frames per second, in case you were wondering):

Surprisingly enough, I didn’t wind up having to write any code to compensate for a noisy image. Of course there’s a certain amount of variance in every frame depending on how much light is hitting the camera sensor at any particular moment, but apparently not enough to interfere with getting useful results in the tests I’ve been running.
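
The frames-per-second figure falls out of the same frame-differencing idea, with a tolerance added so that sensor noise doesn’t register as change. Roughly (a sketch; the tolerance value here is a placeholder, not a tuned number):

    import numpy as np

    def effective_fps(frames, capture_fps, tolerance=0.01):
        """Count frames that meaningfully differ from their predecessor
        and divide by the capture duration. `tolerance` is the fraction
        of changed pixels below which two frames are treated as
        identical (this absorbs per-frame camera sensor noise)."""
        changed = sum(1 for i in range(1, len(frames))
                      if np.mean(frames[i] != frames[i - 1]) > tolerance)
        return changed / (len(frames) / float(capture_fps))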

Likely next step: create some kind of chassis for mounting both the camera and the device on a permanent basis (instead of the ad hoc arrangement on my desk) so we can start running these sorts of tests on a daily basis, much like we currently do with Android on the Eideticker Dashboard.

As an aside, I’ve been really impressed with both the Marionette framework and the gaiatest Python module that was written for FirefoxOS. Writing the above test took just 5 minutes — and the code is quite straightforward. Quite the pleasant change from my various efforts in Android automation.
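
For the curious, such a test is roughly this shape. This is a hypothetical sketch from memory rather than the actual test: GaiaTestCase and apps.launch come from gaiatest, but the rest of the details are illustrative.

    from gaiatest import GaiaTestCase

    class TestContactsScrolling(GaiaTestCase):

        def test_scroll_contacts(self):
            # launch the contacts app via gaia's app management API
            self.apps.launch('Contacts')
            # crude scroll via script injection; a real test would use
            # Marionette's touch gestures to fling the list as a user would
            for _ in range(10):
                self.marionette.execute_script(
                    'window.wrappedJSObject.scrollBy(0, 200);')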

Syndicated 2013-04-22 15:32:51 from William Lachance's Log

The need for a modern open source email client and Geary’s fundraiser

One of my frustrations with the Linux desktop is the lack of an email client that’s in the same league as GMail or Apple’s mail.app. Thunderbird is ok as far as it goes (I use it for my day-to-day Mozilla correspondence), but I miss having a decent conversation view of email (yes, I tried the conversation view extension — it didn’t work particularly well), and the search functionality is rather slow and cumbersome. I’d like to be optimistic about these problems being fixed at some point… but after nearly 2 years of using the product without much visible improvement, my expectations of that happening are rather low.

The Yorba non-profit recently started a fundraiser to work on the next edition of Geary, an email client which I hope will fill the niche that I’m talking about. It’s still pretty rough around the edges, but even at this early stage the conversation view is beautiful and more or less exactly what I want. The example of Shotwell (their photo management application) suggests that they know a thing or two about creating robust and usable software, not a common thing in this day and age. In any case, their pitch was compelling enough for me to donate a few dollars to the cause. If you care about having a great email experience that is completely under your control (and not that of an advertising or product company with its own agenda), then maybe you could too?

Syndicated 2013-04-20 03:02:17 from William Lachance's Log

A visit to the Montreal Zen Center

[Image: The Road to the Montreal Zen Center]

So for a bit of a departure from the usual technical content, a personal anecdote. I went to the Montreal Zen Center today for a workshop, which was a most illuminating experience. I’d been pretty fascinated with the idea of zen for a while (see this post of mine from 2006, for example) but was pretty stuck on how to put it into practice (aside from being sure it was something you had to live). So, this was a step in that direction. After having gone to it, I wouldn’t say I’ve figured anything out (in fact I’m more confused than ever), but I would say one thing with conviction: this is the way to learn more.

It was pretty simple stuff: exactly what they describe on the web page I linked to. A short verbal introduction to some of the ideas of zen, then a tea break, then instruction on how to begin practising meditation, another tea break (this time with biscuits), then actually practising meditation, then question & answer about the meditation. It doesn’t really sound like much, and it wasn’t. But nonetheless I can’t stop thinking about the experience.

As far as I can gather, the “revelation” offered by Zen Buddhism is simple: our existence as separate, unique beings is an illusion of the mind. This illusion makes us suffer. However, it is possible with practice to overcome this illusion and realize your true nature as being one with the world. I’m probably butchering it a little by writing about it in this way; to a certain extent that’s on me, but it’s also rather unavoidable, since the concepts are in some sense beyond words (words imply a dualism). Regardless, the important thing isn’t to grasp zen intellectually, but to come to a natural understanding through the practice of meditation (aka “the practice”).

And on that note, the meditation is austere and almost certainly less than you’d expect. There is no prayer and very minimal ritual. Just a simple breath-counting exercise conducted in a seated posture for 20 minutes, followed by a short walking exercise that lasts 5 minutes, then the breath-counting exercise again for another 20 minutes. For its utter simplicity, I found it incredibly difficult. I imagine that, like anything, with weeks, months, or years of practice it (and the variations of it that experienced practitioners use, where they meditate on koans) would become easier.

I’m still giving thought to whether I want to take the next steps with them and begin a regular meditation practice. It sounds like really hard work (meditation practice six days a week by yourself, plus regular visits to the zen center), which brings up the question: why do you want to do this? There’s some kind of weird contradiction between realizing that you as a self don’t really exist and committing yourself radically to this kind of practice. The only thing I can call it is a “leap of faith”. My current thinking is that I’m not quite ready for that right now, but maybe in a while. For now I think I’m pretty happy going to yoga a few times a week and living my sham of a human existence. ;)

Syndicated 2013-03-24 20:47:18 from William Lachance's Log

Eideticker: Limitations in cross-browser performance testing

Last summer I wrote a bit about using Eideticker to measure the relative performance of Firefox for Android versus other browsers (Chrome, stock, etc.). At the time I was pretty optimistic about Eideticker’s usefulness as a truly “objective” measure of user experience that would give us a more accurate view of how we compared against the competition than traditional benchmarking suites (which, more often than not, measure things that a user will never see when browsing the web normally). Since then, I’ve discovered some things, and there have been some developments in the “state of the art” of mobile browsing, that have caused me to reconsider that view. While I haven’t given up entirely on this concept (and I’m still very much convinced of Eideticker’s utility as an internal benchmarking tool), there are definitely some limitations in terms of what we can do that I’m not sure how to overcome.

Essentially, there are currently three different types of Eideticker performance tests:

  • Animation tests: Measure the smoothness of an animation by comparing frames and seeing how many are different. Currently the only example of this is the canvas “clock” test, but many others are possible.
  • Startup tests: Measure the amount of time it takes from when the application is launched to when the browser is fully running/available. There are currently two variants of this test in the dashboard, both of which measure the time taken to fully render Firefox’s home screen (the only difference between the two is whether the browser profile is fully initialized). The dirty profile benchmark probably most closely resembles what a user would usually experience.
  • Scrolling tests: Measure the amount of undrawn area while the user is panning a website. Most of the current Eideticker tests are of this kind. A good example of this is the taskjs benchmark.

In this blog post, I’m going to focus on startup and scrolling tests. Animation tests are interesting, but they are also generally the sorts of tests that are easiest to measure in synthetic ways (e.g. by putting a frame counter in your JavaScript code) and have thus far not been a huge focus for Eideticker development.

As it turns out, it’s unfortunately been rather difficult to create truly objective tests which measure the difference between browsers in these two categories. I’ll go over them in order.

Startup tests

There are essentially two types of startup tests: one where you measure the amount of time it takes to get to the browser’s home screen when you explicitly launch the app (e.g. by pressing the Firefox icon in the app chooser), and another where you load a web page in a browser from another app (e.g. by clicking on a link in the Twitter application).

The second is actually fairly easy to test across browsers, although we are not currently doing so. There’s not really a good reason for that; it was just an oversight, so I filed bug 852744 to add something like this.

The first case (startup to the browser’s homescreen) is a bit more difficult. The problem here is that, in a nutshell, an apples to apples comparison is very difficult if not impossible, simply because different browsers do different things when the user presses the application icon. Here’s what we see with Firefox:

And here’s what we see with Chrome:

And here’s what we see with the stock browser:

As you can see, Chrome and the stock browser are totally different: they try to “restore” the browser back to its state from the last time (in Chrome’s case, I was last visiting taskjs.org; in the stock browser’s case, I was just on the homepage).

Personally I prefer Firefox’s behaviour (generally I want to browse somewhere new when I press the icon on my phone), but that’s really beside the point. It’s possible to hack around what Chrome is doing by restoring the profile between sessions to some sort of clean “new tab” state, but at that point you’re not really reproducing a realistic user scenario. Sure, we can draw a comparison, but how valid is it really? It seems to me that the comparison is mostly only useful in a very broad “how quickly does the user see something useful” sense.

Panning tests

I had quite a bit of hope for these initially. They seemed like a place where Eideticker could do something that conventional benchmarking suites couldn’t, as things like panning a web page are not presently possible to do in JavaScript. The main measure I tried to compare against was something called “checkerboarding”, which essentially represents the amount of time that the user waits for the page to redraw when panning around.

At the time that I wrote these tests, most browsers displayed regions that were not yet drawn while panning using the page background. We figured that it would thus be possible to detect regions of the page which were not yet drawn by looking for the background color while initiating a panning action. I thus hacked up existing web pages to have a magenta background, then wrote some image analysis code to detect regions that were that color (under the assumption that magenta is only rarely seen in webpages). It worked pretty well.
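
The analysis side of that is only a few lines of numpy: classify each pixel by its distance from the known background colour and report the undrawn fraction. Something like this sketch (written for illustration under the same magenta-background trick; the tolerance is arbitrary):

    import numpy as np

    MAGENTA = np.array([255, 0, 255])

    def checkerboard_fraction(frame, tolerance=20):
        """Fraction of a frame (an H x W x 3 RGB numpy array) that is
        still undrawn, approximated as pixels within `tolerance` of the
        magenta background we gave the test page."""
        close = np.all(np.abs(frame.astype(int) - MAGENTA) <= tolerance,
                       axis=2)
        return close.mean()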

The world’s moved on a bit since I wrote that: modern browsers like Chrome and Firefox use something like progressive drawing to display a lower-resolution “tile” where possible, so the user at least sees something resembling the actual page while panning on a slower device. To see what I mean, try visiting a slow-to-render site like taskjs.org and panning down quickly. You should see something like this (click to expand):

Unfortunately, while this is certainly a better user experience, it is not so easy to detect and measure. :) For Firefox, we’ve disabled this behaviour so that we see the old checkerboard pattern. This is useful for our internal measurements (we can see whether both our drawing code and our heuristics about when to draw are getting better or worse over time), but it only works for us.

If anyone has any suggestions on what to do here, let me know, as I’m a bit stuck. There are other metrics we could still compare against (e.g. how smooth the panning animation is, in frames per second), but these aren’t nearly as interesting.

Syndicated 2013-03-20 22:29:20 from William Lachance's Log

Documentation for mozdevice

Just wanted to give a quick heads up that as part of the ateam’s ongoing effort to improve the documentation of our automated testing infrastructure, we now have online documentation for mozdevice, the python library we use for interacting with Android- and FirefoxOS-based devices in automated testing.

Mozdevice is used in pretty much every one of our testing frameworks that has mobile support, including mochitest, reftest, talos, autophone, and eideticker. Additionally, mozdevice is used by release engineering to clean up, monitor, and otherwise manage the hundred-odd Tegra and Panda development boards that we use in tbpl. See sut_tools (old, buildbot-based, what we currently use) and mozpool (the new and shiny future).
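
To give a sense of mozdevice’s API, here’s a minimal usage sketch using its ADB backend (method names are from my recollection of the library and may differ slightly between versions):

    from mozdevice import DeviceManagerADB

    # connect to the first device visible to adb
    dm = DeviceManagerADB()
    # push a local file to the device...
    dm.pushFile('ntpclient', '/data/local/ntpclient')
    # ...and run a shell command, raising an exception on failure
    print(dm.shellCheckOutput(['ls', '-l', '/data/local']))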

Syndicated 2013-03-11 13:53:57 from William Lachance's Log

Follow up on “Finding a Camera for Eideticker”

Quick update on my last post about finding some kind of camera suitable for use in automated performance testing of Fennec and B2G with Eideticker. Shortly after I wrote that, I found out about a company called Point Grey Research which manufactures custom web cameras intended for exactly the sorts of things we’re doing. Initial results with the Flea3 camera I ordered from them are quite promising:

(the actual capture is even higher quality than that would suggest… some detail is lost in the conversion to webm)

There seems to be some sort of issue with the white balance in that capture which is causing a flashing effect (I suspect this just involves flipping some kind of software setting… still trying to grok their SDK), and we’ll need to create some sort of enclosure for the setup so ambient light doesn’t interfere with the capture… but overall I’m pretty optimistic about this baby. 60 frames per second, very high resolution (1280×960), no issues with HDMI (since it’s just a USB camera), relatively inexpensive.

Syndicated 2013-03-08 23:10:31 from William Lachance's Log
