rbp is currently certified at Journeyer level.

Name: Rodrigo Pimentel
Member since: 2000-03-15 21:20:56
Last Login: 2009-07-28 23:11:26


Homepage: http://isnomore.net

Notes:

I've been a Linux and Free Software user and advocate since 1995 (and a developer since a bit later). Currently, my main activity is programming (in Python, whenever I can). I was president of LinuxSP, a Linux Users Association, throughout its existence. There's more about me and the stuff I'm working on (including a blog) at http://isnomore.net

Recent blog entries by rbp

Syndication: RSS 2.0

Doctor Who: where do I start?

As we approach series 8, a friend asked, "So, how do I watch Doctor Who? Do I start from the beginning? Where is the beginning? What do I do??"

This is a common question which I myself asked once, and getting an answer (from a Whovian friend) helped me not be overwhelmed by over 50 years of episodes. So here is my take on it.

I'm assuming that the intent here is to watch past episodes to get a feel for it and understand context. You could of course simply start with the new, 8th season and move on from there, but there is so much good Who to watch! Also, Doctor Who builds on itself quite a lot, and, while usually not required, it's a lot more fun to get the references.

First of all, a quick summary of terminology and broadcasting history: Doctor Who was first broadcast in 1963 (with, obviously, the First Doctor), and went on somewhat steadily until the 90s (with the 8th Doctor appearing in a one-off movie). We'll call it "the original series", or "the old series". It then went on a hiatus for about 10 years. In 2005 the series resumed, with the 9th Doctor, and has been going on uninterrupted since. This is "the new series", or "the 2005 series".

Note that, in the Doctor Who universe, it's all the same Doctor. Same storyline and everything. The 2005 series was not a "reboot", not a "remake", not a "reimagining". It's just new episodes of the same series.

So, my general advice is: start somewhere in the new series until you've watched the whole of it. Then, if you like it, go back and watch the old series (which has quite a different pace, and not everybody gets into it).

Having said all this, there are a few different options of specifically where to start, depending on your level of commitment:

Low commitment, or "Let's see what this is about"

If you just want to watch one episode and see what the fuss is about, I recommend watching "Blink" (10th Doctor, episode 3.10 of the 2005 series). When I first watched it, I played it again as soon as it was over :)

Alternatively (or following that), you can watch the double episode "Silence in the Library / Forest of the Dead" (10th Doctor, episodes 4.9 and 4.10). The final scene is slightly silly, but overall it's excellent.

More alternatively (or following the two above), watch the 11th Doctor's first episode, "The Eleventh Hour" (episode 5.1). It works particularly well as an introduction, as both the Doctor and the production team are new.

Finally, even more alternatively, and to mention an episode with the 9th Doctor (remember, the first one to appear on the new series), you can watch "The Empty Child / The Doctor Dances" (episodes 1.9 and 1.10). An excellent sequence, with an excellent - and, in my opinion, underrated - Doctor.

At any point in this sequence (including at the end of it), you can decide you like Doctor Who after all, and jump straight to one of the following:

Small marathon, or "Let's see one Doctor from beginning to end"

If you're in the mood for a more uninterrupted marathon, start with the 11th Doctor (series 5) and watch it all the way through to series 7. Remember not to skip the specials, including the 50th-anniversary one.

Alternatively, start with "Silence in the Library" (episode 4.9, mentioned above) and watch it from there on. This means a few episodes at the end of series 4, the few specials that comprised the 2009 "series", and then the 11th Doctor. I love this final sequence of the 10th Doctor's episodes.

Watch it all the way to the end, then possibly any new episodes of series 8 that might have already aired (with the new 12th Doctor), then go back to 2005 and watch all that's left, in order.

Or...

Full Who Marathon, or "I want it all!"

If you think you want to watch it all anyway, and are willing to forgive a period of some inconsistency as they adjusted into the new series (but even then with excellent episodes and, again, in my opinion, an excellent Doctor), just go with series 1 of the 2005 series. Watch it all, rewatch a few, memorise lines; from then on it's up to you :)

Now, if you'll excuse me, after all this Who talk I feel like I have a marathon to start...

Syndicated 2014-08-22 16:26:45 from Bits of rbp - isnomore.net

The evolution of a Lego Mindstorms Mars Rover

Last weekend I (oh, and about 9000 other people) participated in NASA's International Space Apps Challenge. My team worked on the Curiosity at Home challenge, split into three parts: translating NASA's SPICE data format to a more readable form, parsing that into commands to the rover, and building a representation of the Curiosity Mars rover itself.

The code is available on GitHub. It still needs some work, but, you know, hackathon.

I worked on building the rover, using Lego Mindstorms, and it proved to be trickier than I had anticipated. Most times it would look great, but then refuse to steer or even move at all, as soon as some weight was put onto it. And by weight I mean the NXT brick, which we felt was an indispensable component of the rover.

I'm in the process of disassembling the rover and taking photos of it, so that I can then rebuild it and document the build steps. But, in the meantime, a quick recap of how it evolved throughout the (mostly sleepless) weekend. Unfortunately, I don't have pictures of every intermediate version.

Iteration 2

By this point, we had an initial, flimsier version, but I wanted more robustness, as well as proper front-wheel steering. In this iteration, a single motor powered all four wheels. Here it is (with Frits, as a bonus):

Lego rover, iteration 2

Iteration 6. Probably

Yeah, I don't remember exactly which iteration these pictures correspond to. Have I mentioned "sleepless"?

Anyway, you'll notice this version is shorter, meaning less strain on the middle section, better distribution of weight and, we hoped, better steering. By then, we had already moved to a motor powering only two wheels, and by this iteration we had finally started using two separate motors, one for each rear wheel.

Lego rover, iteration 6

Iteration 8, I think

Shorter, sturdier, and uses different rotations of the back wheels for steering, as well as the front wheel gears.

I was disappointed with the middle wheels; by this point they were mostly just for show. But the deadline was approaching, and we had to make decisions.

Lego rover, iteration 8

Iteration 10, final version

Not too many changes from the previous iteration, mostly some incremental adjustments. This is what we presented, and it worked reasonably well (all things considered).

Lego rover, iteration 10 (final)

Syndicated 2013-04-23 19:48:58 from Bits of rbp - isnomore.net

ASE is now SL4A! Upgrade (and here's how)!

So, SL4A r0 is out! This is good old Android Scripting Environment, or ASE (remember?), renamed to Scripting Layer for Android, but that's not all. The release also brings a lot of really cool features, such as:

  • Interpreters distributed as their own APKs. Plus the coolness factor of seeing "Python for Android" on the installer. Downloading them is a bit weird, though, as it opens the browser, and hitting the Back button takes you to the homepage, not back to SL4A.

  • Scripts themselves can be packed into an APK

  • Support for multiple scripts running simultaneously, in the background.

And a lot more. Do read the announcement.

So, yeah, head over to the project's page (BTW, why isn't it on the Market?) and install the new version.

However, if you already used ASE, you'll notice that the new package no longer replaces the old ASE installation. It installs as a different app. No problems so far.

But it also won't find your old scripts! Oops... SL4A now stores (and looks for) its scripts on /sdcard/sl4a, instead of the previously used /sdcard/ase.

Fortunately, that's very easy to fix, with a small shell script running on SL4A itself:

  # Just install this on your SL4A and run it once.
old="/sdcard/ase/scripts"
new="/sdcard/sl4a/scripts"
enough () {
    echo "$1"
    echo "Press ENTER to exit... "
    read foo
    exit "$2"
}

if ! cd "$old"; then
    enough "Could not cd to old scripts directory $old. Sorry." 1
fi
if [ ! -d "$new" ]; then
    if ! mkdir -p "$new"; then
        enough "Could not create new scripts directory $new. Sorry." 1
    fi
fi
find . -mindepth 1 -type d -exec mkdir -p "$new/{}" \; 2>/dev/null
find . -type f | while read line; do
    if [ ! -e "$new/$line" ]; then
        echo "$line -> $new/$line"
        cp "$line" "$new/$line"
    fi
done
enough "Done." 0

ase_scripts_to_sl4a.sh

This basically just copies the files over (including from any subdirectories you might have), without overwriting any files that already exist on the SL4A scripts directory. You can install it from the QR Code on the right (click it for a larger version). I realise that I should put SL4A's new functionality to test and pack the script in an APK file, but I'm feeling a bit lazy for that right now. If enough people actually use the script, I'll give it a shot :)

I'm using shell because it comes with SL4A, so you don't have to download any specific interpreter. Also, I'm hardcoding the /sdcard path, since I believe SL4A doesn't expose the getExternalStorageDir() call. If your SD card's mount point is somewhere else, please adjust the script accordingly (or, if there's another commonly used path, let me know and I'll fix it here).

Enough for now. Let me know if this script has been useful, or if you have suggestions.

Have fun with SL4A!

Syndicated 2010-08-05 23:50:11 from Bits of rbp - isnomore.net

Automated Python testing with nose and tdaemon

If you're testing your code at all (and you are, right?), it's awfully convenient to automatically have your test suite run whenever something in your project changes. This is particularly handy if you're doing test-driven development (TDD), where you'll be writing a lot of tests and need immediate feedback on them.

With Python this is made easy with the help of nose and tdaemon.

In a hurry? Jump straight to the summary!

Installing

Both tdaemon and nose can be installed via Cheeseshop (or "PyPI", for the suits), using pip:

  sudo pip install nose tdaemon

(You might not need sudo, depending on your setup).

An interlude: pip versus easy_install

You can use easy_install instead of pip. It should work just fine, for what we're trying to do here. But do yourself a favor and switch over to pip. It's as simple as

  sudo easy_install pip

Another interlude: whither tdaemon?

If you google "tdaemon", the first result is a github tree from Bruno Bord, tdaemon's author. The version at Cheeseshop (yes, I'll keep calling it "Cheeseshop", damn it!) lists John Paulett as author and points back to the github repository as its home page. Both versions are almost exactly alike, except that the Cheeseshop one has a slight enhancement (a command-line argument to ignore specific directories). John Paulett took tdaemon, added that feature and packaged it for Cheeseshop. So we'll use his version.

Getting cute notifications

So far, if you simply execute tdaemon on a terminal, it'll monitor the current directory and run nosetests whenever it detects a change. Which is fine, but I don't want to switch to the terminal all the time while I'm programming, if I don't have to. So let's arrange our environment so that we get visual alerts every time the tests are run.

On Mac OS X

I assume you already have Growl on your Mac. If you don't, install it; it makes your life easier (if you don't know where, it should be in the bottom row of your System Preferences panel).

The NoseGrowl nose plugin provides Growl notifications. Unfortunately, NoseGrowl installation is currently broken: there is only an egg for Python 2.5 on Cheeseshop, and the source code on bitbucket has a bug in setup.py. But the latter is easy to fix:

  hg clone http://bitbucket.org/crankycoder/nosegrowl
cd nosegrowl/nose-growl
# This next command simply removes a specific line from setup.py
# If you want to, edit setup.py and remove it yourself
# (and blame OSX for not shipping GNU sed)
echo -e "/install_requires=\['py-Growl'\],$/d\nw" | ed setup.py
python setup.py install

By the way, NoseGrowl's author, Victor "crankycoder" Ng, told me a few months ago that he was looking for someone to take over for him. So, if you find the project useful, please consider talking to Victor and volunteering to maintain it. Might make him less cranky :)

On Linux (Gnome)

Michael Gundlach adapted NoseGrowl to use Gnome's notification system. His version is called NoseNotify and can be installed directly from Cheeseshop:

  sudo pip install NoseNotify

I don't know if there is a notification plugin for KDE. If you do, let me know and I'll add it here!

Putting it all together

Open a terminal window and cd to the root directory of your project (tdaemon recursively looks at everything down from there).

If you're on OS X, type:

  tdaemon --custom-args="--with-growl"

If you're on Linux, type:

  tdaemon --custom-args="--with-notify"

Since you're passing custom arguments to nosetests, tdaemon will ask you to confirm that this is the command you want to run. Simply type "y".

Now you can create a file called, say, "tests.py", add tests to it and watch what happens as you save it!
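If you don't have a test handy, here's a minimal nose-style module to try (the file name "tests.py" and the function below are just placeholder examples, not from any real project):

```python
# tests.py - a minimal nose-style test module.
# nose collects functions whose names start with "test".

def add(a, b):
    """A trivial function to exercise."""
    return a + b

def test_add_positive():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0
```

Save it, and tdaemon should detect the change and re-run nosetests, popping up a green (or red) notification. Break an assertion on purpose to see the red one.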

tdaemon-red

tdaemon-green


Summary

Install nose and tdaemon:

  sudo pip install nose tdaemon

On Mac OS X, install NoseGrowl:

  hg clone http://bitbucket.org/crankycoder/nosegrowl
cd nosegrowl/nose-growl
# This next command simply removes a specific line from setup.py
# If you want to, edit setup.py and remove it yourself
# (and blame OSX for not shipping GNU sed)
echo -e "/install_requires=\['py-Growl'\],$/d\nw" | ed setup.py
sudo python setup.py install

and run as:

  tdaemon --custom-args="--with-growl"

On Linux, install NoseNotify:

  sudo pip install NoseNotify

and run as:

  tdaemon --custom-args="--with-notify"

That's it. Happy coding!

Syndicated 2010-08-01 17:57:23 from Bits of rbp - isnomore.net

Yahoo! Open Hack Day Brasil 2010

On March 20th and 21st, Yahoo! Brazil brought us our second Open Hack Day. I'd been to the previous one, in 2008, and it was amazing! Our project even won in a newly-created category, aptly named "What the Hack?"

This year, I wanted once again to try a hardware hack, using whichever parts I could get my hands on. Not necessarily anything useful, though. That's what I love about the Hack Day. I can do useful stuff throughout the rest of the year :)

Image © brhackday, used with permission

Yahoo! Hack Days

Before the 2008 Hack Day, I had pretty much written off Yahoo! as a company that was no more. I wasn't even particularly interested in the event, and only decided to go at the last minute. Boy, what a change in perspective. Of course, Yahoo!'s São Paulo team has some very clever people. But, more generally, I was very impressed with the data-gathering tools that Yahoo! had started offering. YQL is simply fantastic. It perfectly captures what the Internet is about, data-wise (I'm not saying it's perfect, but it embodies the right spirit). I hadn't had that much geek fun in a long time. So you can probably tell my expectations were high for this year's edition. Could Yahoo! deliver?

No need to fret, they knew what they were doing. Just put some 250 hackers in a fishbowl, give them food, coffee and wifi (surprisingly good, some silly proxy restrictions excluded), show them Monty Python, and wait for it!

Our hack, and our hackers

Image © brhackday, used with permission

Even before the announcement of this year's Hack Day, I'd been toying with the (admittedly silly) idea of a firefighting robot, that lurked online waiting for people to report fires anywhere in the world. It would then bravely roll over to wherever the fire was, and put it out. Bravely.

Trouble is, it'd probably have to be one gargantuan robot. So I thought I'd settle for a more modest, Lego-built, Arduino-controlled one. With the Hack Day approaching, I suggested this project to a few friends and we created a Wave to discuss the idea. I was planning to use a box of old Lego pieces from my childhood (see, mom? I told you I'd eventually use those again!) and a couple of servo motors I had lying around, but Rodolpho brought his Mindstorms NXT into the picture, making the project much cooler (and, incidentally, much more manoeuvrable).

On Saturday morning I rode with Gola to the (really cool) auditorium of Senac University, where the previous Hack Day had already been hosted. There we met Rodolpho and Mobi, our original team. We had found out the day before that there would be a limit of 4 people on each team, this year. In 2008 our team had been comprised of... 12? 15? I never even knew. We'd simply started building weird, blinking stuff, and people had gathered around for the fun. We got, therefore, a bit disappointed with this edition's limit. But, since we weren't really expecting to win anything (and therefore disqualification wasn't an issue), we bent the rules a bit, and Werneck, Lucmult and Mauro hacked with us. Also, Aline joined us a bit later. Since Werneck and Lucmult eventually had to leave and didn't return for Sunday, and Aline and Mobi didn't program, I think we were sort of in the clear. Technically. Sort of.

And then we started building.

What we built

The robot, in construction. Photo by alickel

The body of the robot ended up using mainly the Mindstorms parts. We had started building a larger, sturdier body with assorted Lego pieces on top of a rigid board, pulled by front-wheel drive with four wheels in the front and a loose trailing one. But, after a lot of testing, the loose wheel kept veering the robot off track, so we rebuilt everything with a lighter, smaller frame and a pair of caterpillar tracks. In hindsight, I think we should have kept the larger body (even if rethinking wheel traction), as it gave the robot more stability, which we would come to miss later. But we had to make a quick decision, and we worked with what we had. The NXT controller sat on the robot and communicated with an external server via bluetooth, relaying control input to the wheels and sending back odometry information.

This external server (actually, Rodolpho's notebook) continuously used YQL to query Twitter, Yahoo! Meme and news feeds (which I originally wanted to aggregate using Yahoo! Pipes, but we didn't have time for it), searching for people reporting fires - a simple string search, filtered by the Yahoo! Term Extraction API. Whenever there were reports, the server would send them through the Placemaker API to extract location information. It then determined which location in the world was in most urgent need of aid (by number of reports).
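The "most urgent" choice was, in essence, a frequency count over the extracted locations. A toy sketch of just that step (the report list below is invented data for illustration, not our actual hack code):

```python
# Toy sketch: pick the location with the most fire reports.
# The report list is invented example data.
from collections import Counter

reports = ["Sao Paulo", "London", "Sao Paulo", "Tokyo", "Sao Paulo"]

def most_urgent(locations):
    """Return the location mentioned most often, or None if there are no reports."""
    if not locations:
        return None
    (location, count), = Counter(locations).most_common(1)
    return location

print(most_urgent(reports))  # Sao Paulo
```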

Now, the tricky part: the robot needed to be aware of a rectangular projection of the world onto the room, and to know its own location on it. I don't mean a visual projection. We did consider it, but realised that it would be unfeasible for the Hack Day. We toyed with the idea of printing (or drawing) a world map on large sheets and taping them to the ground, but the robot would surely slip, trip, or tear the sheets. So we settled for an abstract projection, and, based on information sent back from the robot, plotted its estimated current position in the "world", at each instant, using Yahoo! Maps and the Yahoo! Geocoding API (via geopy).

To locate itself, the robot relied on Mindstorms' odometry feedback, and on QR Code markers distributed around the room, read by an Android phone using Python (from the ASE) and the Barcode Scanner app. The Android phone sent the information encoded in each QR Code to a custom service built with Python's SimpleHTTPServer, which our server polled. The server then sent back new movement controls, according to the robot's current position and desired target. We tried to use Mindstorms' sonar and colour sensors, but they just wouldn't work reliably with the Python interface (which we needed to send commands programmatically via bluetooth).
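To give a feel for that plumbing, here's a rough sketch of a phone-side position service: the scanner reports the last QR marker it saw, and the control server polls for it. The endpoint names are invented for illustration, and I'm using Python 3's http.server rather than the Python 2 SimpleHTTPServer we actually had on the phone:

```python
# Hypothetical sketch of a QR-marker position service.
# Endpoint names (/report, /position) are invented for illustration.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

last_marker = {"id": None}  # most recent QR code seen by the phone

class PositionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path == "/report":
            # The phone reports a freshly scanned marker: /report?marker=B3
            qs = parse_qs(url.query)
            last_marker["id"] = qs.get("marker", [None])[0]
            body = b"ok"
        elif url.path == "/position":
            # The control server polls for the last known marker.
            body = (last_marker["id"] or "unknown").encode()
        else:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port=0):
    """Start the service on a background thread; returns (server, port)."""
    server = HTTPServer(("127.0.0.1", port), PositionHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

In the real hack, the control loop then mapped the reported marker to a position in the abstract "world" projection and computed new wheel commands from it.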

A happy robot. Image © codepo8 CC BY 2.0

No one likes an impersonal robot, and ours, accordingly, had a face in order to convey emotions. We used an XO laptop as the head, and defined that the robot would have pre-determined emotions depending on the situation: it would be "at rest" when there were no fires going on, "worried" when it was going towards a fire, and "happy" when it had put the fire out. A Python script running in the background continuously searched Flickr (using YQL) for expressions associated with each of these emotions, and downloaded a number of related pictures. Another script queried the robot's current emotional state (set by the server), and displayed an appropriate random subset of these pictures. For every few of them, we'd display a face that Aline had drawn specifically for that emotion, to make up for the fact that we couldn't be sure if the pictures would depict it (people give the weirdest tags and descriptions to their Flickr uploads...).

We originally meant to use an Arduino to control a servo that would squish some water from a syringe, thus putting out the fire once the robot got to its destination. But all that proved too much for a mere 24h, and we had to settle for a manual pump.

A camera-shy robot, and a sleepy presenter

Of course, we'd tested (almost) everything, and the robot worked (almost) perfectly. (Almost) Great. But, when it was time to go up on the stage, we hit a few snags.

First, there was only one large screen. Of course, we'd known that, but it hadn't occurred to us that people would need to see not only the robot (which was tiny, especially at a distance), but also the map showing its movement towards the fire, at several different moments. Yahoo! had predicted even such an eventuality (seriously, guys, kudos!) and provided us with a way to switch between our server's screen and the feed from a video camera which Aline used to film the robot. But I didn't manage to coordinate the switch properly, and we didn't get to show the animated map.

Also, we had already noticed that the XO weighed a lot compared to the rest of the robot (remember what I said about the previous, sturdier version?), and that, in order to steer, we needed to balance it carefully. I didn't, and the robot soon went out of its path.

Finally, we had used so many different technologies and APIs on this project, and most of them simply vanished from my mind when I started presenting! Note to self: next time, write some hints on the back of my hand...

All I have to say for myself is that I had slept for only about 1 hour since the previous morning, ours was one of the last projects to be presented, and by then I was very sleepy and barely able to react. I should have switched from the camera to the map more often, so people could see what was going on. I should have grabbed the robot when it went off track and restarted the demo. I should have asked my teammates for help when trying to list everything at work on the robot. Shame on me.

But, all in all, I think it was a great project, and I'm very proud of it. I'm especially proud of my team, you rock much more than I was able to show :)

And, next year, I promise to take a nap before presenting!

The Robot. Image © brhackday, used with permission

Syndicated 2010-04-05 22:26:24 from Bits of rbp - isnomore.net


rbp certified others as follows:

  • rbp certified acme as Journeyer
  • rbp certified thiagom as Apprentice
  • rbp certified brain as Journeyer
  • rbp certified bruder as Apprentice
  • rbp certified morcego as Journeyer
  • rbp certified riel as Master
  • rbp certified godoy as Apprentice

Others have certified rbp as follows:

  • morcego certified rbp as Journeyer
  • bruder certified rbp as Journeyer
  • godoy certified rbp as Apprentice
  • rw certified rbp as Journeyer
  • maragato certified rbp as Apprentice
  • jarod certified rbp as Journeyer
  • minami certified rbp as Journeyer


