Older blog entries for connolly (starting at number 104)

13 Feb 2012 (updated 14 Feb 2012 at 03:39 UTC) »

Got my data back from Mint, thanks to GnuCash/mysql

My ideal personal accounting system would

  • support double-entry accounting, with budgeting, reports, and charts
  • have an open architecture with
    • an SQL back-end
    • a flat-file serialization of the data suitable for use with version control
  • integrate with the Web, both
    • allowing access from any machine with a web browser
    • syncing with banking web sites

After trying Mint for a year and a half, I realized that while Web integration is nice, it's no good without double-entry integrity. While GnuCash's UI isn't as nice as modern web apps, it lets me keep my data in SQL, which keeps my options open.

Before Mint, I used Quicken for decades. I stopped paying for updates after Quicken 2001 and hence lost bank syncing. But I did find a flat-file serialization suitable for use with version control (and no, QIF doesn't cut it. See my
March 2006 item). And while Wine continues to support Quicken 2001 after all this time, I don't have any API to update Quicken's store. So there's no going back to Quicken after Mint.

Mint has no concept whatsoever of double-entry accounting. It will give you a balance for your bank account at the beginning of each month and a list of income and expenses in between. You might think that the old balance + income - expenses = new balance. You would be wrong.
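The invariant Mint lacks is easy to state in code: every transaction is a set of splits that sum to zero, so old balance + income - expenses = new balance holds by construction. A minimal sketch (account names and amounts are made up):

```python
# Double-entry bookkeeping in one invariant: a transaction's splits
# (account, amount-in-cents) always sum to zero.
def balanced(splits):
    return sum(amount for _, amount in splits) == 0

paycheck = [("Assets:Checking", 250000), ("Income:Salary", -250000)]
groceries = [("Expenses:Food", 4599), ("Assets:Checking", -4599)]
assert balanced(paycheck) and balanced(groceries)

def balance(account, transactions):
    # An account's balance is just the sum of its splits, so income
    # and expenses can never drift out of sync with the balance.
    return sum(amount for trx in transactions
               for acct, amount in trx if acct == account)

assert balance("Assets:Checking", [paycheck, groceries]) == 245401
```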

Mint fails to download a few credit card transactions on occasion, so it's unreliable for auditing. It relies on the user to notice duplicate transactions, so it's unreliable for budgeting. As to the idea that Mint's categorization would save me work, Marc Hedlund put it this way in Why Wesabe Lost to Mint:
I was focused on trying to make the usability of editing data as easy and functional as it could be; Mint was focused on making it so you never had to do that at all. Their approach completely kicked our approach's ass. (To be defensive for just a moment, their data accuracy -- how well they automatically edited -- was really low, and anyone who looked deeply into their data at Mint, especially in the beginning, was shocked at how inaccurate it was. The point, though, is hardly anyone seems to have looked.)
So I had to double-check the categorization. And since they lack support for any sort of transaction reconciliation, I had to cobble together something out of their tags to keep track of which transactions I had already reviewed. And sometimes, Mint just spontaneously threw away my work and changed the categories anyway. I know this because I carefully exported all my transactions in CSV format after each significant session and reviewed the diffs before checking them in to a version control repository.
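That export-and-diff routine is easy to reproduce; here is a sketch using difflib, with the two exports inlined as lists rather than read from files:

```python
import difflib

# Compare this session's CSV export against the last one checked in,
# so a silently changed category shows up as a pair of diff lines.
before = ["date,description,category",
          "2011-12-01,KWIK SHOP,Gas & Fuel"]
after = ["date,description,category",
         "2011-12-01,KWIK SHOP,Groceries"]

changes = [line
           for line in difflib.unified_diff(before, after, lineterm="")
           if line.startswith(("+", "-"))
           and not line.startswith(("+++", "---"))]

assert changes == ["-2011-12-01,KWIK SHOP,Gas & Fuel",
                   "+2011-12-01,KWIK SHOP,Groceries"]
```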

The UI for splitting transactions is incredibly tedious. And once you have split a transaction, you can no longer search for the transaction by the total.

I had to resort to a Google Docs spreadsheet to make up for limitations of Mint's budgeting. You can only budget for the current month: no longer-term planning, and no retrospective changes to the budget. On November 30, you don't have all your spending info for November, since transaction data takes a few days to flow through banks and credit card systems. But on Dec 1st, Mint will no longer let you re-allocate budget funds between November and later months. As if plans were the important thing. "Plans are worthless, but planning is everything." -- Eisenhower

Mint has a notes field, but won't let you search them and trains you not to use them by deleting your work if you change any other field in the transaction.

I was willing to risk giving them my bank passwords, since I audit everything pretty carefully, but their security story is a boldface lie:

Mint is a "read-only" service. You can organize and analyze your finances, but you can't move funds between–or out of–any account using Mint. And neither can anyone else.
They had my bank passwords to download transaction data. They could do anything I could do at my bank web site. They promise not to, but to say they (or anyone who hacks their system) cannot move funds is just a lie.

So enough is enough.

I went into mad mode over the holiday break, first exploring a greasemonkey userscript:

@description Mint: I want my data back

I was pleased to find that SQL support in GnuCash had matured as of the Dec 2010 release of version 2.4, and the SQL structure that GnuCash uses is quite straightforward: accounts, transactions, splits, etc. Using GUIDs instead of integers for primary keys is somewhat novel but works OK. Note that the GnuCash string form of a UUID has no '-' characters, so in mysql, I use replace(uuid(), '-', '').
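The same dash-free form is one line of Python, which is handy when generating keys from a script (a sketch of the format, not GnuCash's own code):

```python
import uuid

# GnuCash GUIDs are 32 hex characters: a UUID with the dashes dropped,
# the moral equivalent of mysql's replace(uuid(), '-', '').
def gnucash_guid():
    return uuid.uuid4().hex

guid = gnucash_guid()
assert len(guid) == 32 and "-" not in guid
```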

I went back to the last comprehensive financial snapshot that I trusted, i.e. my last quarterly balance sheet from Quicken before the Mint experiment. I didn't load the decade+ of flat-file transaction data that led up to that point, but I'm confident I could if I wanted to. For now, I just created an equity account for "Quicken transition" and used it to reproduce the balance sheet.
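The transition-equity trick is itself just one balanced transaction: each opening balance gets offset against the equity account. A sketch with made-up figures:

```python
# Offset every opening balance against a single equity account so the
# whole opening transaction sums to zero (amounts in cents, made up).
opening = {
    "Assets:Checking": 123456,
    "Assets:Savings": 500000,
    "Liabilities:VISA": -42000,
}

splits = list(opening.items())
splits.append(("Equity:Quicken transition",
               -sum(amount for _, amount in splits)))

assert sum(amount for _, amount in splits) == 0  # double-entry holds
```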

Since I didn't trust Mint to correctly enumerate transactions, I used OFX from my financial institutions to fill in the transaction information for the past year, reconciling statements as I went. (After getting the initial balance sync'd, reconciling statements was trivial, aside from glitches in my understanding of how GnuCash's OFX import UI worked.)

Then I sync'd the categorization info from Mint with GnuCash. While much of it was a one-time bulk import, running both systems in parallel for a short time was an important goal. This would require stable transaction identifiers from Mint, something they don't provide in their CSV export. While Mint doesn't advertise an API, fortunately, it was straightforward to reverse-engineer the way their Ajax client gets transaction data: mcc.py, my Mint cloud client, is only 100 lines of python.
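The sync step's core idea can be sketched without the full mcc.py: index each downloaded transaction by Mint's internal numeric id, the stable key the CSV export lacks. The field names below are illustrative, not a documented Mint API:

```python
import json

# Hypothetical sample shaped like the JSON the Ajax client receives;
# "id" stands in for Mint's internal transaction identifier.
payload = """[
 {"id": 308, "date": "12/28/2011", "merchant": "KWIK SHOP",
  "category": "Gas & Fuel", "amount": "$35.17"},
 {"id": 309, "date": "12/29/2011", "merchant": "HY-VEE",
  "category": "Groceries", "amount": "$81.24"}
]"""

# Keyed by id, a later download can be compared record-for-record,
# revealing any categories that changed behind my back.
by_id = dict((trx["id"], trx) for trx in json.loads(payload))

assert by_id[308]["category"] == "Gas & Fuel"
```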

I put some effort into trying to reproduce Mint's .csv export using my GnuCash database, but reached a point of diminishing returns. I do maintain the mint_re_export SQL view for version control purposes. I also discovered a version-control-friendly way to back up the whole mysql database:

$ mysqldump -u $LOGNAME --skip-dump-date --tab=$BAK_DIR -p $DB_NAME

Beware: mysqldump --tab writes timestamps in UTC by default, but mysqlimport interprets them in local time, with no timezone option. The workaround: set global time_zone = 'UTC' before running mysqlimport.

Also, if you use Ubuntu, like I do, and you don't specifically authorize mysqld to write there, AppArmor will stop it and you'll get a mysterious (Errcode: 13) when executing 'SELECT INTO OUTFILE'. You need to edit /etc/apparmor.d/local/usr.sbin.mysqld and add a line /bak/dir/** rw, .

One of the real tests of the results is doing my 2011 tax return. So far, I haven't had to log back in to Mint, though I have worked around shortcomings in the GnuCash UI using hand-crafted SQL or grep on the .csv export from Mint a few times.

Highlights from the changelog include:

152:955de3fe6de7 2012-01-16 budget loads into gnucash DB
151:3945105a6728 2012-01-16 budget_sync.py groks my budget spreadsheet
145:d1854d4ef26c 2011-12-31 handle split transactions using mint parent/child info rather than guessing
144:bbd55121161f 2011-12-31 more straightforward account sync between mint and gnucash
141:48669cd9ec01 2011-12-30 oops; don't exclude the id column; that's the _whole point_!
140:76830ffc5cb2 2011-12-30 trx_explore supports mysql as well as sqlite; parses amount straightforwardly
139:42363cb4e1e0 2011-12-30 trx_explore with date handling loads thousands of mint transactions
137:dc26b4e483c6 2011-12-30 mint client fetches all transactions
135:503f9b7ef4af 2011-12-29 more matching work for credit cards
134:f9f7358da70c 2011-12-29 - incremental matching
133:2b463e2e73ff 2011-12-29 mint_re_export view is mostly working
130:023bbfc79025 2011-12-29 merge split transactions from mint into gnucash/OFX
129:f7301444308f 2011-12-29 updated OFX checking account data w.r.t. mint categorization work
127:2c0312b96f58 2011-12-28 figured out how to import mint accounts into gnucash DB
117:26fe6a26a345 2011-12-25 matching worked for 100 transactions (warnings/logging tamed)
110:1af3fef89487 2011-12-25 created SqlAlchemy object from JSON data
109:4fa21822a6c9 2011-12-24 explore gnucash sqlite file
106:80134e7a8730 2011-12-24 JSON dump of Mint transaction data
105:6d5bb42201c7 2011-12-24 got access to the transaction data
103:a4389fd1fcf6 2011-12-22 mint greasemonkey exploration (bookmarklet looks easier)

Syndicated 2012-02-13 19:08:00 (Updated 2012-02-14 03:26:41) from Dan Connolly

22 Jan 2012 (updated 13 Feb 2012 at 16:40 UTC) »

Remembering OS-9 on the CoCo

During an annual purge of old file boxes, I came across my 5 1/4 CoCo disks. Much of what I know about unix and linux actually dates back to OS-9 on the CoCo:
Even on the CoCo, a quite minimalist hardware platform, it was possible under OS-9/6809 Level One to have more than one interactive user running concurrently (for example, one on the console keyboard, another in the background, and perhaps a third interactively via a serial connection) as well as several other non-interactive processes. -- OS-9 - Wikipedia 
I wrote a shell in assembler; I ran across a hardcopy of the source a week or so ago. I wonder if the source is on these floppies. I made a copy on CD a few years back, before I decommissioned my last 5 1/4 disk drive.

Syndicated 2012-01-22 21:19:00 (Updated 2012-02-13 15:59:39) from Dan Connolly

There’s a Better Way to Build a Smart TV | The Official Roku Blog

This Roku Streaming Stick looks like a pretty good balance between the simplicity of integration and the upgradeability of componentization.
It makes me question my recent strategy of getting a really inexpensive TV (Haier L32D1120 32-Inch 720p LCD HDTV, Black on sale for $200) and streaming Blu-ray player (Panasonic DMP-BD75 Ultra-Fast Booting Blu-ray Disc Player $60). The Blu-ray player does Netflix pretty well, but the TV doesn't have the new MHL HDMI interface.

Syndicated 2012-01-21 08:34:00 (Updated 2012-01-21 08:34:59) from Dan Connolly

A big thanks for Web-iPhoto!

My wife does a photo shoot with the boys for the Christmas card each year. I wanted to share a digital copy of the photo, but our family photo archive is a mess, with N iPhoto albums on M macs and K backups on X linux boxes.

I know iPhoto is just JPGs and sqlite underneath, so it kills me that I can't just get at the photos with a web browser. I could code something up myself, but surely somebody has done it before, no? I've looked without luck before, but I guess I was using the wrong search terms. Today when I wished for "iphoto sqlite web server", lo! Merry Christmas to me!


Thank you, Dmytro Kovalov!

It works great on a huge iPhoto library backed up on this linux box.

Here's hoping I can install it on the macs in the house running various versions of OS X. I have lots of experience with python on macs, but not so much ruby. I sure hope I don't have to install XCode.

Syndicated 2012-01-03 00:23:00 (Updated 2012-01-03 00:27:03) from Dan Connolly

Capability Security in E, CoffeeScript, Python, Dart, and Scala

A couple months ago, I inherited some Java code and took on the task of fixing a bug in it. The bug turned out to be a consequence of a silent failure; eek! And there were precious few tests and no way to test the parts without being connected to LDAP servers and SQL databases and such. This started me on an exploration of current best practices in testing. And since the job of this code was policy enforcement around patient data, I could finally justify getting my hands dirty with capability-based security. I discovered, as many others have, that both testability and security are well served by some of the same basic object-oriented techniques.

Dependency injection frameworks always smelled like overkill to me, but after watching Miško Hevery on testability, I was convinced. If you're in the mood for text rather than video, see his Guide: Writing Testable Code. Basically, instead of having some policy enforcement object constructor call an LDAP connection constructor, the policy enforcement object takes the LDAP connection as a constructor argument. "Don't call us; we'll call you" is a handy mnemonic. This lets you substitute a mock LDAP connection for testing.
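A minimal Python sketch of the pattern, with a stub standing in for the LDAP server (all the names here are illustrative, not from the actual Java code):

```python
# Constructor injection: the enforcer asks for its directory rather
# than constructing an LDAP connection itself ("don't call us; we'll
# call you"), so a test can hand it a stub instead of a live server.
class PolicyEnforcer(object):
    def __init__(self, directory):   # injected, not constructed here
        self.directory = directory

    def may_view(self, user, record):
        return "researcher" in self.directory.roles(user)

class StubDirectory(object):         # stands in for the LDAP server
    def roles(self, user):
        return ["researcher"] if user == "alice" else []

enforcer = PolicyEnforcer(StubDirectory())
assert enforcer.may_view("alice", "chart-7")
assert not enforcer.may_view("mallory", "chart-7")
```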

It also forms patterns of cooperation without vulnerability.

For example, take a look at the simple money example in E and the underlying sealer/unsealer pattern.
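A rough Python rendering of the sealer/unsealer idea follows. It is a sketch of the object-graph structure only; Python's introspection loopholes mean it is not a real security boundary there:

```python
# Sealer/unsealer: seal() wraps a value in an opaque box; only the
# matching unseal() can recover it, via a slot shared by the pair.
def make_brand_pair():
    shared = []  # private channel reachable only inside this closure

    class Box(object):
        def __init__(self, value):
            self._deposit = lambda: shared.append(value)

    def seal(value):
        return Box(value)

    def unseal(box):
        del shared[:]
        box._deposit()       # only boxes from this pair fill our slot
        return shared.pop()  # raises IndexError for a foreign box

    return seal, unseal

seal, unseal = make_brand_pair()
envelope = seal("ten dollars")
assert unseal(envelope) == "ten dollars"
```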

I have been using these as an exercise to explore some of the recent programming language developments:

The CoffeeScript translation seems completely natural to me. Given the right static scope (i.e. without most of the JavaScript standard library), I think it has the same security properties as the E version. And the E idioms seemed to translate quite directly.

Python has not only the API authority issues, but also untold introspection loopholes. Plus, I had to kludge around read-only closures and no-assignment-in-lambdas; and while simulating E's method suite idiom is not too ugly, tools like pyflakes don't recognize the results.

Dart is a big disappointment. Everywhere else I look, Google is pushing capability security. But Dart lacks nested classes, so translating E method suites results in something that is only vaguely recognizable, let alone comprehensible.

Scala works reasonably well. The Java implementation of sealing relies more on strong typing than the object graph for rights amplification; I might want to think that over some more. Also, it's a little boring to spell out the types. I might have to try it in Haskell. But on the other hand, as Brendan Eich observes:
Dynamic languages are popular in large part because programmers can keep types latent in the code, with type checking done imperfectly (yet often more quickly and expressively) in the programmers’ heads and unit tests, and therefore programmers can do more with less code writing in a dynamic language than they could using a static language.
The balance between static and dynamic languages also shows up in development tools. I had Eclipse with the Joe-E verifier, maven, and mercurial all working together at home one evening. The code really does just about write itself at that point. But when I tried to reproduce it at work, I got so frustrated that I retreated to emacs and python, looking up function arguments manually. The python version of the project has gotten complex enough that I'm starting to miss some of the whole-program consistency that Java tools give, but I'm getting by with a bottom-up approach: flymake, doctest, and the like.

Syndicated 2011-11-23 22:44:00 (Updated 2011-11-23 23:57:11) from Dan Connolly

Medical Informatics, Peer Review, and Open Access

Three issues of JAMIA just arrived, weighing not just on my desk but also on my mind: success is defined by my peers in my new field, medical informatics, as publication in a journal where the readers have to pay for access. After fifteen years as an Open Web advocate, this grates on me.

But I see that change is already underway. While JAMIA is the top journal that I hear about in the office so far, a quick trip to Wikipedia shows that it's second in impact to an open-access journal: Journal of Medical Internet Research.

Syndicated 2011-11-21 14:35:00 (Updated 2011-11-21 14:55:21) from Dan Connolly

Secure Mashups: CSRF-resistant alternatives to WebID

I think WebID is headed in the wrong direction. It separates authorization from authentication, which is widely believed to be a good practice, but proves spectacularly bad practice when it leads to cross-site request forgery.  I have tried to explain my misgivings to the WebID proponents, but I didn't have much in the way of an alternative to suggest. Until today, when I found Sitelier and Belay Research.

While evaluating Spring Security today, I went looking to see if its role-based architecture is in any way compatible with capability-based approaches, and I found this, from the Sitelier guys:

In our view, the web right now is backwards: users have accounts on dozens of websites, all with their own logins and passwords, and our content and personal information is scattered all over the web, out of our control. Sitelier turns the situation around: when you install an app, you're effectively creating an account on your site for the app, which can then save its data (your data) there, so all your online information can live in one secure location that you control.
Replies pointed out related work such as Belay Research and emphasized usability research. Indeed, my understanding since at least as far back as my Dec 2008 post is that the capability approach is the necessary and sufficient solution to the problem of secure mashups; the only question is: given the worse-is-better tendency in software deployment, is there any chance we can move the state-of-the-art that far?

There are also some market forces to consider. If I host my own email, how do I get sub-second search a la ad-powered gmail?

Syndicated 2011-07-26 22:26:00 (Updated 2011-07-26 22:26:48) from Dan Connolly

The Voters First Pledge: what do my elected representatives have to say?

I find politics so distasteful that I rarely get directly involved, but on June 4, after I watched Inside Job, I felt compelled to exercise my right to petition government for redress of grievances. I wrote the following to my elected representatives, Moran and Roberts, via opencongress:
Representative democracy in America has clearly been corrupted by big-money interests.

The Fair Elections Now Act S.750 and the The Voters First Pledge look like reasonable steps, to me.

I don't see you among the supporters.

Please sign the pledge, or at least explain to me your position on the bill.

Thanks for your consideration and your service to our country.


Daniel W. Connolly
I got automated acknowledgement of receipt from both of their offices, but no response since. I don't expect more than a form letter. How long does it take to send one of those? Over a month, evidently.


Syndicated 2011-07-09 17:12:00 (Updated 2011-07-09 17:12:43) from Dan Connolly

Eliminating trackname collisions in multi-CD audiobook with mutagen

I wanted to listen to an audiobook on my android phone, so I ripped it (using banshee) and copied the tracks, but "track 1" from disc 2 overwrote "track 1" from disc 1.

So this little ditty uses mutagen to rename them to "Disc 01 Track 01" and "Disc 02 Track 01" respectively.

I have since discovered that ripping this audiobook with iTunes (which consults Gracenote where banshee consults MusicBrainz) yields track names like 1a, 1b, 1c, ..., 2a, 2b, 2c, ...

import sys
import os

# http://code.google.com/p/mutagen/wiki/Tutorial
import mutagen

def fix(album):
    for dirpath, dirnames, filenames in os.walk(album):
        for track in filenames:
            audio = mutagen.File(os.path.join(dirpath, track))
            print audio['album'], audio['title']
            # Prefix the title with the disc number so tracks from
            # different discs no longer collide.
            t = "Disc %02d Track %02d" % (int(audio['discnumber'][0]),
                                          int(audio['tracknumber'][0]))
            audio['title'] = t
            audio.save()

if __name__ == '__main__':
    album = sys.argv[1]
    fix(album)

Syndicated 2011-07-07 13:14:00 (Updated 2011-07-07 13:14:10) from Dan Connolly

Trying to replace delicious, pinboard.in, and catch with diigo

I keep trying out one more cloud-based task/time/knowledge management tool, hoping it will replace several of my too many others. While browsing around the Chrome store looking for tools that sync with android, I discovered diigo. The highlight feature is really slick! I've been hoping for that feature as far back as the Amaya papers and talks from 2000. Plus, it does bookmarking and note taking. But it's not as smooth as I'd like. I wonder if that's inherent in the attempt to do so many things.

A pleasant surprise from diigo: the chrome search bar

Chrome merged the address bar and the search field a while ago. The diigo chrome extension notifies you when you search for things that match items in your library, so you don't have to build a new habit.

Why diigo hasn't replaced pinboard for bookmarking, twitter archiving

The original delicious bookmarklet clearly hit the sweet spot for bookmarking:
  1. Hitting the bookmarklet brings up a little pop-up with the URL and title filled in for you
  2. add your own note... maybe a particularly interesting quote/excerpt (optional)
  3. add some tags
  4. Hit enter/save and you're back to your web page, with the pleasant feeling that your bookmark is stored safely in the cloud (and you can get it back via their export service and/or API)
There were some lightweight features that improved the experience: auto-complete of tags and auto-suggested tags from the crowd. Then the features started getting heavy, pushing response times past the critical threshold, and on a tip from Gerald, I started migrating my delicious bookmarks to pinboard.in. (This was long before "the vice president of bad decisions at yahoo" threw in the towel.)
The diigo bookmarklet has two critical problems:
  • It takes over the whole page (and takes too much time doing so). So you can't consult the page as you add your notes.
  • When you hit save, it takes you to your library rather than back to the page you were on.
It was the speed of pinboard that convinced me to switch from delicious, not so much the "anti-social" aspects; I did enjoy the collaborative aspects of delicious, until they went overboard and made it too painful to search my own bookmarks. I was surprised to see so much of my community using twitter for link sharing: how do they ever find the bookmarks they made?! Twitter has the attention span of a gnat; it has no interest in helping you find a bookmark you made 2 years ago. Pinboard solved that problem by adding comprehensive twitter archiving to their snappy search offering.
Diigo has a twitter archive feature, but
  • It archives only favorites, not tweets I wrote, unless I pay a monthly fee. (Pinboard isn't free, but the fee is one-time.)
  • It loses critical context, i.e. who wrote the tweet.
  • It lumps tweets in with notes I wrote in places like their Quick Notes chrome application
That brings me to the goal of using diigo for task management.

Why Diigo hasn't replaced Catch for gtd-style collecting

Catch supports gtd-style collecting and processing really well:
  1. With their android widget or shortcut, touch to start adding a note.
  2. Type a few words to capture what's on my mind... or more often: hit the speech input button and say a few words.
  3. Hit save, knowing catch will sync with the cloud momentarily.
I do most of my processing via catch's web interface, when I have the full bandwidth of a big screen, keyboard, and fast network. But sometimes when I have some time to kill, I use the catch android app to process notes.
I hope the diigo Powernotes android app gets there. Both catch and diigo let me log in using my google apps accounts, but:
  • Early releases required manual sync, which completely defeated the purpose of getting things off my mind, since I had to think about whether I had sync'd or not. I'm glad that's fixed.
  • Catch has "pin note to homescreen," which is handy for journaling; PowerNote doesn't seem to have anything like that. "Pin list to home screen" would be handy.
  • Saving a note without a title fails silently. This is particularly painful since the speech-to-text note taking feature defaults to an empty title. Throwing away the knowledge I just entrusted to it is pretty much the unforgivable sin for a knowledge management app. The feedback feature is really simple and the developer acknowledged my feedback right away, though, so perhaps I'll give it another chance. 
  • I can't find an easy way to list all (and only) the thoughts I collected. It supports filing notes into lists, and one of the options is "Recent notes," but that's a tease: there is no "Recent notes" when I go to view my lists. Diigo bookmarking supports the "read later" bit a la pinboard, but I don't see how to set that bit on notes. It would be handy to have a unified "read later" collection of notes/bookmarks/highlights.

Diigo for shopping? What was I thinking?

I sure wish Amazon helped me record why I'm adding to my wishlist, e.g. who recommended it, which features or review comments I'm particularly interested in. I can annotate items if I switch to viewing the whole list, but the first thing Amazon does after I hit "add to wishlist" is distract me from recording what's on my mind with offers for other products. So I did a little research on home theater systems using diigo. But while shopping does involve research, there's really a lot more to it, and Amazon is a huge machine finely tuned to help with the whole process. Amazon's universal wishlist button helps some. Besides, as we learn from gtd, the most important thing to do after capturing a thought is to put it in context where you will next act on it. And for online shopping, that place is Amazon more often than not.

Diigo community and tools

The diigo community and development team appeals to the hacker, the researcher, and the closet-librarian in me. I haven't found many familiar names/faces in the diigo community yet. The business model (freemium, with a focus on the education market) seems sensible to me, but I don't have much confidence in my ability to pick viable web businesses. (I've been involved in the web pretty much since it started; I wonder if I'd be ahead or behind if I'd invested in the web businesses I liked when I learned about them...)  With a new owner for delicious, it may be time to take another look. The delicious crowd is large enough to display some wisdom in, for example, finding interesting new python programming resources. And I once discovered that a colleague subscribed to my family movie bookmarks.
Diigo says they support the same export format as delicious, but I don't see how I can get all my data back that way, since delicious has no concept of highlighting nor lists. I see a mention of annotations in the diigo API; perhaps all the structure is captured there.

Syndicated 2011-05-16 18:32:00 from Dan Connolly
