4 Nov 2013 mikal   » (Journeyer)

Comparing alembic with sqlalchemy migrate

In the last few days there has been a discussion on the openstack-dev mailing list about converting nova to alembic. Nova currently uses sqlalchemy migrate for its schema migrations. I would consider myself a sceptic of this change, but I want to be a well educated sceptic so I thought I should take a look at an existing alembic user, in this case neutron. There is also at least one session on database changes at the Icehouse summit this coming week, and I wanted to feel prepared for those conversations.

I should start off by saying that I'm not particularly opposed to alembic. We definitely have problems with migrate, but I am not sure that these problems are addressed by alembic in the way that we'd hope. I think we need to dig deeper into the issues we face with migrate to understand if alembic is a good choice.

sqlalchemy migrate

There are two problems with migrate that I see us suffering from at the moment. The first is that migrate is no longer maintained by upstream. I can see why this is bad, although there are other nova dependencies that the OpenStack team maintains internally. For example, the various oslo libraries and the oslo incubator. I understand that reducing the amount of code we maintain is good, but migrate is stable and relatively static. Any changes made will be fixes for security issues or feature changes that the OpenStack project wants. This relative stability means that we're unlikely to see gate breakages because of unexpected upstream changes. It also means that when we want to change how migrate works for our convenience, we don't need to spend time selling upstream on that change.

The other problem I see is that it's really fiddly to land database migrations in nova at the moment. Migrations are a linear stream through time, implemented in the form of a sequential number. So, if the current schema version is 183, then my new migration would be implemented by adding the following files to the git repository:

    184_implement_funky_feature.py
    184_sqlite_downgrade.sql
    184_sqlite_upgrade.sql
    


In this example, the migration is called "implement_funky_feature", and needs custom sqlite upgrades and downgrades. Those sqlite-specific files are optional.
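For the curious, the version number in this naming scheme is simply the leading digits of the filename. A minimal sketch, assuming a pattern close to migrate's FILENAME_WITH_VERSION regex (the exact pattern here is an approximation, not migrate's actual code):

```python
import re

# Approximation of how migrate derives the version number from the file
# naming scheme: leading digits, an underscore, then the migration name.
FILENAME_WITH_VERSION = re.compile(r'^(\d+)_(\S+)\.py$')

match = FILENAME_WITH_VERSION.match('184_implement_funky_feature.py')
assert match is not None
print(int(match.group(1)), match.group(2))
```

This is also exactly why competing patches collide: two different migrations can't both be file 184.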

Now the big problem here is that if there is more than one patch competing for the next migration number (which is quite common), then only one patch can win. The others need to manually rebase by renaming these files and then re-attempt the code review process. This is very annoying, especially because migration numbers are baked into our various migration tests.

    "Each" migration also has migration tests, which reside in nova/tests/db/test_migrations.py. I say each in quotes because we haven't been fantastic about actually adding tests for all our migrations, so that is imperfect at best. When you miss out on a migration number, you also need to update your migration tests to have the new version number in them.

If we ignore alembic for a moment, I think we can address this issue within migrate relatively easily. The biggest problem at the moment is that migration numbers are derived from the file naming scheme. If instead they came from a configuration file, then changing the migration number for your patch would be a one line change in a configuration file, instead of a series of file renames and changes to tests. Consider a configuration file which looks like this:

      mikal@e7240:~/src/openstack/nova/nova/db/sqlalchemy/migrate_repo/versions$ cat versions.json | head
      {
          "133": [
              "folsom.py"
          ], 
          "134": [
              "add_counters_to_bw_usage_cache.py"
          ], 
          "135": [
              "add_node_to_instances.py"
          ], 
      ...
      


Here, the only place the version number appears is in this versions.json configuration file. For each version, you just list the files present for the migration. In each of the cases here it's just the Python migration, but it could just as easily include sqlite-specific migrations in the array of filenames.
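Parsing that layout is simple enough to sketch in a few lines. The loader below is illustrative only (not migrate's actual code), and assumes the json layout shown above:

```python
import json
import os
import tempfile

# Sketch of building the version map from versions.json instead of the
# file naming scheme. Write out a small example file first so the sketch
# is self-contained.
repo = tempfile.mkdtemp()
with open(os.path.join(repo, 'versions.json'), 'w') as f:
    json.dump({'134': ['add_counters_to_bw_usage_cache.py'],
               '135': ['add_node_to_instances.py']}, f)

# Load it back: keys become integer version numbers, values are the list
# of files which make up that migration.
with open(os.path.join(repo, 'versions.json')) as f:
    versions = dict((int(num), files) for num, files in json.load(f).items())

# Renumbering a migration is now a one line edit to this file, with no
# file renames required.
print(sorted(versions.items()))
```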

Then we just need a very simple change to migrate to prefer the config file if it is present:

    diff --git a/migrate/versioning/version.py b/migrate/versioning/version.py
    index d5a5be9..cee1e66 100644
    --- a/migrate/versioning/version.py
    +++ b/migrate/versioning/version.py
    @@ -61,22 +61,31 @@ class Collection(pathed.Pathed):
             """
             super(Collection, self).__init__(path)

    -        # Create temporary list of files, allowing skipped version numbers.
    -        files = os.listdir(path)
    -        if '1' in files:
    -            # deprecation
    -            raise Exception('It looks like you have a repository in the old '
    -                            'format (with directories for each version). '
    -                            'Please convert repository before proceeding.')
    -
    -        tempVersions = dict()
    -        for filename in files:
    -            match = self.FILENAME_WITH_VERSION.match(filename)
    -            if match:
    -                num = int(match.group(1))
    -                tempVersions.setdefault(num, []).append(filename)
    -            else:
    -                pass  # Must be a helper file or something, let's ignore it.
    +        # NOTE(mikal): If there is a versions.json file, use that instead of
    +        # filesystem numbering
    +        json_path = os.path.join(path, 'versions.json')
    +        if os.path.exists(json_path):
    +            with open(json_path) as f:
    +                tempVersions = json.loads(f.read())
    +
    +        else:
    +            # Create temporary list of files, allowing skipped version numbers.
    +            files = os.listdir(path)
    +            if '1' in files:
    +                # deprecation
    +                raise Exception('It looks like you have a repository in the '
    +                                'old format (with directories for each '
    +                                'version). Please convert repository before '
    +                                'proceeding.')
    +
    +            tempVersions = dict()
    +            for filename in files:
    +                match = self.FILENAME_WITH_VERSION.match(filename)
    +                if match:
    +                    num = int(match.group(1))
    +                    tempVersions.setdefault(num, []).append(filename)
    +                else:
    +                    pass  # Must be a helper file or something, let's ignore it.

             # Create the versions member where the keys
             # are VerNum's and the values are Version's.


There are some tweaks required to test_migrations.py as well, but they are equally trivial. As an aside, I wonder what people think about moving the migration tests out of the test tree and into the versions directory so that they are beside the migrations. This would make it clearer which migrations lack tests, and would reduce the length of test_migrations.py, which is starting to get out of hand at 3,478 lines.

There's one last thing I want to say about migrate migrations before I move on to discussing alembic. One of the features of migrate is that schema migrations are linear, which I think is a feature not a limitation. In the Havana (and presumably Icehouse) releases there has been significant effort from Mirantis and Rackspace Australia to fix bugs in nova's database migrations. To be frank, we do a poor job of having reliable migrations, even in the relatively simple world of linear migrations. I strongly feel we'd do an even worse job if we had non-linear migrations, and I think we need to require that all migrations be sequential as a matter of policy. Perhaps one day when we're better at writing migrations we can vary that, but I don't think we're ready for it yet.

Alembic

An example of an existing user of alembic in OpenStack is neutron, so I took a look at their code to work out what migrations in nova using alembic might look like. Here's the workflow for adding a new migration:

Start by reading neutron/db/migration/README. The process involves more tools than nova developers will be used to; it's not a simple case of adding a manually written file to the migrations directory. You need access to the neutron-db-manage tool to write a migration, so first set up neutron.

Just as an aside, the first time I tried to write this blog post I was on an aeroplane, with no network connectivity. It is frustrating that writing a new database migration requires network connectivity if you don't already have the neutron tools set up in your development environment. Even more annoyingly, you need a working neutron configuration in order to be able to add a new migration, which slowed me down a fair bit when I was trying this out. In the end it seems the most expedient way to do this is just to run up a devstack with neutron configured.

Now we can add a new migration:

        $ neutron-db-manage --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
        revision -m "funky new database migration" \
        --autogenerate
        No handlers could be found for logger "neutron.common.legacy"
        INFO  [alembic.migration] Context impl MySQLImpl.
        INFO  [alembic.migration] Will assume non-transactional DDL.
        INFO  [alembic.autogenerate] Detected removed table u'arista_provisioned_tenants'
        INFO  [alembic.autogenerate] Detected removed table u'ml2_vxlan_allocations'
        INFO  [alembic.autogenerate] Detected removed table u'cisco_ml2_nexusport_bindings'
        INFO  [alembic.autogenerate] Detected removed table u'ml2_vxlan_endpoints'
        INFO  [alembic.autogenerate] Detected removed table u'arista_provisioned_vms'
        INFO  [alembic.autogenerate] Detected removed table u'ml2_flat_allocations'
        INFO  [alembic.autogenerate] Detected removed table u'routes'
        INFO  [alembic.autogenerate] Detected removed table u'cisco_ml2_credentials'
        INFO  [alembic.autogenerate] Detected removed table u'ml2_gre_allocations'
        INFO  [alembic.autogenerate] Detected removed table u'ml2_vlan_allocations'
        INFO  [alembic.autogenerate] Detected removed table u'servicedefinitions'
        INFO  [alembic.autogenerate] Detected removed table u'servicetypes'
        INFO  [alembic.autogenerate] Detected removed table u'arista_provisioned_nets'
        INFO  [alembic.autogenerate] Detected removed table u'ml2_gre_endpoints'
          Generating /home/mikal/src/openstack/neutron/neutron/db/migration/alembic_migrations/
        versions/297033515e04_funky_new_database_m.py...done
        


This command has allocated us a migration id, in this case 297033515e04. Interestingly, the template migration drops all of the tables for the ml2 driver, which is a curious choice of default.

There are a number of headers in the migration Python file which you need to know about:

          """funky new database migration
          
          Revision ID: 297033515e04
          Revises: havana
          Create Date: 2013-11-04 17:12:31.692133
          
          """
          
          # revision identifiers, used by Alembic.
          revision = '297033515e04'
          down_revision = 'havana'
          
          # Change to ['*'] if this migration applies to all plugins
          
          migration_for_plugins = [
              'neutron.plugins.ml2.plugin.Ml2Plugin'
          ]
          


The developer README then says that you can check that your migration is linear with this command:

            $ neutron-db-manage --config-file /etc/neutron/neutron.conf \
            --config-file /etc/neutron/plugins/ml2/ml2_conf.ini check_migration
            


In my case it is fine because I'm awesome. However, it is also a little worrying that you need a tool to hold your hand to verify this, because it's too hard to read through the migrations and verify it yourself.
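For what it's worth, the check itself is conceptually simple. A sketch of what a check_migration-style linearity test has to do, with the revision ids invented apart from the ones shown earlier in this post:

```python
# Sketch of a linearity check over alembic-style revision headers: follow
# each migration's down_revision pointer and fail if any revision has more
# than one successor. This is an illustration of the idea, not neutron's
# actual check_migration code.
def is_linear(revisions):
    # revisions maps revision id -> down_revision id (None for the root).
    successors = {}
    for rev, down in revisions.items():
        if down is not None:
            successors.setdefault(down, []).append(rev)
    # Linear means every revision has at most one successor.
    return all(len(children) == 1 for children in successors.values())

linear = {'havana': None, '297033515e04': 'havana'}
branched = dict(linear, deadbeef0000='havana')  # two children of 'havana'
print(is_linear(linear), is_linear(branched))
```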

So how does alembic address the concerns we have with the nova database migrations? Well, alembic is maintained by an upstream other than the OpenStack developers, so it addresses that concern. I should also say that alembic is obviously already in use by other OpenStack projects, so I think it would be a big ask to move to something other than alembic.

Alembic does allow linear migrations as well, but linearity is not enforced by the tool itself (in other words, non-linear migrations are supported by the tooling). That means developers need another layer of checking in order to maintain a linear migration stream, and I worry that will introduce another area in which we can make errors and accidentally end up with non-linear migrations. In fact, in the case of multiple patches competing to be next in line, alembic is worse, because the headers in the migration file need to be updated to ensure that linear migrations are maintained.

Conclusion

I'm still not convinced alembic is a good choice for nova, but I look forward to a lively discussion at the design summit about this.

Tags for this post: openstack icehouse migrate alembic db migrations
Related posts: Exploring a single database migration; On Continuous Integration testing for Nova DB


Syndicated 2013-11-03 22:52:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)
