Older blog entries for mikal (starting at number 893)

The Hunger Games

ISBN: 9780439023528
LibraryThing
I picked this up in the US really cheap because I had run out of books to read on this trip. This book is pretty heavily hyped at the moment, but that's also why I got it for $6 at a book store, so I can't complain. The book is an easy read, and fun. It's obviously aimed at teenagers, but I don't mind teen fiction as a genre and I read this book in a little over a day. The storyline is similar to The Survivor in Battlefields Beyond Tomorrow, but is distinct enough to not be plagiarism. I enjoyed this book.

Tags for this post: book suzanne_collins combat hunting post_apocalypse

Syndicated 2012-04-16 14:04:00 from stillhq.com

The Android's Dream

ISBN: 9780765348289
LibraryThing
This is a Scalzi book, so it's clever and funny, and has possibly one of the best first sentences I have ever read. It is a light read, and I finished all but the last 50 pages or so on a single flight. Scalzi also plays again with the idea of transferring consciousness, something he deals with a lot in the Old Man's War series. I liked this book.

Tags for this post: book john_scalzi genetic_engineering aliens

Syndicated 2012-04-14 13:48:00 from stillhq.com

Logos Run

ISBN: 0441015360
LibraryThing
This is the continuation of Runner, and continues the story of the attempt to re-enable the star gates. It once again features the comically incompetent Technosociety, as well as a series of genetically engineered protagonists. I am bothered by why the star gate power supplies cause people to fall ill -- you'd think a highly advanced society capable of building star gates might have spent some time on shielding. Or did the shielding somehow fail on all the power sources over the thousands of years of decay? The book has a disappointing ending, but was a fun read until then. I find it hard to suspend disbelief about how the AIs present themselves, but apart from that the book was solid. This one is probably not as good as the first.

Tags for this post: book william_c_dietz religion combat space_travel decay courier engineered_human genetic_engineering runner_series

Syndicated 2012-04-14 13:45:00 from stillhq.com


Folsom Dev Summit sessions

I thought I should write up the dev summit sessions I am hosting now that the program is starting to look solid. This is mostly for my own benefit, so I have a solid understanding of where to start these sessions off. Both are short brainstorm sessions, so I am not intending to produce slide decks or anything like that. I just want to make sure there is something to kick discussion off.

Image caching, where to from here (nova hypervisors)

As of Essex, libvirt has an image cache to speed the startup of new instances. This cache stores images fetched directly from glance, as well as resized versions of those images. There is a periodic task which cleans up images in the cache which are no longer needed. The periodic task can also optionally detect images which have become corrupted on disk.
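
To make the session concrete, here is a minimal sketch of the shape of that cleanup logic. This is illustrative only and not the actual nova code -- the class name, the helper names, and the checksum-beside-the-image scheme are all assumptions of mine:

    import hashlib
    import os
    import time

    class ImageCacheCleaner(object):
        """Illustrative periodic cleanup of a node-local image cache."""

        def __init__(self, base_dir, max_age_seconds=24 * 3600):
            self.base_dir = base_dir
            self.max_age_seconds = max_age_seconds

        def run_periodic_task(self, image_ids_in_use, verify_checksums=False):
            for name in os.listdir(self.base_dir):
                path = os.path.join(self.base_dir, name)
                if name in image_ids_in_use:
                    # In-use images are kept, and optionally verified.
                    if verify_checksums and not self._checksum_ok(path):
                        print('warning: %s failed checksum verification' % path)
                    continue
                # Only remove unused images once they have sat idle for a
                # while, so an image fetched for an instance which is about
                # to start isn't deleted out from underneath it.
                if time.time() - os.path.getmtime(path) > self.max_age_seconds:
                    os.remove(path)

        def _checksum_ok(self, path):
            # Assumes a sha1 checksum was written beside the image when it
            # was first cached.
            checksum_path = path + '.sha1'
            if not os.path.exists(checksum_path):
                return True
            with open(checksum_path) as f:
                expected = f.read().strip()
            digest = hashlib.sha1()
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(65536), b''):
                    digest.update(chunk)
            return digest.hexdigest() == expected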

So first off, do we want to implement this for other hypervisors as well? As mentioned in a recent blog post, I'd like to see the image cache manager become common code, with all the hypervisors dealing with caching in exactly the same manner -- that makes it easier to document, and means that on-call operations people don't need to determine which hypervisor a compute node is running before starting to debug. However, that requires the other hypervisor implementations to change how they stage images for instance startup, and I think it bears further discussion.

Additionally, the blueprint (https://blueprints.launchpad.net/nova/+spec/nova-image-cache-management) proposed that popular or strategic images could be pre-cached on compute nodes. Is this something we still want to do? What factors do we want to use for the reference implementation? I have a few ideas here that are listed in the blueprint, but most of them require talking to glance to implement. There is some hesitance about adding glance calls to a periodic task, because in a Keystone-enabled implementation that would require an admin token in the nova configuration file. Is there a better way to do this, or is it OK to rely on glance in a periodic task?

Ops pain points (nova other)

Apart from my own ideas (better instance logging, for example), I'm very interested in hearing from other people about what we can do to make nova easier for ops people to run. This is especially true for relatively easy to implement things we can get done in Folsom. The blueprint for deployer-friendly configuration files is a good example of a change which doesn't look too hard to implement, but would make the world a better place for opsen. There are many other examples of blueprints in this space.

What else can we be doing to make life better for opsen? I'm especially interested in getting people who actually run OpenStack in the wild into the room to tell us what is painful for them at the moment.

Tags for this post: openstack canonical folsom image_cache_management sre

Syndicated 2012-04-10 16:12:00 (Updated 2012-04-11 01:10:03) from stillhq.com

Reflecting on Essex

This post is kind of long, and a little self-indulgent. However, I really wanted to spend some time thinking about what I did for the Essex release cycle, and what I want to do for the Folsom release. I spent Essex mostly hacking on things in isolation, except for when Padraig Brady and I were hacking in a similar space. I'd like to collaborate more for Folsom, and I'm hoping that talking in public about what I'm interested in doing might help with that.

I came relatively late to the Essex development cycle, having never even heard of OpenStack before joining Canonical. We can talk some other time about how I'd worked in the cloud space for six years and yet wasn't aware of the open source implementations.

My initial introduction to OpenStack was being paged for compute nodes which were continually running out of disk. I googled around a bit and discovered that cached images for instances were never cleaned up: to start an instance, an image is fetched from glance, possibly has its format converted, is resized, and then an instance is started with the resulting image -- and none of those images were ever cleaned up. I filed bug 904532 as my absolute first interaction with the OpenStack community. Scott Moser kindly pointed me at the blueprint for how to actually fix the problem.

(Remind me, if Phil Day comes to the OpenStack developer summit, that I should sit down with him at some point and see how close what was actually implemented got to what he wrote in that blueprint. I suspect we've still got a fair way to go, but I'll talk more about that later in this post.)

This was a pivotal moment. I'd just spent the last six years writing Python code to manage largish cloud clusters, and here was a bug which was hurting me in a Python package intended to manage clusters very similar to those I had been running. I should just fix the bug, right?

It turns out that the OpenStack core developers are super easy to work with. I'd say that the code review process certainly feels like it was modelled on Google's, but in general the code reviewers are nicer with their comments than what I'm used to. This makes it much easier to motivate yourself to go and spend some more time hacking than a deeply negative review would. I think Vish is especially worthy of a shout out as being an amazing person to work with. He's helpful, patient, and very smart.

In the end I wrote the image cache manager which ships in Essex. It's not perfect, but it's a lot better than what came before, and it's a good basis to build on. There is some remaining tech debt for image cache management which I intend to work on for Folsom. First off, the image cache only works for libvirt instances at the moment. I'd like to pull the other hypervisors into line as much as possible. There are hooks in the virtualization driver for this, but no one has started this work as far as I am aware. To be completely honest, I'd like to see the image cache manager become common code, with all the hypervisors dealing with caching in exactly the same manner -- that makes it easier to document, and means that on-call operations people don't need to determine which hypervisor a compute node is running before starting to debug. This is something I very much want to sit down with other nova developers and talk about at the summit.
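
To give that conversation something to poke at, here is one possible shape for those common hooks. This is purely a straw man from me, not code which exists in the tree:

    import abc

    class ImageCacheHooks(object):
        """Straw man: the hooks each hypervisor driver would implement so
        that one shared image cache manager can drive every hypervisor."""

        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def list_cached_images(self):
            """Return the image identifiers currently staged on this node."""

        @abc.abstractmethod
        def list_images_in_use(self):
            """Return the image identifiers referenced by running instances."""

        @abc.abstractmethod
        def remove_cached_image(self, image_id):
            """Evict a staged image the shared manager decided is unneeded."""

The shared manager would then own all the policy -- ageing, checksum verification, pre-caching -- and the drivers would only know how to stage and unstage images.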

The next step for image cache management is tracked in a very bare bones blueprint. The original blueprint envisaged that it would be desirable to pre-cache some images on all nodes. For example, a cloud host might want to offer slightly faster startup times for some images by ensuring they are pre-cached. I've been thinking about this a lot, and I can see other use cases here as well. For example, if you have mission critical instances and you wanted to tolerate a glance failure, then perhaps you want to pre-cache a class of images that serve those mission critical instances. The intention is to provide an interface and default implementation for the pre-caching logic, and then let users go wild working out their own requirements.
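
As a straw man for that interface, with every name here being hypothetical:

    class PrecachePolicy(object):
        """Hypothetical plug point deciding which images a node should hold."""

        def images_to_precache(self):
            """Return a list of glance image ids to keep warm on this node."""
            raise NotImplementedError()

    class PopularImagesPolicy(PrecachePolicy):
        """Example default: keep the N most frequently booted images."""

        def __init__(self, boot_counts, top_n=5):
            self.boot_counts = boot_counts  # dict of image id -> boot count
            self.top_n = top_n

        def images_to_precache(self):
            ranked = sorted(self.boot_counts.items(),
                            key=lambda item: item[1], reverse=True)
            return [image_id for image_id, _ in ranked[:self.top_n]]

A deployer who wanted to tolerate a glance outage could then plug in a policy which returns the image ids backing their mission critical instances instead.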

The hardest bit of the pre-caching will be reducing the interactions with glance, I suspect. The current feeling is that calling glance from a periodic task is a bit scary, and has been actively avoided for Essex. This is especially true if Keystone is enabled, as the periodic task won't have an admin context unless we pull that from the config file. However, if you're trying to determine which images are mission critical, then you really need to talk to glance. I guess another option would be to have a table of such things in nova's database, but that feels wrong to me. We're going to have to talk about this a bit more.
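
For illustration, the option people are wary of looks roughly like this. Every option and method name below is made up; the point is the long-lived admin secret sitting in a config file on every compute node:

    def admin_context_from_config(conf):
        # Hypothetical: build glance credentials from nova's config file.
        # This is exactly the part which makes people nervous -- a
        # long-lived admin secret deployed to every compute node.
        return {'user': conf.get('glance_admin_user'),
                'tenant': conf.get('glance_admin_tenant'),
                'token': conf.get('glance_admin_token')}

    def mission_critical_image_ids(glance_client, conf):
        # With Keystone enabled this glance call fails without the admin
        # credentials above; without them the periodic task has no way to
        # ask which images matter most.
        context = admin_context_from_config(conf)
        return [image['id']
                for image in glance_client.list_images(context)
                if image.get('properties', {}).get('mission_critical')]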

(It would also be interesting to talk about the relative priority of instances. If a cluster is experiencing outages, then perhaps some customers would pay more to have their instances be the last killed off. Or perhaps I have instances which are less critical than others, so I want the cluster to degrade in an understood manner.)

That leads logically onto a scheduler change I would like to see. If I have a set of compute nodes I know already have the image for a given instance, shouldn't I prefer to start instances on those nodes instead of fetching the image to yet more compute nodes? In fact, if I already have a correctly resized COW base image for an instance on a given node, then it would make sense to run a new instance on that node as well. We need to be careful here, because you wouldn't want to run all of a given class of instance on a small set of compute nodes, but if the image was something like a default Ubuntu image, then it would make sense. I'd be interested in hearing what other people think of doing something like this.
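
Here's a sketch of the scoring idea. This is not the nova scheduler API, and the cache maps are state the compute nodes would somehow need to report back to the scheduler:

    def weigh_host_for_image(host, image_id, cached_images, resized_bases,
                             instances_per_image):
        """Illustrative placement score for one host; higher is better.

        cached_images and resized_bases map a host name to a set of image
        ids; instances_per_image maps (host, image_id) to a count.
        """
        score = 0.0
        if image_id in resized_bases.get(host, set()):
            score += 2.0  # a correctly resized COW base is the best case
        elif image_id in cached_images.get(host, set()):
            score += 1.0  # the raw image is cached but still needs resizing
        # Penalise piling every instance of one image class onto a few
        # nodes, so a single node failure doesn't take them all out.
        score -= 0.1 * instances_per_image.get((host, image_id), 0)
        return score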

Another thing I've tried to focus on for Essex is making OpenStack easier for operators to run. That started off relatively simply, by adding an option for log messages to specify which instance a message relates to. This means that when a user queries the state of their instance, the admin can now just grep for the instance UUID and go from there. It's not perfect yet, in that not all messages use this functionality, but that's some tech debt that I will take on in Folsom. If you're a nova developer, then please pass instance= in your log messages where relevant!
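
The pattern is simply to hand the instance to the logging call. This is shown with the Essex-era module layout, so adjust for your tree:

    from nova import log as logging

    LOG = logging.getLogger(__name__)

    def do_something(context, instance):
        # Passing instance= makes the log formatter prefix the message
        # with "[instance: <uuid>] ", which is what admins grep for.
        LOG.info("Doing something to instance", instance=instance)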

This logging functionality isn't perfect, because if you only have the instance UUID in the method you're writing, it won't work -- it expects full instance dicts because of the way the formatting code works. This is kind of ironic, in that the default logging format only includes the UUID. In Folsom I'll extend this code so that the right thing happens with bare UUIDs as well.

Another simple logging tweak I wrote is that tracebacks now have the time and instance included in them. This makes it much easier for admins to determine the context of a traceback in their logs. It should be noted that both of these changes were relatively trivial, but trivial things can often make it much easier for others.

There are two sessions at the Folsom dev summit talking about how to make OpenStack easier for operators to run. One is from me, and the other is from Duncan McGreggor. Neither has been accepted yet, but if I notice that Duncan's is accepted I'll drop mine. I'm very, very interested in what operations staff feel is currently painful, because having something which is easy to scale and manage is vital to adoption. This is also the core of what I did at Google, and I feel I can make a real contribution here.

I know I've come relatively late to the OpenStack party, but there's heaps more to do here and I'm super enthused to be working on code that I can finally show people again.

Tags for this post: openstack canonical essex folsom image_cache_management sre

Syndicated 2012-04-05 18:19:00 from stillhq.com

Call for papers opens soon

It's time to start thinking about your talk proposals, because the call for papers is only eight weeks away!

For the 2013 conference, the papers committee are going to be focusing on deep technical content, and things we think are going to really matter in the future -- that might range from freedom and privacy, to open source cloud systems, or energy-efficient server farms of the future. However, the conference is to a large extent what the speakers make it -- if we receive many excellent submissions on a topic, then it's sure to be represented at the conference.

The papers committee will be headed by the able combination of Michael Davies and Mary Gardiner, who have done an excellent job in previous years. They're currently working through the details of the call for papers announcement. I am telling you this now because I want speakers to have plenty of time to prepare for the submissions process, as I think that will produce the highest quality of submissions.

I also wanted to let you know the organising for linux.conf.au 2013 is progressing well. We're currently in the process of locking in all of our venue arrangements, so we will have some announcements about that soon. We've received our first venue contract to sign, which is for the keynote venue. It's exciting, but at the same time a good reminder that the conference is a big responsibility.

What would you like to see at the conference? I am sure there are things which are topical which I haven't thought of. Blog or tweet your thoughts (include the hashtag #lca2013 please), or email us at contact@lca2013.linux.org.au.

Tags for this post: conference lca2013 cfp canonical

Syndicated 2012-04-02 20:45:00 from stillhq.com

Memorial service details

This is what will be published in the paper on Wednesday this week:

Robyn Barbara Boland
24 April 1948 - 30 March 2012

Dearly loved and cherished mother of
Catherine and Michael, Emily and Justin
Jonathan and Lynley, and Allister.
Proud Ma of Andrew and Matthew.

Robyn took Jesus' hand and
walked peacefully
into her Heavenly Father's arms.
She was a friend to all who met her.
Robyn will be deeply missed.


A celebration of Robyn's life will be held
at Woden Valley Alliance Church,
81 Namatjira Drive, Waramanga on
Tuesday, 10 April 2012 commencing at 1pm.


Tags for this post: health robyn liver funeral

Syndicated 2012-04-02 00:41:00 from stillhq.com

Update on Robyn from Catherine

I apologize if there are factual inaccuracies in this post. It has been written with the best information I have available at the time.


Cat sent this update out to robyn-discuss last night, but I am reposting it here for those who aren't on the mailing list.


Weekend update

I apologize if there are factual inaccuracies in this post. It has been written with the best information I have available at the time.


Robyn's blood test results are showing a slight decline in both liver and kidney function. She was awake for slightly longer periods this morning, but was back to being really sleepy this afternoon. Based on her ability to stay awake this morning, they were talking about removing her breathing tube.

Robyn is still breathing on her own, but they want the breathing tube to stay in place until she is more conscious and able to be roused. Disappointingly, this afternoon she was back to being mostly unresponsive. Overall her condition is stable, but she has a long way to go.

Tags for this post: health robyn liver sydney

Syndicated 2012-03-25 02:59:00 (Updated 2012-03-25 11:08:32) from stillhq.com
