skvidal is currently certified at Master level.

Name: Seth Vidal
Member since: 1999-11-09
Last Login: 2007-06-15 23:20:27


Homepage: http://blog.sethdot.org

Projects

Articles Posted by skvidal

  • Colors 9 Nov 1999 at 08:00 UTC

Recent blog entries by skvidal


openstack name changes

dear #openstack people.

I just read

http://osdir.com/featured/openstack-cloud-computing

From now on you will stop it with the cutesy naming.

the network bits will be called ‘network’
the compute bits will be called ‘compute’
the block storage will be called ‘blockstore’
the object store will be called ‘objectstore’
the authn/z bits will be called ‘authentication’
the image storage will be called ‘imagestore’

If there are other major components you need, they will be named precisely based on what they are.
If you rev those pieces in major ways you will just iterate the major version number.

If you cannot cope with these rules someone is going to drop heavy things near your toes.

You have used up all your name change turns. You are done.


Syndicated 2013-06-19 20:50:21 from journal/notes

ansible as infrastructure-wide cron

A discussion last week made me think of the following:
Ansible as a mechanism to provide network/infrastructure-wide cron.

A couple of systems that do major administrative tasks could have an infra-cron file like:

01 04 * * * root run_system_wide_task
0 01 * * Sun root trigger_client_backups

Now, I’m sure lots of you are saying ‘yes, that’s cron, you don’t need another one’ – but with ansible you could have an orchestrated cron. A cron that properly says ‘wait for the previous task to finish before you launch this other one’, or a cron that handles contingencies better when some of your systems are offline or disconnected.
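A minimal sketch of what that could look like (a hypothetical playbook – the group names and script paths are all made up), driven by a single ordinary crontab entry on one control host:

```yaml
# /etc/ansible/infra-cron.yml
# driven by one plain crontab entry on the control host:
#   01 04 * * * root ansible-playbook /etc/ansible/infra-cron.yml

- hosts: admin_servers
  serial: 1          # one host at a time – each run waits for the previous one
  tasks:
    - name: run the big administrative task
      command: /usr/local/bin/run_system_wide_task

# this play only starts once the play above has finished everywhere,
# which is the orchestration plain per-host cron cannot give you
- hosts: backup_clients
  tasks:
    - name: trigger client backups
      command: /usr/local/bin/trigger_client_backups
```

Unreachable hosts just get reported as failed at the end of the run instead of silently missing their cron window.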

I don’t have any code for this but I wanted to toss it out as a potentially odd idea that maybe someone would love.


Syndicated 2013-06-11 03:38:33 from journal/notes

documenting for posterity – ansible – wait for a dir to exist before continuing

Got a ridiculous process **cough**Jenkins**cough** that you have to wait on to create a dir before doing things?

This might help you, as godawful ugly as it is.

- name: wait for a dir to exist - this is just ugly
  shell: while true; do [ -d /var/lib/jenkins/plugins/openid/WEB-INF/lib/ ] && break; sleep 5; done
  async: 1800
  poll: 20
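For the record, ansible’s stock wait_for module can express the same wait without the shell loop – a sketch, assuming a version of ansible where wait_for accepts a path= argument:

```yaml
- name: wait for the jenkins plugin dir to exist
  wait_for: path=/var/lib/jenkins/plugins/openid/WEB-INF/lib/ timeout=1800
```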


Syndicated 2013-05-23 21:07:15 from journal/notes

sorting srpms by buildorder

Hey folks,
Working on something for Spot, I revived some code I had written a
few years ago and then discovered that other people had written much
more robust leveled topological sorts than I had :)

Anyway – if you grab the files from:

http://skvidal.fedorapeople.org/misc/buildorder/

And run:

python buildorder.py /path/to/*.src.rpm

it will look up the interdependencies of the src.rpm to figure out a
build order. It outputs a bunch of different things:
1. a flat build order
2. a build order broken out by groups – you can build all the pkgs in
any group in parallel provided that all the pkgs in the previous group
have finished building.
3. outputs lists of direct loops between srpms.
4. probably will output A LOT of noise and garbage from the rpm
specfile parsing from the rpm.spec() module

But it might be worth a look and, ideally, some patches to make it a
bit more robust.
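The leveled sort at the heart of this is simple enough to sketch. This is not the code in buildorder.py – just a minimal illustration, with made-up package names, of grouping packages into levels that can each be built in parallel, and surfacing whatever is stuck in a loop:

```python
def leveled_toposort(deps):
    """Group items into build levels.

    deps maps each package to the set of packages it build-requires.
    Everything in level N depends only on packages in earlier levels,
    so each level can be built in parallel once the previous one is done.
    Returns (levels, leftover); leftover is non-empty if the graph has
    a dependency cycle.  (Unlike buildorder.py as described above,
    standalone packages land in the FIRST level here, not the last.)
    """
    known = set(deps)
    # ignore buildreqs on packages outside the given set
    remaining = {pkg: set(reqs) & known for pkg, reqs in deps.items()}
    levels = []
    while remaining:
        # packages whose remaining deps have all been placed already
        ready = {p for p, reqs in remaining.items() if not reqs}
        if not ready:
            break  # only cyclic dependencies are left
        levels.append(sorted(ready))
        for p in ready:
            del remaining[p]
        for reqs in remaining.values():
            reqs -= ready
    return levels, sorted(remaining)

levels, loops = leveled_toposort(
    {"foo": {"bar"}, "bar": {"baz"}, "baz": set(), "quux": set()})
# levels == [["baz", "quux"], ["bar"], ["foo"]], loops == []
```

A real tool obviously has to extract the BuildRequires from the src.rpms first; that extraction is where most of the noise and breakage described above comes from.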

If you have a set of pkgs which you need to build but you can’t figure
out the buildorder this might help you out.

I’d love to know how often it is right or ‘right enough’.

Known Issues:
1. some spec files make the rpm.spec() parsing break in interesting
ways – sometimes tracing back :)
2. if a pkg is not dependent on any other pkg and nothing else depends
on it, it gets lumped into the last grouping. Not really an issue –
just something someone noticed and was surprised by.
3. It does not handle file buildreqs at all, nor virtual provide
buildreqs, and if your buildreqs are REALLY picky about requiring
<= version it will ignore all of that. :)
4. I fully expect that circular build deps spanning 2 or more levels
(foo req bar req baz req quux req foo) will not be detected but will
make the topological sort function die. If so…. tough… go fix your
packaging.

Anyway – give it a run and see if it helps you solve a problem.

If it does let me know about it. Some of us are curious if this could
fit well in mockchain or wrapped around/in mockchain.


Syndicated 2013-05-17 19:09:37 from journal/notes

adding an openstack cinder volume server to an existing cloud with an existing cinder setup

We needed more space for cinder and had no nice way to expand it on our existing cinder server. After banging my head a bit I got assistance from Giulio Fidente, who showed me a working config that let me figure out what I was missing. Below I document it so others can find it, too.

NOTE: this works under folsom on rhel 6.4. I cannot vouch for anything else – but Giulio had it running on grizzly, I think, so…

Usage:

You have an existing cinder server setup and running – which includes
a volume server, an api service and a scheduler service. You need to
add more space and you have a system where that can run.

Here’s all you need to do:

1. install openstack-cinder on the server you want to be a new volume server

2. make sure your new system can access the mysql server on your primary
controller system

3. make sure tgtd knows to import the files in /etc/cinder/volumes

add:
include /etc/cinder/volumes/*
to:
/etc/tgt/targets.conf

4. make sure your other compute nodes can access the iscsi-target port
(iscsi-target, 3260/tcp) on the system you want to add as a cinder-volume server

5. setup your /etc/cinder/cinder.conf
example:

[DEFAULT]
sql_connection = mysql://cinder_user:cinder_pass@mysqlhost/cinder
api_paste_config=/etc/cinder/api-paste.ini
auth_strategy = keystone
rootwrap_config = /etc/cinder/rootwrap.conf
rpc_backend = cinder.openstack.common.rpc.impl_qpid
qpid_hostname = qpid_hostname_ip_here
volume_group = cinder-volumes
iscsi_helper = tgtadm
iscsi_ip_address = my_volume_ip
logdir = /var/log/cinder
state_path = /var/lib/cinder
lock_path = /var/lib/cinder/tmp
volumes_dir = /etc/cinder/volumes

6. start tgtd and openstack-cinder-volume

service tgtd start
service openstack-cinder-volume start

7. check out /var/log/cinder/volume.log

8. Verifying it worked:
on your cloud controller run:
cinder-manage host list
you should see all of your volume servers there.

9. creating a volume – just make a volume as usual; the scheduler
should default to the volume server with the most space available

10. on your new cinder-volume server run lvs to look for the new volume.


Syndicated 2013-04-29 22:36:22 from journal/notes


skvidal certified others as follows:

  • skvidal certified pnasrat as Journeyer

Others have certified skvidal as follows:

  • goran certified skvidal as Journeyer
  • spot certified skvidal as Journeyer
  • jLoki certified skvidal as Apprentice
  • sh certified skvidal as Apprentice
  • stone certified skvidal as Journeyer
  • malcolm certified skvidal as Master
  • redi certified skvidal as Journeyer
  • jkeating certified skvidal as Master
  • mitr certified skvidal as Master
  • bobuk certified skvidal as Master
  • Thias certified skvidal as Master
  • mterry certified skvidal as Journeyer
  • walters certified skvidal as Journeyer
  • lerdsuwa certified skvidal as Master
  • jnewbigin certified skvidal as Master
  • lkundrak certified skvidal as Master
  • ricky certified skvidal as Master
  • ianweller certified skvidal as Master
