Chicago is currently certified at Journeyer level.

Name: James Taylor
Member since: 2002-04-15 13:29:20
Last Login: 2013-04-30 15:34:22


Homepage: www.imen.org.uk

Notes:

Working in Norwich, UK for a technology company, building interesting products for various markets. The work involves some web programming, for both websites and applications requiring data from a centralised server system. Also a fully licensed radio ham, holding the licences M0OUZ, 2E0OUZ and M3OUZ (in decreasing order of relevance). Undergoing personal development with the IET.

Articles Posted by Chicago

Recent blog entries by Chicago


SSL / TLS

Is it annoying or not that everyone says SSL Certs and SSL when they really mean TLS?

Does anyone actually mean SSL? Have there been any accidents through people confusing the two?


Syndicated 2014-07-10 13:18:17 from jejt / jmons

Cloud Computing Deployments … Revisited.

So it's been a few years since I've posted, because it's been so much hard work and we've been pushing really hard on some projects which I just can't talk about – annoyingly. Anyway, on March 20th, 2011 I talked about Continual Integration and Continual Deployment and the Cloud, and discussed two main methods – having what we now call 'Gold Standards' vs continually updating.

The interesting thing is that as we've grown as a company, and as we've become more 'Enterprise', we've brought in more systems administrators and begun to really separate deployments from development. We have also separated our services out into multiple vertical strands with different roles, which means we have slightly different processes for banking or payment modules than we do for marketing modules. We're able to segregate operational and content data from personally identifiable information – PII carries much higher regulation on who can access it (and auditing of who does).

Several other key things had to change: for instance, the servers' SSL keys shouldn't be kept in the development repo. 'Of course not,' I hear you yell, but it's a very blurry line. Should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well, oh – if you keep *that* in, then you would keep your SSL certs in too…

So the answer becomes having lots of repos: one repo per application (Django-wise), and one repo per deployment containing configurations. And then you start looking at build tools to bring a particular server, or cluster of servers, up and running.
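One way to keep the application repo clean of deployment secrets is to have the code read them from the environment, with the deployment repo's tooling responsible for populating it. A minimal sketch of that split – all names here are illustrative, not the author's actual settings:

```python
# Sketch: secret material comes from the environment (populated by the
# deployment repo's tooling), never from files committed with the code.
import os

def load_settings():
    """Build a settings dict for the application."""
    return {
        # Non-secret behaviour flags can have safe defaults.
        "DEBUG": os.environ.get("DJANGO_DEBUG", "0") == "1",
        # Fail loudly if the deployment layer forgot to provide it.
        "SECRET_KEY": os.environ["DJANGO_SECRET_KEY"],
        # The cert lives on the box, placed there by deployment tooling.
        "SSL_CERT_PATH": os.environ.get("SSL_CERT_PATH", "/etc/ssl/private/app.pem"),
    }
```

With this shape, the application repo defines *what* settings exist, while the per-deployment repo defines their values.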

The process (for our more secure, audited services) is looking like one tool that brings an AMI up, gets everything installed and configured, and takes a snapshot, and then a second tool that takes that AMI (and all the others needed) and builds the VPC inside AWS. It's a step away from the continual deployment strategy, but it is mostly automated.
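The two-stage process above can be sketched as two small functions. The `cloud` object here is a stand-in for whatever provider API is in use (EC2/VPC in the post); its method names are illustrative assumptions, not a real SDK:

```python
# Stage one: bake an immutable image from a base, then discard the
# build instance. Stage two: assemble a network from finished images.
def bake_image(cloud, base_image, setup_steps):
    instance = cloud.launch(base_image)
    for step in setup_steps:
        cloud.run_on(instance, step)        # install + configure
    image = cloud.snapshot(instance)        # the audited artifact
    cloud.terminate(instance)               # build box is disposable
    return image

def build_environment(cloud, images):
    vpc = cloud.create_vpc()
    # One instance per baked image, all inside the isolated network.
    return [cloud.launch(img, network=vpc) for img in images]
```

Separating "bake" from "build" is what makes the result auditable: the snapshot is a fixed artifact, so every environment built from it is known to be identical.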


Syndicated 2014-07-10 13:15:53 from jejt / jmons

21 Jul 2012 (updated 30 Apr 2013 at 12:57 UTC) »

Now Your All Dreams Will Going To Become Reality with Your Own Home Business

Post removed as was spam through aggregator.

Syndicated 2012-07-21 02:55:23 from jejt / jmons

Continual Integration Development and the Cloud

One of the big buzz phrases at the moment seems to be Continual Integration Development. If you're developing and wanting to deploy 'as the features are ready', and you have a cloud, you have two main options, both of which have pros and cons:

New Image per Milestone

Most cloud systems work by you taking an 'image' of a pre-setup machine, then booting new instances of that image. Each time you get to a milestone, you set up a new image, and then configure your auto-scaling system to launch instances of it rather than the old one; you then have to shut down all your old instances and bring them up as new ones.

Pros: The machines come up in the new state quickly.
Cons: Each deployment involves quite a bit more work making the new image, and requires shutting down all the old instances and bringing up new replacements.
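The shut-down-and-replace step for this first option amounts to a rolling swap. A hedged sketch, with `cloud` again standing in for a generic provider API (illustrative method names):

```python
# Replace every instance in a scaling group with ones booted from the
# new milestone image: bring replacements up first, then retire the old.
def roll_to_new_image(cloud, group, new_image):
    old = list(cloud.instances(group))
    new = [cloud.launch(new_image, group=group) for _ in old]
    for inst in old:
        cloud.terminate(inst)
    return new
```

Launching before terminating keeps capacity steady during the swap, at the cost of briefly running double the instances.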

SCM Pull on Boot

Make one image, and give it access to your SCM (i.e. Git, SVN, etc.). Build in a boot process that brings up the service but also fetches the most recent copy of the 'live' branch.

Pros: You save a lot of time in deployments – deployments are triggered by people committing to the live branch, rather than by system administrators performing them. Because the machines are running SCM, updating all the currently running images is as simple as running the fetch procedure again.
Cons: You need to maintain two branches, live and dev, and merge between them (some SCMs might not like this). Your SCM hosting also has to cope with the load when new machines get added. Your machines come up a little slower, as they have to do the fetch before they are usable.
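The boot-time fetch step in this second pattern can be sketched as below. Paths and the branch name are illustrative; `run` is injectable so the step can be exercised without a real repository:

```python
# Boot step for the "SCM pull on boot" pattern: the image ships with a
# clone already present; on boot we fetch and hard-reset to the tip of
# the 'live' branch before starting the service.
import subprocess

def update_to_live(repo_dir, branch="live", run=subprocess.check_call):
    cmds = [
        ["git", "-C", repo_dir, "fetch", "origin", branch],
        ["git", "-C", repo_dir, "checkout", branch],
        # Hard reset discards any local drift on the box.
        ["git", "-C", repo_dir, "reset", "--hard", f"origin/{branch}"],
    ]
    for cmd in cmds:
        run(cmd)
    return cmds
```

The hard reset is the important design choice: running boxes should converge on exactly what the live branch says, not accumulate local state.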

I opted for the second route: we use Git, so we can clone quickly to the right branch. We've also added Git hooks that make sure any setup procedures (such as copying the right settings file in) are done when the machine comes up. Combining this with a Fabric script to update all the currently running boxes is a dream.
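The "update all the currently running boxes" step works out to a simple fan-out over hosts. A sketch in the spirit of the Fabric script mentioned above, with `connect` as a stand-in for an SSH layer (host list and command are illustrative assumptions):

```python
# Fan out one update command across every running host, collecting
# per-host results so a failed box is visible rather than silent.
def update_all(hosts, connect, command="git pull origin live"):
    results = {}
    for host in hosts:
        shell = connect(host)
        results[host] = shell.run(command)
    return results
```

A real Fabric script adds retries and parallelism, but the shape is the same: the deployment is just the fetch procedure, repeated per host.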


Syndicated 2011-03-20 11:10:17 from jejt / jmons

What Cloud Computing System to Use?

So you're sitting at work, and you have to build a new system, and for once you don't have any previous code or language forcing you to write one way or another, and you know this is going to get big – maybe not Twitter or Google big, but certainly big enough to give you a good old headache. The big question becomes what technology to use. Firstly, I apologise that this is early 2011, so if you're reading this in two or three years (or even six months) the technology will all have changed again – I'm not planning on updating this particular post as things change, but I might make new ones.

So the first decision to make is what cloud computing system you're going to use – are you doing lots of queries, or just a few queries and lots of processing? I'm presuming the first, but the latter is quite interesting – it suits universities and researchers trying to run over massive data sets and produce reports.

Your main contenders are:

  • Some collection of *nix or Windows servers
  • Proprietary cloud compute services

The first category might mean more work for you and your sysadmins – it really does point towards needing a sysadmin – but gives you a lot more flexibility in your choice of languages and systems, whereas the latter might mean you can do without those, and also (depending on the service) have access to a lot of tools and power without having to use any other third-party services.

The main proprietary services at the moment seem to be:

  • Google’s Apps
  • Microsoft Azure

Now – both of these platforms are quite seductive. They have a lot of benefits – mainly that you don't have to be a sysadmin to deploy and maintain the system, and that you can access quite complicated things such as shared and persistent storage, caching and database pooling without having to spend days reading manuals for everything.

The downside, though? You are locked to one provider and their billing methods – Apps has a very strange billing mechanism based on the number of users (which, if you're producing something for a lot of users, might be very expensive). Because you're locked into that service provider, there isn't another provider you can go to for alternative pricing, and because of this I think a lot of smaller businesses make a commercial decision to go with more traditional-style hosting.

Traditional clusters (such as those provided by Amazon and Rackspace) come with a collection of tools alongside: content distribution networks for static content such as images and JavaScript, as well as tools for monitoring and automatically scaling the systems. The advantage of the traditional route is that it's easy to run up a local copy at your location and develop away, which means that when you are looking at architectural changes, these are much easier to stage to live.

In my opinion, cloud hosting of 'traditional Linux/Windows' boxes has massive commercial advantages, but does require more systems-administration work.


Syndicated 2011-03-20 10:05:01 from jejt / jmons


 

Chicago certified others as follows:

  • Chicago certified Chicago as Apprentice
  • Chicago certified iDunno as Apprentice
  • Chicago certified slef as Journeyer
  • Chicago certified johnb as Apprentice
  • Chicago certified salmoni as Journeyer
  • Chicago certified sand as Journeyer
  • Chicago certified raph as Master
  • Chicago certified Bram as Journeyer
  • Chicago certified shughes as Journeyer
  • Chicago certified advogato as Master
  • Chicago certified hadess as Journeyer
  • Chicago certified Artimage as Journeyer
  • Chicago certified mathieu as Journeyer
  • Chicago certified logic as Master
  • Chicago certified monkeyiq as Journeyer
  • Chicago certified CaptainNemo as Journeyer
  • Chicago certified Stevey as Master
  • Chicago certified lerdsuwa as Journeyer
  • Chicago certified lkcl as Master
  • Chicago certified grape as Apprentice
  • Chicago certified aftyde as Journeyer
  • Chicago certified robilad as Master
  • Chicago certified nymia as Journeyer
  • Chicago certified dmitri as Apprentice
  • Chicago certified arauzo as Journeyer
  • Chicago certified ncunningham as Journeyer
  • Chicago certified wtanaka as Apprentice

Others have certified Chicago as follows:

  • iDunno certified Chicago as Apprentice
  • Chicago certified Chicago as Apprentice
  • slef certified Chicago as Apprentice
  • salmoni certified Chicago as Journeyer
  • johnb certified Chicago as Apprentice
  • nixnut certified Chicago as Apprentice
  • mattr certified Chicago as Apprentice
  • fxn certified Chicago as Apprentice
  • bytesplit certified Chicago as Journeyer
  • sand certified Chicago as Journeyer
  • rkrishnan certified Chicago as Journeyer
  • splork certified Chicago as Apprentice
  • Mysidia certified Chicago as Journeyer
  • spiff certified Chicago as Apprentice
  • lerdsuwa certified Chicago as Apprentice
  • ncm certified Chicago as Apprentice
  • ara0d3nt16 certified Chicago as Apprentice
  • bgeiger certified Chicago as Journeyer
  • pesco certified Chicago as Apprentice
  • dmitri certified Chicago as Journeyer


