Recent blog entries for hugoduncan

Configure Nagios using Pallet

Basic Nagios support was recently added to pallet, and while it is already very simple to use, this blog post should make it even simpler. The overall philosophy is to configure the nagios service monitoring definitions along with the service itself, rather than maintaining a monolithic nagios configuration divorced from the configuration of the various nodes.

As an example, we can configure a machine to have its SSH service, CPU load, number of processes and number of users monitored. Obviously, you would normally be monitoring several different types of nodes, but there is no difference as far as pallet is concerned.

We start by requiring various pallet components. These would normally be part of a ns declaration, but are provided here for ease of use at the REPL.

(require
  '[pallet.crate.automated-admin-user
    :as admin-user]
  '[pallet.crate.iptables :as iptables]
  '[pallet.crate.ssh :as ssh]
  '[pallet.crate.nagios-config
     :as nagios-config]
  '[pallet.crate.nagios :as nagios]
  '[pallet.crate.postfix :as postfix]
  '[pallet.resource.service :as service])

Node to be Monitored by Nagios

Now we define the node to be monitored. We set up a machine that has SSH running, and configure iptables to allow access to SSH, with a throttled connection rate (six connections/minute by default).

(pallet.core/defnode monitored
  []
  :bootstrap [(admin-user/automated-admin-user)]
  :configure [;; set iptables for restricted access
              (iptables/iptables-accept-icmp)
              (iptables/iptables-accept-established)
              ;; allow connections to ssh
              ;; but throttle connection requests
              (ssh/iptables-throttle)
              (ssh/iptables-accept)])

Monitoring of the SSH service is configured by simply adding (ssh/nagios-monitor).

Remote monitoring is implemented using nagios' nrpe plugin, which we add with (nagios-config/nrpe-client). To make nrpe accessible to the nagios server, we open the port that the nrpe agent runs on using (nagios-config/nrpe-client-port), which restricts access to the nagios server node. We also add a phase, :restart-nagios, that can be used to restart the nrpe agent.

Pallet comes with some preconfigured nrpe checks, and we add nrpe-check-load, nrpe-check-total-procs and nrpe-check-users. The final configuration looks like this:

(pallet.core/defnode monitored
  []
  :bootstrap [(admin-user/automated-admin-user)]
  :configure [;; set iptables for restricted access
              (iptables/iptables-accept-icmp)
              (iptables/iptables-accept-established)
              ;; allow connections to ssh
              ;; but throttle connection requests
              (ssh/iptables-throttle)
              (ssh/iptables-accept)
              ;; monitor ssh
              (ssh/nagios-monitor)
              ;; add nrpe agent, and only allow
              ;; connections from nagios server
              (nagios-config/nrpe-client)
              (nagios-config/nrpe-client-port)
              ;; add some remote checks
              (nagios-config/nrpe-check-load)
              (nagios-config/nrpe-check-total-procs)
              (nagios-config/nrpe-check-users)]
  :restart-nagios [(service/service
                    "nagios-nrpe-server"
                    :action :restart)])

Nagios Server

We now configure the nagios server node. The nagios server is installed with (nagios/nagios "nagiospwd"), specifying the password for the nagios web interface, and we add a phase, :restart-nagios, that can be used to restart nagios.

Nagios also requires an MTA for notifications, and here we install postfix. We add a contact and make it a member of the "admins" contact group, which is notified as part of the default host and service templates.

(pallet.core/defnode nagios
  []
  :bootstrap [(admin-user/automated-admin-user)]
  :configure [;; restrict access
              (iptables/iptables-accept-icmp)
              (iptables/iptables-accept-established)
              (ssh/iptables-throttle)
              (ssh/iptables-accept)
              ;; configure MTA
              (postfix/postfix
               "pallet.org" :internet-site)
              ;; install nagios
              (nagios/nagios "nagiospwd")
              ;; allow access to nagios web site
              (iptables/iptables-accept-port 80)
              ;; configure notifications
              (nagios/contact
              {:contact_name "hugo"
               :service_notification_period "24x7"
               :host_notification_period "24x7"
               :service_notification_options
                  "w,u,c,r"
               :host_notification_options
                  "d,r"
               :service_notification_commands
                 "notify-service-by-email"
               :host_notification_commands
                  "notify-host-by-email"
               :email "my.email@my.domain"
               :contactgroups [:admins]})]
  :restart-nagios [(service/service "nagios3"
                     :action :restart)])

Trying it out

That's it. To fire up both machines, we use pallet's converge command, giving a count for each node type, the compute service to run against (service below), and the phases to apply.

(pallet.core/converge
  {monitored 1 nagios 1} service
  :configure :restart-nagios)

The nagios web interface is then accessible on the nagios node with the nagiosadmin user and the specified password. Real-world usage would probably have several different monitored configurations, and restricted access to the nagios node.

Still to do...

Support for nagios is not complete (e.g. remote command configuration still needs to be added, and it has only been tested on Ubuntu), but I would appreciate any feedback on the general approach.

Syndicated 2010-08-18 00:00:00 from Hugo Duncan

A Clojure library for FluidDB

FluidDB, a "cloud" based triple-store, where objects are immutable and can be tagged by anyone, launched about a month ago. As another step towards getting up to speed with Clojure, I decided to write a client library, and clj-fluiddb was born. The code was very simple, especially as I could base the library on cl-fluiddb, a Common Lisp library.

I have some ideas I want to try out using FluidDB. Its permission system is one of its best features; together with the ability to use it for RDF-like triples, it could provide a usable basis for growing the semantic web. My ideas are less grandiose, but might take as long to develop; we'll see...

Syndicated 2009-09-13 00:00:00 from Hugo Duncan

Product Development Flow

I have spent the last few months with my latest start-up, Artfox, where I have been trying to push home some of the lean start-up advice expounded by Eric Ries and Steve Blank. I was hoping that "The Principles of Product Development Flow", by Donald Reinertsen, might help me make a persuasive argument for some of the more troublesome concepts around minimum viable product and ensuring that feedback loops are in place with your customers as soon as possible. Unfortunately, this is not the book to read if you are looking for immediate, practical prescriptions, but it is a thought-provoking, rigorous view of the product development process that pulls together ideas from manufacturing, telecommunications and the Marines.

Perhaps Reinertsen's most accessible advice is that decisions in product development should be based on a strong economic foundation, pulled together by the concept of the "Cost of Delay". Rather than relying on prescriptions for each of several interconnected metrics, such as efficiency and utilisation, Reinertsen suggests that economics will provide different targets for each of these metrics depending on the costs of the project at hand.

His proposition that product development organisations should measure "Design in Process", similar to the idea of "Intellectual Working In Process" proposed by Thomas Stewart in his book "Intellectual Capital", is what allows him to draw parallels to manufacturing and queueing theory, and enables the application of the wide body of work in these fields to product development.

His practical advice, such as working in small batches and using a cadence for activities that require coordination, will come as no surprise to practitioners of agile development, and Reinertsen provides clear reasoning about why these practices work.

During my time at Alcan, and later Novelis, I gave a lot of thought to scheduling, queues and cycle times in a transformation-based manufacturing environment. I found that this had many parallels with his view of the product development process, and little in common with what Reinertsen describes as manufacturing, which seems to be limited to high-volume assembly operations. I also found many ideas that could usefully be taken back to a manufacturing context.

If you look at this book as an introduction to scheduling, queueing theory and the reasons behind some of the agile development practices, then you will not be disappointed.

Syndicated 2009-08-30 00:00:00 from Hugo Duncan

Rails Environments For Lisp

The facility of Ruby on Rails' test, development and production environments is one of those features that goes almost unremarked, but which makes using Rails more pleasant. No doubt everyone has their own solution for this in other environments, and while I am sure Common Lisp is not lacking in examples, I have not seen an idiomatic implementation. In developing cl-blog-generator I came up with the following solution.

Configuration in Common Lisp usually depends on using special variables, which can be rebound across any block of code. I started by putting the configuration of my blog into s-expressions in files, but got tired of specifying the file names for different blogs. Instead, I created an association list for each configuration, and registered each using a symbol as key. I can now switch to a given environment by specifying the symbol for the environment.

The implementation (in src/configure.lisp in the GitHub repository) consists of two functions and a special variable. SET-ENVIRONMENT is used to register an environment, and CONFIGURE is used to make an environment active. The environments are stored in the *ENVIRONMENTS* special variable as an association list. An example of setting up the configurations can be seen in the config.lisp file. In creating the configurations I drop the '*' from the special variable names.
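
To make this concrete, here is a minimal sketch of what such a mechanism can look like. It follows the description above but is not the actual cl-blog-generator code, and the example configuration keys (blog-root, site-url) and values are made up for illustration.

;; Sketch only: register environments by symbol in an association list,
;; and activate one by setting the corresponding special variables.
(defvar *environments* nil
  "Association list of environment name (a symbol) to configuration alist.")

;; Example configuration specials; the real code defines its own set.
(defvar *blog-root* nil)
(defvar *site-url* nil)

(defun set-environment (name configuration)
  "Register CONFIGURATION, an alist of (key . value) pairs, under NAME."
  (setf *environments*
        (acons name configuration
               (remove name *environments* :key #'car))))

(defun configure (name)
  "Activate environment NAME: for each (key . value) entry, set the
special variable *KEY*, i.e. the key with the earmuffs restored."
  (loop for (key . value) in (cdr (or (assoc name *environments*)
                                      (error "Unknown environment ~S" name)))
        do (setf (symbol-value (intern (format nil "*~A*" key)
                                       (symbol-package key)))
                 value)))

;; Register two environments and switch between them at the REPL.
(set-environment 'test
  '((blog-root . #p"/tmp/blog/") (site-url . "http://localhost/")))
(set-environment 'production
  '((blog-root . #p"/srv/www/blog/") (site-url . "http://example.org/")))
(configure 'test)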

I'm relatively new to CL, so let me know if I have overlooked anything. Writing this post makes me think I am missing a WITH-ENVIRONMENT macro ...
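
For what it's worth, a WITH-ENVIRONMENT could be a thin macro over the sketch above, dynamically rebinding the configuration specials for the extent of its body so that the previously active environment is restored on exit. The publish-draft call in the usage comment is a hypothetical stand-in for whatever publishing function you call.

;; Sketch of a possible WITH-ENVIRONMENT, assuming the registry sketched above.
(defmacro with-environment (name &body body)
  "Evaluate BODY with the configuration specials of environment NAME
dynamically bound, restoring the previous values afterwards."
  (let ((entries (gensym "ENTRIES")))
    `(let ((,entries (cdr (or (assoc ,name *environments*)
                              (error "Unknown environment ~S" ,name)))))
       (progv
           (mapcar (lambda (entry)
                     (intern (format nil "*~A*" (car entry))
                             (symbol-package (car entry))))
                   ,entries)
           (mapcar #'cdr ,entries)
         ,@body))))

;; e.g. generate against the test configuration without switching globally:
;; (with-environment 'test
;;   (publish-draft #p"~/drafts/a-post.xhtml"))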

Syndicated 2009-04-07 00:00:00 from Hugo Duncan

cl-blog-generator Gets Comments

I have now added a comment system to cl-blog-generator. My requirements were for a simple, low-overhead commenting system, preferably one that could be fully automated.

The comment system was inspired by Chronicle's, with a slight modification in approach - the comments are never saved on the web server, and are just sent by email to a dedicated email address. Spam filtering is delegated to whatever spam filtering is implemented on the mail server, or in your email client. The comment emails are then processed in CL using mel-base and written to the local filesystem. Moderation can optionally occur on the CL side, if that is preferable to using the email client.

There is still some work left to do - I would like to be able to switch off comments on individual posts, either on demand or after a default time period - but I thought I would let real world usage drive my development.

Syndicated 2009-03-31 00:00:00 from Hugo Duncan

27 Mar 2009 (updated 27 Mar 2009 at 23:56 UTC)

I recently uploaded some links to my cl-blog-generator project, and have been getting some feedback comparing it to other blog site generators, or compilers, such as Steve Kemp's Chronicle, or Jekyll as used on GitHub Pages. Compared to these, cl-blog-generator is immature, but it takes a different approach in several areas, which Charles Stewart suggested might be worth exploring. I look forward to any comments you might have.

Formatting

All the blog generators seem to use a file-based approach for writing content, but they differ in the choice of input formats supported, and in the approach to templating. cl-blog-generator is the least flexible, requiring input in XHTML, while Chronicle allows HTML, Textile or Markdown, and Jekyll allows Textile or Markdown. For templates, Chronicle uses Perl's HTML::Template, and Jekyll uses Liquid. cl-blog-generator uses an approach which substitutes content into elements identified by specific ids or classes, similar to transforming the templates with XSLT.

cl-blog-generator's choice of XHTML input was driven by a requirement to enable validation of post content in the editor, which is not possible with Chronicle's HTML input because of the headers and the lack of a body or head element, and by a desire to be able to use any CSS tricks I wanted, which ruled out Textile and Markdown, or any other markup language. The lack of an external templating engine in cl-blog-generator was driven by simplicity; I couldn't see a use for conditionals or loops given the fixed structure of the content, and this choice leads to templates that validate, unlike Jekyll's, and that are not full of HTML comments. The current id and class naming scheme in cl-blog-generator could certainly use some refinement to improve the flexibility of the output content format, and I would definitely welcome requests for enhancement should the scheme not fit your requirements.
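
To illustrate the substitution approach, here is a rough, self-contained sketch. It is not the cl-blog-generator implementation, which works on parsed XHTML; here a template is just an s-expression of the form (tag attribute-plist . children), and the function name is made up.

;; Sketch only: replace the children of any element whose :id attribute
;; appears in SUBSTITUTIONS with the associated content.
(defun fill-template (template substitutions)
  (if (atom template)
      template
      (destructuring-bind (tag attributes &rest children) template
        (let ((replacement (assoc (getf attributes :id) substitutions
                                  :test #'equal)))
          (if replacement
              (list* tag attributes (cdr replacement))
              (list* tag attributes
                     (mapcar (lambda (child)
                               (fill-template child substitutions))
                             children)))))))

;; Substituting a post title and body into a page skeleton:
(fill-template
 '(:html ()
   (:head () (:title (:id "page-title")))
   (:body ()
    (:h1 (:id "post-title"))
    (:div (:id "post-content"))))
 '(("page-title" "My blog")
   ("post-title" "Hello, world")
   ("post-content" (:p () "First post."))))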

Database and Two Phase Publishing

Perhaps the most significant difference in approach for cl-blog-generator is its use of a database and an explicit publish step. With cl-blog-generator a draft can exist anywhere in the filesystem, and must be "published" to be recognised by the blog site generator. The publishing process fills in some default metadata, such as post date, if this is not originally specified, copies the modified draft to a configurable location, and enters the metadata into the database. This ensures that the post is completely specified by its representation in the filesystem, and that the database is recreatable.

The database enables the partial regeneration of the site, without having to parse the whole site, and makes the linking of content much simpler. However, having Elephant as a dependency is probably the largest impediment to installation at present.

On Titles, Dates, Tags and Filenames

cl-blog-generator's input XHTML has been augmented with elements for specifying the post title, date, update date (which I believe is missing from the other systems), slug, description, and tags. On publishing (described above), any of these elements that is missing, except the mandatory title, is filled in with defaults.

Both Chronicle and Jekyll use a preamble to specify metadata, with the filename being used to generate the post's slug. Jekyll also uses the filename and its path to specify the post date and tags.

Bells and Whistles

Finally, here is a grab bag of features.

  • Chronicle comes with a commenting system.

  • cl-blog-generator generates a meta description element, which is used by search engines to generate link text. It also generates meta elements with links to the previous and next posts.

  • Jekyll has a "Related posts" feature for generating links to similar posts.

  • Chronicle and Jekyll both have migration scripts for importing content.

  • Chronicle has a spooler for posting pre-written content at specific times.
