Older blog entries for mikal (starting at number 1096)

A pythonic example of recording metrics about ephemeral scripts with prometheus

In my previous post we talked about how to record information from short lived scripts (I call them ephemeral scripts by the way) with prometheus. The example there was a script which checked the SMART status of each of the disks in a machine and reported that via pushgateway. I now want to work through a slightly more complicated example.

I think you hit the limits of reporting simple values from shell scripts via curl requests fairly quickly. For example, with the SMART monitoring script, SMART is capable of returning a whole heap of metrics about the performance of a disk, but we boiled that down to a single "health" value. This is largely because writing a parser in shell for all the other values that smartctl returns would be inefficient and fragile. So for this post, we're going to work through an example of how to report a variety of values from a python script. Those values could be the parsed output of smartctl, but to mix things up a bit, I'm going to use a different script I wrote recently.

This new script uses the Weather Underground API to look up weather stations near my house, and then generate graphics of the weather forecast. These graphics are displayed on the various Cisco SIP phones I already have around the house. The forecasts look like this:



The script to generate these weather forecasts is relatively simple python, and you can see the source code on github.

My cunning plan here is to use prometheus' time series database and alert capabilities to drive home automation around my house. The first step for that is to start gathering some simple facts about the home environment so that we can do trending and decision making on them. The code to do this isn't all that complicated. First off, we need to add the python prometheus client to our python environment, which is hopefully a venv:

pip install prometheus_client
pip install six


That second dependency isn't a strict requirement for prometheus, but the script I'm working on needs it (because it needs to work out what's a text value, and python 3 is bonkers).

Next we import the prometheus client in our code and set up the collector registry. At the same time I record when the script was run:

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
Gauge('job_last_success_unixtime', 'Last time the weather job ran',
      registry=registry).set_to_current_time()


And then we just add gauges for any values we want to send to the pushgateway:

Gauge('_'.join(field), '', registry=registry).set(value)


Finally, the values don't exist in the pushgateway until we actually push them there, which we do like this:

push_to_gateway('localhost:9091', job='weather', registry=registry)


You can see the entire patch I wrote to add prometheus support on github if you're interested in an example with more context.
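
Putting those fragments together, here's a minimal end-to-end sketch of the pattern. The field names and values are made up for illustration (the real script derives them from the Weather Underground API response), and it assumes a pushgateway listening on localhost:9091 like the one from the previous post:

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
Gauge('job_last_success_unixtime', 'Last time the weather job ran',
      registry=registry).set_to_current_time()

# Hypothetical readings -- the real script builds these from the API response
readings = {
    ('weather', 'temp_c'): 31.5,
    ('weather', 'humidity'): 42.0,
}

for field, value in readings.items():
    Gauge('_'.join(field), '', registry=registry).set(value)

push_to_gateway('localhost:9091', job='weather', registry=registry)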

Now we can have pretty graphs of temperature and stuff!

Tags for this post: prometheus monitoring python pushgateway
Related posts: Recording performance information from short lived processes with prometheus; Basic prometheus setup; Implementing SCP with paramiko; Mona Lisa Overdrive; Packet capture in python; mbot: new hotness in Google Talk bots


Syndicated 2017-01-30 01:08:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Recording performance information from short lived processes with prometheus

Now that I'm recording basic statistics about the behavior of my machines, I want to start tracking some statistics from various scripts I have lying around in cron jobs. In order to make myself sound smarter, I'm going to call these short lived scripts "ephemeral scripts" throughout this document. You're welcome.

The promethean way of doing this is to have a relay process. Prometheus really wants to know where to find web servers to learn things from, and my ephemeral scripts are neither permanently around nor running web servers. Luckily, prometheus has a thing called the pushgateway which is designed to handle this situation. I can run just one of these, and then have all my little scripts just tell it things to add to its metrics. Then prometheus regularly scrapes this one process and learns things about those scripts. It's like a game of Telephone, but for processes really.

First off, let's get the pushgateway running. This is basically the same as the node_exporter from last time:

$ wget https://github.com/prometheus/pushgateway/releases/download/v0.3.1/pushgateway-0.3.1.linux-386.tar.gz
$ tar xvzf pushgateway-0.3.1.linux-386.tar.gz
$ cd pushgateway-0.3.1.linux-386
$ ./pushgateway


Let's assume once again that we're all adults and did something nicer than that involving configuration management and init scripts.

The pushgateway implements a relatively simple HTTP protocol to add values to the metrics that it reports. Note that once set, values won't change until you change them again; they're not garbage collected or aged out or anything fancy. Here's a trivial example of adding a value to the pushgateway:

echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job


This is stolen straight from the pushgateway README of course. The above command will have the pushgateway start to report a metric called "some_metric" with the value "3.14", for a job called "some_job". In other words, we'll get this in the pushgateway metrics URL:

# TYPE some_metric untyped
some_metric{instance="",job="some_job"} 3.14


You can see that this isn't perfect because the metric is untyped (what types exist? we haven't covered that yet!), and has these confusing instance and job labels. One tangent at a time, so let's explain instances and jobs first.

On jobs and instances

Prometheus is built for a universe a little bit unlike my home lab. Specifically, it expects there to be groups of processes doing a thing instead of just one. This is especially true because it doesn't really expect things like the pushgateway to be proxying your metrics for you -- the assumption is that every process will be running its own metrics server. This leads to some warts, which I'll explain in a second. Let's start by explaining jobs and instances.

For a moment, assume that we're running the world's most popular wordpress site. The basic architecture for our site is web frontends which run wordpress, and database servers which store the content that wordpress is going to render. When we first started our site it was all easy, as they could both be on the same machine or cloud instance. As we grew, we were first forced to split apart the frontend and the database into separate instances, and then forced to scale those two independently -- perhaps database performance was reasonable, so we ended up with more web frontends than we did database servers.

So, we go from something like this:



To an architecture which looks a bit like this:



Now, in prometheus (i.e. google) terms, there are three jobs here. We have web frontends, database masters (the top one which is getting all the writes), and database slaves (the bottom one which everyone is reading from). For one of the jobs, the frontends, there is more than one instance of the job. To put that into pictures:



So, the topmost frontend instance would be job="fe" and instance="0". Google also had a cool way to look up jobs and instances via DNS, but that's a story for another day.
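
To make that a bit more concrete, a metric exported by those frontends ends up stored as one time series per instance, distinguished only by its labels. The metric name and values here are entirely made up:

http_requests_total{job="fe",instance="0"} 1042
http_requests_total{job="fe",instance="1"} 997
http_requests_total{job="fe",instance="2"} 1180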

To harp on a point here, all of these processes would be running a web server exporting metrics in google land -- that means that prometheus would know that it's monitoring a frontend job because it would be listed in the configuration file as such. You can see this in the configuration file from the previous post. Here's the relevant snippet again:

  - job_name: 'node'
    static_configs:
      - targets: ['molokai:9100', 'dell:9100', 'eeebox:9100']


The job "node" runs on three targets (instances), named "molokai:9100", "dell:9100", and "eeebox:9100".

However, we live in the ghetto for these ephemeral scripts and want to use the pushgateway for more than one such script, so we have to tell lies via the pushgateway. So for my simple ephemeral script, we'll tell the pushgateway that the job is the script name and the instance can be an empty string. If we don't do that, then prometheus will think that the metric relates to the pushgateway process itself, instead of the ephemeral process.

We tell the pushgateway what job and instance to use like this:

echo "some_metric 3.14" | curl --data-binary @- http://localhost:9091/metrics/job/frontend/instance/0


Now we'll get this at the metrics URL:

# TYPE some_metric untyped
some_metric{instance="",job="some_job"} 3.14
some_metric{instance="0",job="frontend"} 3.14


The first metric there is from our previous attempt (remember when I said that values are never cleared out?), and the second one is from our second attempt. To clear out values you'll need to restart the pushgateway process. For simple ephemeral scripts, I think it's ok to leave the instance empty, and just set a job name -- as long as that job name is globally unique.

We also need to tell prometheus to believe our lies about the job and instance for things reported by the pushgateway. The scrape configuration for the pushgateway therefore ends up looking like this:

  - job_name: 'pushgateway'
    honor_labels: true
    static_configs:
      - targets: ['molokai:9091']


Note the honor_labels there; that's the believing-the-lies bit.

There is one thing to remember here before we can move on. Job names are blindly trusted from our reporting, so it's now up to us to keep job names unique. If we export a metric on every machine, we might want to keep the job name specific to the machine. That said, it really depends on what you're trying to do -- so just pay attention when picking job and instance names.

On metric types

Prometheus supports a couple of different types for the metrics which are exported. For now we'll discuss two, and we'll cover the third later. The types are:

  • Gauge: a value which goes up and down over time, like the fuel gauge in your car. Non-motoring examples would include the amount of free disk space on a given partition, the amount of CPU in use, and so forth.
  • Counter: a value which always increases. This might be something like the number of bytes sent by a network card -- the value only resets when the network card is reset (probably by a reboot). These only-increasing types are valuable because it's easier to do maths on them in the monitoring system.
  • Histogram: a set of values broken into buckets. For example, the response time for a given web page would probably be reported as a histogram. We'll discuss histograms in more detail in a later post.


I don't really want to dig too deeply into the value types right now, apart from explaining that our previous examples haven't specified a type for the metrics being provided, and that this is undesirable. For now we just need to decide if the value goes up and down (a gauge) or just up (a counter). You can read more about prometheus types at https://prometheus.io/docs/concepts/metric_types/ if you want to.
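
As a quick illustration of the difference, here's a small sketch using the prometheus_client python library; the metric names and values are invented for the example:

from prometheus_client import CollectorRegistry, Counter, Gauge

registry = CollectorRegistry()

# A gauge goes up and down, like free disk space
free_space = Gauge('demo_data_free_bytes', 'Free space on /data',
                   registry=registry)
free_space.set(681104486400)

# A counter only ever increases, like bytes sent by a network card
sent_bytes = Counter('demo_network_sent_bytes', 'Bytes sent since boot',
                     registry=registry)
sent_bytes.inc(1500)
sent_bytes.inc(9000)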

A typed example

So now we can go back and do the same thing as before, but we can do it with typing like adults would. Let's assume that the value of pi is a gauge, and goes up and down depending on the vagaries of space time. Let's also show that we can add a second metric at the same time because we're fancy like that. We'd therefore need to end up doing something like (again heavily based on the contents of the README):

cat <<EOF | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/frontend/instance/0
# TYPE some_metric gauge
# HELP some_metric approximate value of pi in the current space time continuum
some_metric 3.14
# TYPE another_metric counter
# HELP another_metric Just an example.
another_metric 2398
EOF


And we'd end up with values like this in the pushgateway metrics URL:

# HELP some_metric approximate value of pi in the current space time continuum
# TYPE some_metric gauge
some_metric{instance="0",job="frontend"} 3.14
# HELP another_metric Just an example.
# TYPE another_metric counter
another_metric{instance="0",job="frontend"} 2398


A tangible example

So that's a lot of talking. Let's deploy this in my home lab for something actually useful. The node_exporter does not report any SMART health details for disks, and that's probably a thing I'd want to alert on. So I wrote this simple script:

#!/bin/bash

hostname=`hostname | cut -f 1 -d "."`

for disk in /dev/sd[a-z]
do
  disk=`basename $disk`

  # Is this a USB thumb drive?
  if [ `/usr/sbin/smartctl -H /dev/$disk | grep -c "Unknown USB bridge"` -gt 0 ]
  then
    result=1
  else
    result=`/usr/sbin/smartctl -H /dev/$disk | grep -c "overall-health self-assessment test result: PASSED"`
  fi

  cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/$hostname/instance/$disk
  # TYPE smart_health_passed gauge
  # HELP smart_health_passed whether or not a disk passed a "smartctl -H /dev/sdX"
  smart_health_passed $result
EOF
done


Now, that's not perfect and I am sure that I'll re-write this in python later, but it is actually quite useful already. It will report if a SMART health check failed, and now I could write an alerting rule which looks for disks with a health value of 0 and sends me an email telling me to go to the hard disk shop. Once your pushgateways are being scraped by prometheus, you'll end up with something like this in the console:



I'll explain how to turn this into alerting later.
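
For what it's worth, a python re-write of that script might look roughly like the sketch below. It uses the prometheus_client library, keeps the same crude PASSED check as the shell version, and reports the disk as a label on the metric instead of via the instance label -- treat it as illustrative only:

import glob
import os
import socket
import subprocess

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

hostname = socket.gethostname().split('.')[0]

registry = CollectorRegistry()
health = Gauge('smart_health_passed',
               'whether or not a disk passed a "smartctl -H /dev/sdX"',
               ['disk'], registry=registry)

for path in glob.glob('/dev/sd[a-z]'):
    disk = os.path.basename(path)
    proc = subprocess.Popen(['/usr/sbin/smartctl', '-H', path],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    output = proc.communicate()[0].decode()

    # USB thumb drives don't speak SMART, so don't alert on them
    if 'Unknown USB bridge' in output:
        result = 1
    elif 'overall-health self-assessment test result: PASSED' in output:
        result = 1
    else:
        result = 0

    health.labels(disk).set(result)

push_to_gateway('localhost:9091', job=hostname, registry=registry)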

Tags for this post: prometheus monitoring ephemeral_script pushgateway
Related posts: Basic prometheus setup; Mona Lisa Overdrive; The Diamond Age ; Buying Time; The System of the World; The Ghost Brigades


Syndicated 2017-01-27 20:17:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Basic prometheus setup

I've been playing with prometheus for monitoring. It feels quite familiar to me because it's based on an internal google technology called borgmon, but I suspect that means it feels really weird to everyone else.

The first thing to realize is that everything at google is a web server. Your short lived tool that copies some files around probably runs a web server. All of these web servers have built-in URLs which report the progress and status of the task at hand. Prometheus is built to scrape those web servers, aggregate the data, store it in a time series database, and then perform dashboarding, trending and alerting on that data.

The most basic example is to just export metrics for each machine on my home network. This is the easiest first step, because we don't need to build any software to do this. First off, let's install node_exporter on each machine. node_exporter is the tool which runs a web server to export metrics for each node. Everything in prometheus land is written in go, which is new to me. However, it does make running node_exporter easy -- just grab the relevant binary from https://prometheus.io/download/, untar, and run. Let's do it in a command line script example thing:

$ wget https://github.com/prometheus/node_exporter/releases/download/v0.14.0-rc.1/node_exporter-0.14.0-rc.1.linux-386.tar.gz
$ tar xvzf node_exporter-0.14.0-rc.1.linux-386.tar.gz
$ cd node_exporter-0.14.0-rc.1.linux-386
$ ./node_exporter


That's all it takes to run the node_exporter. This runs a web server on port 9100, which exposes metrics like the following:

$ curl -s http://localhost:9100/metrics | grep filesystem_free | grep 'mountpoint="/data"'
node_filesystem_free{device="/dev/mapper/raidvg-srvlv",fstype="xfs",mountpoint="/data"} 6.811044864e+11


Here you can see that the system I'm running on is exporting a filesystem_free value for the filesystem mounted at /data. There's a lot more than that exported, and I'd encourage you to poke around at that URL a little before continuing on.

So that's lovely, but we really want to record that over time. So let's assume that you have one of those running on each of your machines, and that you have it set up to start on boot. I'll leave the details of that out of this post, but let's just say I used my existing puppet infrastructure.

Now we need the central process which collects and records the values. That's the actual prometheus binary. Installation is again trivial:

$ wget https://github.com/prometheus/prometheus/releases/download/v1.5.0/prometheus-1.5.0.linux-386.tar.gz
$ tar xvzf prometheus-1.5.0.linux-386.tar.gz
$ cd prometheus-1.5.0.linux-386


Now we need to move some things around to install this nicely. I did the puppet equivalent of:

  • Moving the prometheus file to /usr/bin
  • Creating an /etc/prometheus directory and moving console_libraries and consoles into it
  • Creating a /etc/prometheus/prometheus.yml config file, more on the contents of this one in a second
  • And creating an empty data directory, in my case at /data/prometheus


The config file needs to list all of your machines. I am sure this could be generated with puppet templating or something like that, but for now here's my simple hard coded one:

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'stillhq'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['molokai:9090']

  - job_name: 'node'
    static_configs:
      - targets: ['molokai:9100', 'dell:9100', 'eeebox:9100']


Here you can see that I want to scrape each of my metrics-exporting web servers every 15 seconds, and I also want to evaluate rules (such as firing alerts) every 15 seconds too. This might not scale if you have bajillions of processes or machines to monitor. I also label all of my values as coming from my domain, so that if I ever aggregate these values with another prometheus from somewhere else the origin will be clear.

The other interesting bit for now is the scrape configuration. This lists the metrics exporters to monitor. In this case it's prometheus itself (molokai:9090), and then each of my machines in the home lab (molokai, dell, and eeebox -- all on port 9100). Remember, port 9090 is the prometheus binary itself and port 9100 is that node_exporter binary we now have running on all of our machines.

Now if we start prometheus, it will do its thing. There is some configuration which needs to be passed on the command line here (instead of in the configuration file), so my command line looks like this:

/usr/bin/prometheus -config.file=/etc/prometheus/prometheus.yml \
    -web.console.libraries=/etc/prometheus/console_libraries \
    -web.console.templates=/etc/prometheus/consoles \
    -storage.local.path=/data/prometheus


Prometheus also presents an interactive user interface on port 9090, which is handy. Here's an example of it graphing the load average on each of my machines (it was something which caused a nice jaggy line):



You can see here that the user interface has a drop-down for selecting the known metrics, and that the key at the bottom tells you things about each time series in the graph. So for example, if we added {instance="eeebox:9100"} to the end of the value in the text box at the top, then we'd be filtering for values with that label set, and as a result would only show one value in the graph (the one for eeebox).

If you're interested in very simple dashboarding of basic system metrics, that's actually all you need to do. In my next post about prometheus I'm going to show how to write your own binary which exports values to be graphed. In my case, the temperature outside my house.
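
As a teaser, here's a minimal long-running exporter sketched with the prometheus_client python library. The metric name and the temperature value are placeholders (the real script will poll an actual sensor or weather API), and prometheus would scrape it by adding another target on port 8000 to the config above:

import random
import time

from prometheus_client import Gauge, start_http_server

outside_temp = Gauge('outside_temperature_celsius',
                     'Temperature outside the house')

if __name__ == '__main__':
    # Serve /metrics on port 8000 for prometheus to scrape
    start_http_server(8000)
    while True:
        # Placeholder reading -- a real exporter would poll a sensor here
        outside_temp.set(20 + random.uniform(-5, 5))
        time.sleep(15)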

Tags for this post: prometheus monitoring
Related posts: Mona Lisa Overdrive; The Diamond Age ; Buying Time; The System of the World; The Ghost Brigades ; Friday


Syndicated 2017-01-26 21:23:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Gods of Metal




ISBN: 9780141982267
LibraryThing
In this follow-up to Command and Control, Schlosser explores the conscientious objectors and protestors who have sought to highlight not just the immorality of nuclear weapons, but the hilariously insecure state in which the US government stores them. In all seriousness, we are talking grannies with heart conditions being able to break in.

My only real objection to this book is that it is more of a pamphlet than a book, and feels a bit like things that didn't make it into the main book. That said, it is well worth the read.

Tags for this post: book eric_schlosser nuclear weapons safety protest
Related posts: Command and Control; Random linkage; Fast Food Nation; Starfish Prime; Why you should stand away from the car when the cop tells you to; Random fact for the day



Syndicated 2017-01-23 02:38:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

18 Dec 2016 (updated 18 Dec 2016 at 08:07 UTC)

A Walk in the Woods




ISBN: 9780307279460
LibraryThing
I found this tale of Bill Bryson walking the Appalachian Trail (rather incompetently I must say) immensely entertaining. Well written, interesting, generally exaggerated, and leaving me with a desire to get out somewhere and walk some more. I'd strongly recommend this book to people who already care about bush walking, but have found other pursuits to occupy most of their spare time.

Tags for this post: book bill_bryson travel america bush walking
Related posts: Exploring for a navex; Where did SUVs come from?; In A Sunburned Country; Richistan; Why American tech companies seem to get new technology better than Australian ones...; I should try to make it to then 911 exhibit



Syndicated 2016-12-17 22:41:00 (Updated 2016-12-18 08:07:41) from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Leviathan Wakes




ISBN: 9780316129084
LibraryThing
I read this book based on the recommendation of Richard Jones, and it's really, really good. A little sci-fi, a little film noir, and very engaging. I also like that bad things happen to good people in the story -- it's gritty and unclean enough to be believable.

I don't want to ruin the book for anyone, but I really enjoyed this and have already ordered the sequels. Oh, and there's a Netflix series based on these books that I'll now have to watch too.

Tags for this post: book james_sa_corey colonization space_travel mystery aliens first_contact
Related posts: Marsbound; Downbelow Station; The Martian; The Moon Is A Harsh Mistress; Starbound; Rendezvous With Rama



Syndicated 2016-12-10 21:16:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Oryx and Crake




ISBN: 9780385721677
LibraryThing
I bought this book ages ago, on the recommendation of a friend (I don't remember who), but I only just got around to reading it. It's a hard book to read in places -- it's not hopeful, or particularly fun, and it's confronting, especially the plot that revolves around child exploitation. There's very little to like about the future society that Atwood posits here, but perhaps that's the point.

Despite not being a happy fun story, the book made me think about things like genetic engineering in a way I hadn't before, and I think that's what Atwood was seeking to achieve. So I'd have to describe the book as a success.

Tags for this post: book margaret_atwood apocalypse genetic_engineering
Related posts: The Exterminator's Want Ad; Cyteen: The Vindication; East of the Sun, West of the Moon; The White Dragon; Runner; Cyteen: The Betrayal



Syndicated 2016-05-27 03:07:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Potato Point

I went to Potato Point with the Scouts for a weekend wide game. Very nice location, apart from the ticks!



Tags for this post: blog pictures 20160523 photo coast scouts bushwalk
Related posts: Exploring the Jagungal; Scout activity: orienteering at Mount Stranger


Syndicated 2016-05-22 18:21:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

High Output Management




ISBN: 9780679762881
LibraryThing
A reading group of managers at work has been reading this book, except for the last chapter which we were left to read by ourselves. Overall, the book is interesting and very readable. It's a little dated -- it gets all excited about the invention of email, and uses some unfortunate gender pronouns -- but if you can get past those minor things there is a lot of wise advice here. I'm not sure I agree with 100% of it, but I do think the vast majority is of interest. A well-written book that I'd recommend to new managers.

Tags for this post: book andy_gove management intel non_fiction
Related posts: Being Geek; On Cars; Why document management is good; The Man in the Rubber Mask; Perl sample source code; Cataloguing meta data against multi media formats



Syndicated 2016-04-23 01:30:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Bad Pharma




ISBN: 9780007350742
LibraryThing
Another excellent book by Ben Goldacre. In this book he argues that modern medicine is terribly corrupted by the commercial forces that act largely unchecked in the marketplace -- studies which don't make a new drug look good go missing; new drugs are compared only against placebo and not against the current best treatment; doctors are routinely bribed with travel, training and small perks. Overall I'm left feeling like things haven't improved much since this book was published, given that these behaviors still seem common.

The book does offer concrete actions that we could take to fix things, but I don't see many of these happening any time soon, which is a worrying place to be. Overall, a disturbing but important read.

Tags for this post: book ben_goldacre medicine science corruption non_fiction
Related posts: Bad Science; Sixty five roses (Cystic Fibrosis); On Cars; Being Geek; Audio from linux.conf.au 2005 continued; Lemon juice as a cure for AIDS?



Syndicated 2016-04-20 16:53:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

