Older blog entries for k (starting at number 81)

One of the most annoying things about running traffic statistics over a proxy log is the use of "magic hostnames". There are a few examples worth mentioning:

  • blogspot/wordpress - one hostname per blog site
  • pheedo.com - using [md5].img.pheedo.com for an image URL
  • youtube/googlevideo/limewire/bitgravity/etc - CDNs with many, many content servers, to which requests are directed via some method

Caching issues aside, this makes it difficult to do straight "per-host" statistics as one can have an entire site "hidden" underneath many, many hostnames which individually serve a small amount of traffic.

Anyway. The naive way of working around this is to craft rules to aggregate specific domains as needed. I've done this in my stats software so I can generate daily statistic sets that are manageable. This gives me live data to play with. :)

Another way is to simply figure out the top level / second level domains and aggregate them at that level. So, you'd aggregate *.[domain].com ; *.[domain].net ; but not *.[domain].au. For .au you would aggregate *.[domain].com.au and so on. This should work fine (as the country domain name structure is reasonably static these days) but it does mean you end up hiding a lot of potential information from the domain name. For example, a CDN may have [server].[client].domain.com - seeing the specific per-client traffic statistics may help identify which websites are causing the traffic, versus just seeing *.domain.com.
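
For illustration, here's a rough sketch in C of that kind of suffix-based aggregation. The suffix list and the aggregate_key() function are made up for the example; a real version would want a proper public-suffix list rather than a hard-coded table.

    #include <string.h>

    /* Sketch only: collapse a hostname to its registrable domain,
     * assuming a hard-coded list of two-level country-style suffixes. */
    static const char *two_level_suffixes[] = { "com.au", "net.au", "co.uk", NULL };

    /* Return a pointer into `host` at the start of the aggregation key,
     * e.g. "cdn.client.domain.com" -> "domain.com",
     *      "www.domain.com.au"     -> "domain.com.au". */
    static const char *aggregate_key(const char *host)
    {
        int labels = 2;              /* default: keep the last two labels */
        size_t hlen = strlen(host);

        for (int i = 0; two_level_suffixes[i] != NULL; i++) {
            size_t slen = strlen(two_level_suffixes[i]);
            if (hlen > slen && host[hlen - slen - 1] == '.' &&
                strcmp(host + hlen - slen, two_level_suffixes[i]) == 0) {
                labels = 3;          /* keep three labels for com.au-style suffixes */
                break;
            }
        }

        /* Walk backwards, counting dots, until we have `labels` labels. */
        const char *p = host + hlen;
        int dots = 0;
        while (p > host) {
            if (p[-1] == '.' && ++dots == labels)
                return p;
            p--;
        }
        return host;                 /* fewer labels than requested: keep the whole name */
    }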

Out of curiosity, I decided to come up with some way of figuring out where these domain names branch out into multiple sites. Partially so I can generate some aggregation rules, but partially so I can later on begin accounting traffic to both individual hosts and all parts of the domain name.

Anyway. This was easy to solve in a very, very naive way. I simply built a tree in memory based on the reversed hostname (so foo.domain.com -> moc.niamod.oof) so I could easily identify where in the name the branching occurs. I then just created one vertex per character.

Every time I added a hostname to the tree I incremented a reference count for all vertex nodes in the path.

Finally, to figure out which prefixes were the most prevalent, I simply depth-first searched the tree via recursion, looking for nodes that met a certain criterion - specifically, the node character was "." and the refcount was above some arbitrary hard-coded limit (8). I then sorted the resulting list based on refcount.
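
Here's a rough reconstruction in C of what that looks like. The names are mine and the details differ from the code I actually wrote, but the shape is the same: unsorted child lists, one node per character, a refcount bumped along the insert path, and a recursive walk that emits "." nodes above a threshold.

    #include <stdlib.h>
    #include <string.h>

    struct node {
        char ch;
        long refcount;
        struct node *child;    /* first child */
        struct node *sibling;  /* next child of the same parent (unsorted) */
        struct node *parent;
    };

    static struct node *node_new(struct node *parent, char ch)
    {
        struct node *n = calloc(1, sizeof(*n));
        n->ch = ch;
        n->parent = parent;
        n->sibling = parent->child;
        parent->child = n;
        return n;
    }

    /* Add a hostname by walking its characters in reverse, creating vertex
     * nodes as needed and bumping the refcount of every node on the path. */
    static void tree_add(struct node *root, const char *host)
    {
        struct node *cur = root;
        for (size_t i = strlen(host); i > 0; i--) {
            struct node *c;
            for (c = cur->child; c != NULL; c = c->sibling)
                if (c->ch == host[i - 1])
                    break;
            if (c == NULL)
                c = node_new(cur, host[i - 1]);
            c->refcount++;
            cur = c;
        }
    }

    /* Recursive depth-first walk: report '.' nodes whose refcount is above
     * the (arbitrary) threshold; the caller can sort the results afterwards. */
    static void tree_walk(struct node *n, long threshold,
                          void (*emit)(struct node *))
    {
        for (struct node *c = n->child; c != NULL; c = c->sibling) {
            if (c->ch == '.' && c->refcount > threshold)
                emit(c);
            tree_walk(c, threshold, emit);
        }
    }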

The result:


844611: .com
82937: .net
51478: .org
36302: .blogspot.com
18525: .uk
17246: .wordpress.com
16527: .co.uk
15297: .info
15237: .ru
14790: .ningim.com
12359: .pheedo.com
12355: .img.pheedo.com
12328: .edu
9992: .files.wordpress.com
9980: .live.com
9578: .de
8430: .us
7171: .deviantart.com
6484: .photofile.ru
6481: .users.photofile.ru
6112: .profile.live.com
5197: .yuku.com
5044: .stats.esomniture.com
5044: .esomniture.com
4960: .avatar.yuku.com
4817: .bigtube.com
4659: .ss.bigtube.com
4541: .llnwd.net
4246: .au
4161: .vo.llnwd.net

.. so, as expected really. This host list doesn't include all of the hosts seen over a month of proxy access, as I had done some pre-aggregation so the host list database was manageable.

Now, the hacky part.

This wasn't a string B*Tree - the vertex children were -not- sorted. This means searches aren't as efficient as they could have been, but for the data set in question (11 million hosts) the algorithm ran in O(perfectly sensible) time and didn't degrade noticeably as the data set increased. Adding the extra code to do proper insertion and lookup optimisation would make it faster, sure, but I wanted to see whether I had to care or not. No, I didn't have to care.

It was written in C. Yes, with no pre-defined tree datatypes. This is why I wanted to do the minimum effort required. :)

I initially had domain name fragments for the vertex nodes (ie, foo.domain.com became "foo"->"domain"->"com") but due to the distribution of the strings (ie, a -lot- under .com), the search speed (and thus insert speed) was very bad. I knew the worst case performance of the unsorted-node B*Tree would be bad (ie, O(tree depth * number of entries per node)) and I absolutely, positively hit it with .com.

Finally, I needed some way of printing out the hostname given a particular vertex node. A typical method of doing this via recursion is to pass a "prefix string" into the recursive function, which gets modified as nodes are visited. I then realised I could simply traverse the tree backwards to assemble the string when needed. I didn't need to try and manage a variable-length string buffer, or artificially limit how long the string could be (and track that.)
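
A minimal version of that backwards walk, reusing the struct node from the earlier sketch: because the tree stores reversed names, collecting characters while walking parent pointers up to the root yields the name in normal (forward) order.

    #include <stdlib.h>

    /* Build the printable name for a vertex by walking parent pointers back
     * to the root.  The tree holds reversed hostnames, so appending the
     * characters in visit order gives the forward name (e.g. ".blogspot.com").
     * The caller frees the returned string. */
    static char *node_name(const struct node *n)
    {
        size_t len = 0;
        for (const struct node *p = n; p != NULL && p->parent != NULL; p = p->parent)
            len++;

        char *buf = malloc(len + 1);
        if (buf == NULL)
            return NULL;

        size_t pos = 0;
        for (; n != NULL && n->parent != NULL; n = n->parent)
            buf[pos++] = n->ch;
        buf[pos] = '\0';
        return buf;
    }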

In summary, this is all basic first year computer science data structures clue. It was an amusing way to spend an hour. It will be much more fun to merge this with the statistics code to provide domain-based aggregate statistics...

To the users of opensource:

Hi. Your generous donation of equipment and testing is appreciated. But please understand that software needs writing far before it needs testing. Unless we're writing device drivers or adapting old software to new hardware, we really, really need resources to help us -write- the software in the first place.

Writing software generally requires being employed somehow and paying bills. Please keep this in mind.

These particular email threads, when they eventually show up in the mailing list archives, explain quite nicely why I stopped developing on Squid-3: this, and then this.

I'm sorry guys - you first agreed on a basic roadmap and timeline for Squid-3.1 - and then in the middle of a "release candidate" release cycle you decided to add more damned features.

Considering exactly how successful previous "new features" have been in keeping the stability and performance of Squid-3 up there, I have absolutely no faith anymore in what they do.

30 Jun 2009 (updated 30 Jun 2009 at 14:17 UTC) »

I've been busy working on a bunch of stuff.

* The log analysis stuff is coming along nicely. Thank you very much, SQLite. I'll post a specific update or two about that when I've finished fixing the bugs I've introduced.

* I've modified the pygrub boot loader to understand FreeBSD disk labels. The hacking can be found at http://people.freebsd.org/~adrian/xen/ in the bsd_pygrub directory. It turns out that the pygrub/Xen UFS code is (a) Solaris UFS, (b) UFS1-only, and (c) crashes very badly when fed a FreeBSD-formatted UFS1 filesystem for some reason. I'll investigate that shortly. It is one more step towards sensible FreeBSD/Xen integration though!

* I've been fixing bugs and adding features to my Squid-2 fork, Lusca. I've found and fixed a couple of nasty bugs inherited from Squid-2.HEAD (especially one to do with 304 replies not making it back to the client!) and I've started documenting how all of the transparent hijacking/intercepting code works.

I'm doing some pretty heavy customisation of "Lightsquid", a GPL'ed squid logfile analysis tool. I'd like to be able to offer Xenion customers a decent set of management and reporting tools.

The Lightsquid interface is reasonably simple, fast and snappy. It captures the right amount of information for the average network/system administrator. There are a few problems though.

The HTML needs an overhaul. It's nested-table hell. It is all done via a custom template engine so it shouldn't be too painful.

The parser seems to assume you're going to feed it all of the logs for a given day. If you feed it half a day at a time, the second import will overwrite the first.

The data is stored in flat files, indexed by day. This is fine - a year is 365 directories - and trolling each directory to pull the daily stats isn't too bad. But the per-user statistics are kept in single files, one per user per day. Generating a monthly or yearly user report per user is a very, very expensive operation. Multiply that by a few thousand users and it just won't scale.

I'm going to have to abstract out the data storage and retrieval into a simple API, then implement a database backend for it. This API should implement "add" functionality so I can handle adding data to an existing day's repository.
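
Lightsquid itself is Perl, but to sketch the shape of the interface I have in mind, here it is expressed as a C-style vtable. Every name here is hypothetical; nothing like it exists in Lightsquid today.

    struct day_stats;   /* opaque: per-day aggregate counters */
    struct user_stats;  /* opaque: per-user counters */

    /* Hypothetical storage backend interface: one implementation over the
     * existing flat files, another over a database. */
    struct report_store {
        /* Merge a batch of parsed log records into an existing day rather
         * than overwriting it (fixes the half-day import problem). */
        int (*add_day)(void *ctx, const char *date,
                       const struct day_stats *batch);

        /* Fetch aggregate statistics for a single day. */
        int (*get_day)(void *ctx, const char *date, struct day_stats *out);

        /* Fetch one user's statistics across a date range, so monthly and
         * yearly per-user reports don't have to walk hundreds of files. */
        int (*get_user_range)(void *ctx, const char *user,
                              const char *from, const char *to,
                              struct user_stats *out);

        void *ctx;  /* backend state: stats directory or database handle */
    };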

There probably won't be much of the original Lightsquid code left when I'm eventually done with it.

Then once that is done, I can focus on some better monitoring and management tools.

I've been slowly trying to sort out the bugs I've been seeing in the FreeBSD Xen code. The first annoying crash(es) are in the netfront network driver.

I started digging in the driver and found that the bug in question is due to some reasonably brain dead behaviour in the TX mbuf ring allocation. I've committed code to FreeBSD-current which enforces that there's space in the TX mbuf ring. I'm about to commit further fixes to enforce that there's space in the Xenbus TX ring.
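
The committed code is in the FreeBSD tree; purely as an illustration (none of these names come from the real netfront driver), the invariant being enforced looks roughly like this: reserve slots in both rings before committing a packet, and defer if either ring lacks room.

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative ring bookkeeping, not the actual netfront code. */
    struct ring {
        size_t size;  /* total slots */
        size_t prod;  /* producer index, advanced when we queue work */
        size_t cons;  /* consumer index, advanced on TX completion */
    };

    static size_t ring_free(const struct ring *r)
    {
        return r->size - (r->prod - r->cons);
    }

    /* The transmit path must have room in *both* the driver's TX mbuf ring
     * and the shared Xenbus TX ring before it commits a packet. */
    static bool tx_reserve(struct ring *mbuf_ring, struct ring *shared_ring,
                           size_t slots)
    {
        if (ring_free(mbuf_ring) < slots || ring_free(shared_ring) < slots)
            return false;  /* caller defers the packet instead of overflowing */
        mbuf_ring->prod += slots;
        shared_ring->prod += slots;
        return true;
    }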

The current Linux code has the same flaw. It may not be so easily tickled in Linux due to the driver model but it should still be explicitly catered for. That said, I get plenty of random network interface hangs in Linux/Xen; perhaps this stuff is related.

I've spent the weekend building and testing a new VM server to deploy next week.

I've taken the opportunity to test out FreeBSD/Xen support. I've put my images up here:

http://wiki.freebsd.org/AdrianChadd/XenImages

It actually works, which is a good starting point!

18 May 2009 (updated 18 May 2009 at 13:22 UTC) »

.. hm, it's been a while since I braindumped into here. I really should pick one or two blog sites (nerd, work, personal) instead of having dozens of sites spread around everywhere.

Anyway. I've just been bootstrapping FreeBSD-current/Xen and attempting to document the process so others can also test it out.

http://wiki.freebsd.org/AdrianChadd/XenHackery

I've also forked Squid-2 off into a separate project - lusca

I've also built a small open source CDN out of it which I need to spend more time on (but can't because I have to make money somehow..) - Cacheboy

I'm also doing bits of web/VPS hosting, squid/network/systems consulting and bits of other work - Xenion - this is how I'm trying to pay the bills so I can spend more time working on useful open source stuff.

More to come!

The status of the Squid cyclic fs (COSS):

COSS was originally implemented as an on-disk LRU. I'll now describe the original implementation as I grabbed it from Eric Stern.

A filesystem is just a single large file or physical device.

A membuf - 1 megabyte in size - is initially allocated to represent the first megabyte of the filesystem. Objects are copied into the membuf if their size is known up front (and thus space can be 'pre-allocated' in the stripe.) When the stripe is filled up it is marked as "full" and written to the filesystem. Objects are added to the beginning of a linked list as this happens.

Objects are referenced by their offset on the disk: any read is first checked against the in-memory membuf list. If an object is found to be in a membuf then a copy of the object data is taken and the data is handed back to Squid. If an object is not found in a membuf it is read from the filesystem, placed at the head of the current membuf - and re-added to the head of the linked list - and the Squid file pointer is updated to point to this new position.
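
In other words, the read path hinges on a "does this offset still live in an in-memory stripe?" test, something like the following (the names and fields are illustrative, not the real COSS structures):

    #include <stdbool.h>
    #include <sys/types.h>

    #define MEMBUF_SIZE (1 << 20)  /* 1 megabyte stripes */

    /* Illustrative membuf bookkeeping, not the actual COSS structs. */
    struct membuf {
        off_t  disk_offset;        /* where this stripe lands on disk */
        size_t used;               /* bytes filled so far */
        char   data[MEMBUF_SIZE];  /* the in-memory stripe */
    };

    /* A read hits the membuf if the object's file offset (and length) fall
     * inside the region the membuf covers; otherwise it goes to disk and the
     * object is then re-added to the head of the current membuf. */
    static bool membuf_contains(const struct membuf *mb, off_t ofs, size_t len)
    {
        return ofs >= mb->disk_offset &&
               ofs + (off_t)len <= mb->disk_offset + (off_t)mb->used;
    }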

As stripes are successively allocated and written to the filesystem in order, the 'popular' objects stay near the 'head'. This happens until the last stripe on disk is written, at which point the write pointer is cycled to the beginning of the filesystem.

At this point the LRU implementation kicks in: the objects at the end of this linked list match those at the beginning of the filesystem. COSS will start at the end of the linked list and move backwards, deallocating objects, until it reaches the beginning of the next stripe. It then has enough room to allocate a 1 megabyte stripe (and its membuf) at the beginning of the disk. It then fills this membuf as described above.

When this membuf is filled it writes the stripe to disk, frees the objects in the next megabyte of disk and then allocates a membuf and fills that.
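
The offset arithmetic for that wrap-around cycle is roughly this (again illustrative, not the real code): advance the write pointer a stripe at a time, wrap to zero at the end of the store, and empty the next stripe-sized region of old objects before reusing it.

    #include <sys/types.h>

    #define COSS_STRIPE_SIZE (1 << 20)  /* 1 megabyte stripes */

    /* Advance the write pointer one stripe, wrapping to the start of the
     * store when the next stripe would run past the end. */
    static off_t next_write_offset(off_t current, off_t store_size)
    {
        off_t next = current + COSS_STRIPE_SIZE;
        return (next + COSS_STRIPE_SIZE > store_size) ? 0 : next;
    }

    /* The region that must be emptied of old objects (the LRU tail) before
     * a new membuf can be allocated over it. */
    static void reclaim_range(off_t write_offset, off_t *start, off_t *end)
    {
        *start = write_offset;
        *end   = write_offset + COSS_STRIPE_SIZE;
    }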

This implementation wasn't complete:

  • The rebuild-from-logfile didn't seem to work right
  • There was no support for rebuild-from-disk (in case the logfile was missing or corrupt)
  • The implementation used file_read() and file_write() - callback methods of scheduling disk file descriptor IO - but assumed synchronous behaviour.

When I adapted the codebase to use POSIX AIO I discovered a number of race conditions in the COSS code:

  • Objects which were being read from disk and written into the current membuf had their squid file pointer numbers updated. Subsequent reads of this object would be copied from the current membuf - but async disk IO wouldn't guarantee the data there was written until some time after scheduling. This resulted in a lot of swapin failures as NULL data was being written
  • It was possible, but so far unlikely, that a disk operation would be scheduled for an object which was then overwritten by a full stripe.

The nice features of COSS were the simple writing and object pool maintenance: writes were aggregated and predictable (being in 1 megabyte chunks.) Popular objects had more of a chance of staying in the current membuf.

I recently took the code and began fixing the bugs. The changes included:

  • All disk stripes are now an even multiple of the membuf size (1 megabyte). Eric's original implementation would note when a membuf was full, write the membuf to disk and then start the new membuf at the end of the old membuf. This meant a few bytes weren't being wasted, but it did make dividing the filesystem up for analysis/repair/rebuild difficult.
  • Object relocations are now tracked from their creation to completion
  • When an object is relocated, its data - and any subsequent read request - is stalled until the object data has been read in from the filesystem.
  • A check (and loud log message!) has been added to catch attempts to write a stripe where a pending relocate is occurring (and the read hasn't completed), hopefully catching (but not repairing, for now) instances where said read would result in then-bogus data
  • Rebuild logic has been added - it's now easy to read the disk in as 1 megabyte chunks and do basic checks on each stripe. If a stripe has been partially or badly written to disk, the contents can be thrown away without affecting the rest of the filesystem
  • Objects no longer live in a single linked list. Each on-disk stripe region has an in-memory structure used to track various statistics, including a linked list of the objects currently in that stripe (sketched below). This makes freeing any arbitrary stripe easy, allowing for a much cleaner object expiry and filesystem index rebuild process.
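
The per-stripe bookkeeping is roughly this shape (field names are hypothetical, not the actual Lusca structs):

    #include <stdint.h>
    #include <sys/queue.h>
    #include <sys/types.h>

    /* Hypothetical per-stripe structure, roughly as described above. */
    struct coss_object {
        off_t  offset;                       /* where the object lives on disk */
        size_t size;
        TAILQ_ENTRY(coss_object) stripe_list;
    };

    struct coss_stripe {
        off_t    disk_offset;                /* start of this stripe on disk */
        int      numobjs;                    /* objects currently in the stripe */
        uint64_t bytes_used;                 /* simple statistics */
        int      pending_relocs;             /* relocations still in flight */
        TAILQ_HEAD(, coss_object) objects;   /* objects living in this stripe */
    };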

The problems seen so far:

  • The write rate is a function not only of the cacheable data coming in from the network but also of the hit rate - and the subsequent relocation of popular objects - which means the write volume can quickly spiral out of control
  • Some hit-rate issues which I haven't figured out yet. It may be related to my relatively small test caches (~ 8-10 gigabytes) and the polygraph workloads using a much bigger cache data set.

Possible directions to take (although I do need some actual world-testing and statistics first!):

  • Find out what percentage of objects are read in and never referenced again vs objects re-referenced once, twice, four times, eight times, etc.
  • Consider allocating stripe areas as "hot object" stripes which aren't part of the disk LRU. Place popular objects in these stripes and don't relocate them once they're there - this should cut down on the constant object relocation and therefore cut back on the write bandwidth. They can be managed by a different LRU or other replacement scheme.
  • Consider implementing some form of object locality; which will need cooperation from other areas of squid.

Interested in the work? I'm placing snapshots up on my squid website - here.

Henrik Nordstrom pointed out the pread() and pwrite() syscalls, which should be supported under Linux. The code can now use either the POSIX AIO calls or the Linux AUFS method (user-space threads implementing disk IO), which uses pread()/pwrite().
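
The attraction for the thread-based IO path is that a positioned read needs no shared file offset and no lseek()+read() pair; each worker thread can just do something like this (a trivial sketch, not the Squid code itself):

    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Positioned read, restarted on EINTR; each AUFS-style worker thread can
     * call this on the shared store fd without any seek or offset locking. */
    static ssize_t read_at(int fd, void *buf, size_t len, off_t offset)
    {
        ssize_t n;
        do {
            n = pread(fd, buf, len, offset);
        } while (n == -1 && errno == EINTR);
        return n;
    }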

In short; it works, and it works fine.

The trouble now: how to rebuild the store index from disk during startup.

