Data-specific database engineering

Posted 8 Oct 2002 at 23:23 UTC by mrsbrisby

Bandwidth monitoring is tricky. Collecting the data was easy (using packet sniffers and SNMP traps), but how do I go about storing 5,000 fixed-length records per minute (for the last 120), another 5,000 fixed-length records per hour (for the last 48), and another 5,000 fixed-length records per day (for every day of every year)?

For those of you without a calculator, that's 840,000 records to start with (5,000 x 120 minute records plus 5,000 x 48 hourly records), and another 1,830,000 records per year (5,000 x 366 daily records).

If, on top of that, I want to maintain running averages for the days and years, I'll end up with an additional 240,000 records to start with, and my yearly numbers simply double.

So how large is a record? Perl teaches us to be lazy, so my first attempt simply used DB_File and textual strings (with Math::BigInt to store the integer byte counts).
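A minimal sketch of that approach, assuming an "address:period" key layout and a made-up file name (neither is spelled out above):

    #!/usr/bin/perl
    # First attempt: a tied Berkeley DB hash, byte counts stored as text
    # via Math::BigInt. Key scheme and file name are illustrative guesses.
    use strict;
    use warnings;
    use DB_File;
    use Fcntl qw(O_CREAT O_RDWR);
    use Math::BigInt;

    tie my %db, 'DB_File', 'bandwidth.db', O_CREAT | O_RDWR, 0644, $DB_HASH
        or die "tie: $!";

    sub add_bytes {
        my ($address, $period, $bytes) = @_;
        my $key   = "$address:$period";
        my $total = Math::BigInt->new($db{$key} || '0');
        $total->badd($bytes);
        $db{$key} = $total->bstr();    # everything goes to disk as a string
    }

    add_bytes('10.0.0.1', '2002-10-08 23:00', 1_500_000);
    untie %db;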

I collected this data for about 4 months and ended up using only about 40 megs of disk space. It took about 15 minutes to sweep my network, and almost 40 seconds to read a segment of records back out.

Since this is for real-time graphing, I think everyone would agree that's not an option, so my second attempt used Postgres. Because BIGINT doesn't have enough precision, and I wanted updates to actually be atomic, I decided on a table that had 7 columns (period, address, b, kb, mb, gb, tb).
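A rough sketch of that table and an atomic update, assuming a table name, column types, and sample values that aren't actually given above, and leaving out the carry from b up through kb/mb/gb/tb:

    # Second attempt: the 7-column Postgres table, driven from Perl via DBI.
    # Only the column list comes from the article; the rest is assumed.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=bandwidth', '', '',
                           { AutoCommit => 0, RaiseError => 1 });

    $dbh->do(q{
        CREATE TABLE traffic (
            period  INTEGER NOT NULL,
            address INET    NOT NULL,
            b       INTEGER NOT NULL DEFAULT 0,
            kb      INTEGER NOT NULL DEFAULT 0,
            mb      INTEGER NOT NULL DEFAULT 0,
            gb      INTEGER NOT NULL DEFAULT 0,
            tb      INTEGER NOT NULL DEFAULT 0
        )
    });
    $dbh->do(q{ CREATE INDEX traffic_address ON traffic (address) });

    # One server-side increment per sample, so the update itself is atomic.
    my $upd = $dbh->prepare(q{
        UPDATE traffic SET kb = kb + ? WHERE period = ? AND address = ?
    });
    $upd->execute(1500, 1034121780, '10.0.0.1');
    $dbh->commit;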

Lo and behold: Postgres was worse at it than DB_File. My database swelled to 700 megs (indexing only on the address!), and while individual update time went down (4 minutes), my search time went up.

And that was after tuning; by doing each update as a separate transaction, I learned that the cost of transactions was not as great as the cost of logging in this case. This means Postgres was spending far too much time on disk operations.

I think to myself: I can do better than that! I know how my data is ordered, and by careful packing and a sparse matrix, I can use calculated seeks to obtain greater speed.

So try three was a sparse matrix. I kept an approach similar to my SQL table, and my record ended up taking 22 bytes. Combined with some simple paging, I ended up with a database that was 70 megs in size (rather: only a few pages were actually used on disk, but eventually several years' worth will be in here).
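A minimal sketch of the calculated-seek idea; the exact 22-byte layout isn't given, so the one below (five 32-bit counters plus a 16-bit flag word) is only an assumption:

    # Third attempt: a fixed-stride file where a record's offset is computed
    # from (address index, period index), so lookups need no index at all.
    # Writing far past EOF leaves holes, which is why only a few pages
    # actually land on disk at first. Layout here is assumed, not known.
    use strict;
    use warnings;
    use Fcntl qw(O_CREAT O_RDWR SEEK_SET);

    use constant RECLEN  => 22;
    use constant PERIODS => 120 + 48 + 366;    # minutes + hours + one year of days

    sysopen(my $fh, 'matrix.db', O_CREAT | O_RDWR, 0644) or die "open: $!";

    sub record_offset {
        my ($addr_index, $period_index) = @_;
        return ($addr_index * PERIODS + $period_index) * RECLEN;
    }

    sub write_record {
        my ($addr_index, $period_index, @fields) = @_;   # b,kb,mb,gb,tb,flags
        sysseek($fh, record_offset($addr_index, $period_index), SEEK_SET);
        syswrite($fh, pack('N5n', @fields));
    }

    sub read_record {
        my ($addr_index, $period_index) = @_;
        sysseek($fh, record_offset($addr_index, $period_index), SEEK_SET);
        my $got = sysread($fh, my $buf, RECLEN) || 0;
        return (0) x 6 if $got < RECLEN;    # past EOF: treat as all zeroes
        return unpack('N5n', $buf);
    }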

My seek times went down on updates and reads. I could sweep the entire network in about 3-4 minutes, and lookups took so little time that Perl's times() function didn't give any useful results :)

But the monolithic nature of the database quickly took its toll. I discovered I needed to cram a few extra bytes in here and there. The fact that seeks were calculated lent itself well to speed, but expanding the format meant taking the database off-line (queueing the log) while it was updated.

My most recent attempt puts each address into a separate file.

This moved my paging system back onto the filesystem, and a full sweep now takes 7 minutes; it's amazing to me how expensive just opening and closing 5,000 files can be!

But instead of moving back to the monolithic structure, I can now use NFS and REXEC to have multiple machines handling the updates for only 200-400 addresses apiece.
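A minimal sketch of one way to make that split deterministic, assuming placeholder host names, an NFS path, and a simple checksum, so each box only ever touches its own 200-400 files:

    # Fourth attempt, fanned out: one file per address on shared storage,
    # with each update host handed only its own slice of the address list.
    # Host names, the path, and the checksum are placeholders.
    use strict;
    use warnings;

    my @hosts = qw(stats1 stats2 stats3 stats4);    # hypothetical update hosts

    sub host_for {
        my ($address) = @_;
        my $sum = 0;
        $sum += ord($_) for split //, $address;
        return $hosts[$sum % @hosts];
    }

    sub file_for {
        my ($address) = @_;
        return "/net/bandwidth/$address.db";        # one file per address
    }

    # Build per-host address lists from host_for(), then rexec the updater
    # on each host with only its share of the list.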

Total update time? 1 minute, 17 seconds.

But that 1 minute, 17 seconds came at quite a price; I spent 12 hours analyzing this problem, no doubt a problem that has been thought of and re-thought of many times over. I'm aware of RRDTool, but it seems only well-suited for a small number of "devices".

How do other people do this? Does anyone else do this?

-mrsb


rrdtool, posted 9 Oct 2002 at 17:42 UTC by aero6dof » (Journeyer)

Have you heard of rrdtool? It's a little hard to learn at first, but it seems designed for this application, and it has a Perl binding. I've been implementing an rrdtool-based Perl monitor for a client, and it seems nice once you get the hang of it. It will even generate graphs of the data in the database. The rrdtool website has the details.

rrdtool, posted 9 Oct 2002 at 17:59 UTC by mrsbrisby » (Journeyer)

If you'll note, my last paragraph says that I'm aware of RRDTool, but that it doesn't seem to handle a large number of devices well.

I have used (and continue to use) RRDTool+MRTG to watch several switches and other devices with a relatively small number of nodes, but for 5,000 distinct devices it does not appear to work well.
