IlyaM is currently certified at Journeyer level.

Name: Ilya Martynov
Member since: 2003-02-27 11:31:21
Last Login: 2010-02-11 17:07:19


Homepage: http://martynov.org/

Projects

Recent blog entries by IlyaM

Syndication: RSS 2.0

MongoDB pain points

Recently I was contacted by 10gen's account manager soliciting feedback on our use of MongoDB at the company I'm working at. I wrote a lengthy reply on what we do with MongoDB and the problems we see with it, and never heard back. It seemed a shame for all this feedback to go to waste, so I decided to repost it, with minor edits, in my blog. So here it comes ...

We have been using MongoDB at IPONWEB for quite a long time - roughly two years already - on a number of high-load projects. Our company specializes in creating products for display advertising, and we mostly use MongoDB to keep track of user data in our adservers. The main reason we use MongoDB is raw performance. We use it mostly as a dumb NoSQL key-value database where we try to keep the data fully cached in RAM. With rare exceptions we do not use any fancy features like complex queries, map-reduce and so on, but rather limit ourselves to queries by a primary key. We do use sharding because, as I mentioned above, we have to keep the whole database in RAM, so we often have to split a database across multiple servers. Generally we are very sensitive about the cost of an installation, so we are always looking at reducing hardware costs for our databases. Given this background, the following limitations in the MongoDB implementation cause us the most grief:

a) lack of online database defragmentation in MongoDB. Currently the only way to compact a MongoDB database is to stop it and run compact or repair. On our datasets this process runs for a considerable time. We have to defragment the database pretty often to keep RAM usage under control: a fragmented database can easily be twice the size of a non-fragmented one, which in our case means twice the hardware costs.

b) realistically, for our use case we can do MongoDB resharding only offline. Adserving is extremely sensitive to any latencies, and if we add more shards to an existing cluster we are more or less forced to take the application offline until resharding finishes.

c) lack of good SSD support. The way MongoDB works now, switching from more RAM with HDD as backing storage to less RAM with SSD as backing storage doesn't seem to be cost effective. SSD, priced per 1GB, is roughly two times cheaper than RAM, but if we place our data on SSD we have to reserve at least twice as much space on it if we want to be able to run repair on the data (running repair requires twice the space). The other reason we considered using SSD instead of HDD as backing storage is write performance, which was a limitation in some applications. But in our limited benchmarking we found only a small performance difference, because it looks like the single-threaded write lock in MongoDB becomes the bottleneck rather than the underlying storage.

d) minor point: the underlying networking protocol could be more efficient with some optimizations. If you send many small queries and get small documents as results, MongoDB creates separate TCP packets for each request/response. Under high load, especially on virtualized hardware (e.g. EC2), this introduces high additional overhead. We have our own client driver which tries to pack multiple requests into single TCP packets, and it makes a noticeable difference in performance on EC2. But this is only a partial solution because responses from MongoDB and communication between mongos and mongod are still inefficient.

e) another minor point: the BSON format is very wasteful in terms of memory usage. Given that we try to minimize our database sizes to reduce hardware costs, the recent trend in our use of MongoDB is, instead of representing data as BSON documents, to serialize it into some more compact format and store it as big binary blobs (i.e., simplified, our documents look like { _id = '....', data = '... serialized data ...' }).
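
A minimal sketch of what this looks like, using the stock MongoDB Perl driver and Storable (the collection and field names are made up, and any compact serializer would do):

use MongoDB;
use Storable qw(nfreeze thaw);

my $conn = MongoDB::Connection->new(host => 'localhost');
my $coll = $conn->get_database('adserver')->get_collection('users');

# Instead of letting the driver expand this structure into a (fat) BSON
# document, serialize it ourselves and store a single opaque blob.
# NOTE: depending on the driver version, the frozen blob may need to be
# wrapped in the driver's BSON binary type so it is stored as binary data
# rather than as a string.
my %user_data = ( segments => [ 1, 17, 42 ], last_seen => time() );
$coll->insert( { _id => 'u:123456', data => nfreeze(\%user_data) } );

# Reading it back: fetch the blob by primary key and deserialize client-side.
my $doc  = $coll->find_one( { _id => 'u:123456' } );
my $data = thaw( $doc->{data} );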

By the way, at some point we evaluated switching to CitrusLeaf. This product supposedly addresses some of the above issues (mostly a, b and c), but it seems that the expected savings in hardware costs would be offset by licensing costs, so at least for now we are not going to switch.

Syndicated 2012-05-25 15:04:00 from Ilya Martynov's blog

MongoDB client library: C vs C++

I've been playing a bit with MongoDB recently. In particular, I've looked into the source code of the client libraries, as I was interested in how hard it is to change the client API to support an async mode of operation. One thing I noticed is that the C version of the client library is shorter and much easier to read than the C++ version. I cannot shake off the feeling that sometimes C++ feels like a step backwards compared to C.

Syndicated 2010-02-10 14:41:00 from Ilya Martynov's blog

Running Puppet on a big scale

This is a rehash of my comment in a Slashdot discussion and my comment on Alexey Kovrygin's blog post.

We run Puppet on hundreds of servers in two datacenters, and it was a pain to get it working right. There are many issues which show up here and there: memory leaks in both the client (puppetd) and the server (puppetmaster), periodic lock-ups and even file corruption. Besides, it is quite slow. These problems are being slowly fixed with each new release, but right now using Puppet for big installations is a source of constant problems. Unfortunately you do not notice these problems until you have many servers to manage; on smaller installations it seems to work without problems, or at least they happen rarely enough not to be noticeable. In our case the number of servers we managed increased slowly, so we fell into the trap: we now rely on Puppet too much and it is too late to change. In the end we have managed to work around most of the issues we have met in Puppet, so combined with monitoring to catch problems it works well enough for us. On the other hand, if I were to start from scratch I would evaluate something different for the project. Perhaps I would use Cfengine. It is not as flexible and nice as Puppet, but it is probably more stable simply because it is much older. I talked to people who used Cfengine on a much bigger scale (thousands of servers) and they did not recall stability problems with it. In the long run Puppet will probably be OK too, as it is being developed actively, but right now I'd consider it to be in "beta" state. Or maybe even in "alpha".

For anyone interested in how to get Puppet to work under a real workload, this is what we do:


  • We run Puppet under Apache+Mongrel. By default it runs using WEBrick, which breaks easily under any moderate load, so we use Apache+Mongrel instead. Another benefit of using Apache is that you can run multiple backends. This helps if you have a multi-core server for the puppetmaster, as by itself it can use only one core. Alternatively you can use Nginx+Mongrel, or any other web server with proxying capabilities plus Mongrel.

  • Because Puppet is slow we load balance it across two boxes in each datacenter.

  • We restart backends from time to time because they leak memory. We have a cron job to do this every 15 minutes (yes, it is that bad).

  • Puppetmaster has a cache which we have seen get corrupted sometimes. Our "fix" is to delete it before each restart. This might be fixed in a later version: I've seen some closed bug reports which looked relevant, but we still do this cache clean-up just in case.

  • We do not run the Puppet client as a daemon; we run it as a cron job. The Puppet client, when run as a daemon, leaks memory and gets stuck from time to time. In our cron job we add a random sleep before starting the client to make sure requests do not hit the server at the same time and overload it (a sketch of the cron job follows after this list).

  • We never serve big files over Puppet using its fileserver. Puppet does a number of stupid things with big files, like reading them into memory before serving them to the Puppet client. If you need to distribute big files, use other means (HTTP, FTP, NFS, etc).
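
Concretely, the cron part looks roughly like this (the schedule, the ten-minute splay, the paths and the puppetd flags are illustrative rather than our exact setup):

# /etc/cron.d/puppet - illustrative sketch, not our exact crontab.
# Run the client twice an hour; sleep a random 0-600 seconds first so that
# hundreds of clients do not hit the puppetmaster at the same moment.
15,45 * * * * root perl -e 'sleep int rand 600' && /usr/sbin/puppetd --onetime --no-daemonize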

Syndicated 2009-04-08 12:50:00 from Ilya Martynov's blog

STOMP messaging for non-Java programmers on top of Apache ActiveMQ

Recently I was researching available options for messaging between Perl programs. In the past I had quite a lot of experience with Spread and I don't want to repeat it: I hated Spread, as it was buggy and unstable. So I looked into other alternatives: XMPP, STOMP and AMQP. AMQP has no Perl client, so it was out. STOMP and XMPP were a close tie in my view, but STOMP looked simpler, so I decided to go with it. There is a very good Perl client library for STOMP: Net::Stomp.

Then there is the choice of server. This is quite an important choice, and here is why: STOMP is theoretically a language-agnostic protocol, but in reality you are very likely to depend on the semantics of a specific STOMP server implementation. For example, as I mention below, the STOMP protocol doesn't really define any rules for message delivery.

There are several servers which support STOMP, but Apache ActiveMQ looked to me like one of the most robust implementations. While Apache ActiveMQ supports a wide range of interfaces, its design is centered around JMS, and it helps to understand the basic concepts of JMS even if you use STOMP only. This was a problem for me, as I don't really program in Java and all the JMS concepts were alien to me. Moreover, most of the documentation on STOMP and ActiveMQ takes for granted that you know JMS basics.

So I'm recording all my findings on STOMP/ActiveMQ from the viewpoint of a non-Java programmer. I hope they might be helpful for other non-Java programmers. A word of warning: everything below might be specific to the Apache ActiveMQ implementation of a STOMP server; I didn't bother to check other STOMP servers.

Basic model

As I mentioned earlier, the STOMP protocol by itself doesn't specify rules of message delivery; it is up to the STOMP server to define them. This is where the JMS API model becomes important, as the STOMP implementation is basically just a mapping of the JMS model onto a non-Java-specific protocol. Below is a short summary of the API model as it is relevant to STOMP clients (this is mostly based on my reading of the JMS tutorial, the STOMP protocol description and the description of JMS extensions to STOMP).

There are two distinct ways to organize messaging:

  1. Use queues. If a message gets into a queue, only one of the subscribers gets it. If there are no subscribers, the server stores the message until someone shows up.

  2. Use topics. For each message sent to a topic, all active (i.e. connected) subscribers get a copy of it. Actually, non-active subscribers can get a copy as well if they register their subscription as durable in advance. If there are no subscribers, the message gets lost.

How do you use queues and topics in a STOMP client? It is all controlled by the destination you specify when subscribing to or sending messages. Destinations like /queue/* act as queues; destinations like /topic/* act as topics.
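
For example, sending to a topic instead of a queue only changes the destination string. A minimal Net::Stomp sketch (the topic name is made up; subscribing works the same way, just with a /topic/ destination):

use Net::Stomp;
my $stomp = Net::Stomp->new( { hostname => 'localhost', port => '61613' } );
$stomp->connect( { login => 'hello', passcode => 'there' } );
# Every currently connected subscriber of /topic/price-updates gets a copy;
# if nobody is subscribed, the message is simply dropped.
$stomp->send( { destination => '/topic/price-updates', body => 'test message' } );
$stomp->disconnect;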

There is also a concept of temporary queues and topics in JMS. The idea is that they are visible only to the connection which creates them, so that a client can have private queues and topics. I'm not sure if this is exposed to STOMP clients at all. It might be - I haven't researched this, as I don't need it in my application.

Control over reliability of messaging

The JMS API gives you some control over the reliability of messaging, and at least some of it is exposed at the STOMP layer.

Message acknowledgement: the STOMP client, when subscribing, says whether it acknowledges messages automatically or not. Automatic means that a message is considered delivered even if the subscriber doesn't actually read it. I guess there are cases when that makes sense, but I'd argue the default behavior should be the opposite, as for most applications it doesn't.

Message persistence: if the STOMP server dies, it either loses undelivered messages or rereads them from some permanent storage. Message persistence controls this.

Message priority: in theory the JMS provider tries to deliver higher-priority messages before lower-priority ones. In practice I have no idea - I didn't research how ActiveMQ implements this, as it is not important for my application. Anyway, this bit is exposed in the STOMP protocol as well.

Message expiration: this defines how long the server keeps undelivered messages.
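
As far as I can tell, with ActiveMQ these knobs are exposed as extra headers on the SEND frame. A hedged sketch with Net::Stomp - the header names below are from ActiveMQ's JMS-to-STOMP mapping as I understand it, so double-check them against your server's documentation:

use Net::Stomp;
my $stomp = Net::Stomp->new( { hostname => 'localhost', port => '61613' } );
$stomp->connect( { login => 'hello', passcode => 'there' } );
$stomp->send( {
    destination => '/queue/foo',
    body        => 'important message',
    persistent  => 'true',               # keep the message on permanent storage
    priority    => 9,                    # 0-9, higher is more urgent
    expires     => (time() + 60) * 1000, # absolute expiry time, epoch milliseconds
} );
$stomp->disconnect;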

Transactions: not sure about this one. Both JMS and STOMP support the concept of transactions, but I'm not sure what the exact overlap is. I might look into this later, but for my application transactions are probably not important.

Configuring ActiveMQ as a STOMP server

The latest version (5.2) seems to support STOMP out of the box without any additional configuration. As a quick test you can run the following programs. They are just a copy&paste from the Net::Stomp perldoc - I'm adding them here in case the perldoc changes later:

# send a message to the queue 'foo'
use Net::Stomp;
my $stomp = Net::Stomp->new( { hostname => 'localhost', port => '61613' } );
$stomp->connect( { login => 'hello', passcode => 'there' } );
$stomp->send(
    { destination => '/queue/foo', body => 'test message' } );
$stomp->disconnect;

# subscribe to messages from the queue 'foo'
use Net::Stomp;
my $stomp = Net::Stomp->new( { hostname => 'localhost', port => '61613' } );
$stomp->connect( { login => 'hello', passcode => 'there' } );
$stomp->subscribe(
    {   destination             => '/queue/foo',
        'ack'                   => 'client',
        'activemq.prefetchSize' => 1
    }
);
while (1) {
  my $frame = $stomp->receive_frame;
  warn $frame->body; # do something here
  $stomp->ack( { frame => $frame } );
}
$stomp->disconnect;

The default installation doesn't seem to do any authorization, so any login/passcode works.
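
For what it's worth, if a given ActiveMQ setup doesn't have STOMP enabled, my understanding is that it only takes a transport connector entry in conf/activemq.xml along these lines (snippet from memory, so verify it against the ActiveMQ documentation):

<transportConnectors>
  <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
</transportConnectors>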

Syndicated 2009-03-25 15:07:00 from Ilya Martynov's blog

Erlang debugging tips

I've just started playing with Erlang, so I have a lot to discover, but so far I've found several things which help me debug my programs:


  1. I tried to write my programs using OTP principles, but the problem for me was that by default this often causes Erlang to hide most of the problems. The faulty process just gets silently restarted by its supervisor, or even worse - the whole application just exits with an unclear "shutdown temporary" message. The solution is simple: start the sasl application and it'll start logging all crashes. For development, starting the Erlang shell as erl -boot start_sasl does the trick.
  2. If you compile your modules with the debug_info switch, you can use a quite nifty visual debugger to step through your program. Quick howto: open the debugger window with the Erlang console command im(), then add modules for inspection via the Module/Interpret menu. Then you can either add breakpoints manually or configure the debugger to auto-attach on one of several conditions (say, on first call). Instead of clicking menus you can also use Erlang console commands to control the debugger; see i:help().
  3. With the command appmon:start() you can launch a visual application monitor which shows all active applications. One particularly useful thing is the ability to click on an application, which shows the tree of processes it consists of. Then you can enable tracing of individual processes; when tracing is enabled it seems to show messages sent or received by the traced process.

Syndicated 2008-11-17 11:44:00 from Ilya Martynov's blog

8 older entries...

 

IlyaM certified others as follows:

  • IlyaM certified cwinters as Journeyer
  • IlyaM certified ask as Journeyer
  • IlyaM certified chromatic as Journeyer
  • IlyaM certified jmcnamara as Journeyer
  • IlyaM certified IlyaM as Journeyer
  • IlyaM certified pudge as Journeyer
  • IlyaM certified merlyn as Master
  • IlyaM certified japhy as Journeyer
  • IlyaM certified Simon as Master
  • IlyaM certified Spoon as Journeyer
  • IlyaM certified autarch as Journeyer
  • IlyaM certified petdance as Journeyer
  • IlyaM certified markjugg as Apprentice
  • IlyaM certified chaoticset as Apprentice
  • IlyaM certified adrianh as Journeyer

Others have certified IlyaM as follows:

  • dtucker certified IlyaM as Apprentice
  • IlyaM certified IlyaM as Journeyer
  • Spoon certified IlyaM as Journeyer
  • markjugg certified IlyaM as Journeyer
  • adrianh certified IlyaM as Master
  • chaoticset certified IlyaM as Journeyer
  • autarch certified IlyaM as Journeyer

