Older blog entries for IlyaM (starting at number 8)

Erlang debugging tips

I've just started playing with Erlang, so I still have a lot to discover, but so far I've found several things that help me debug my programs:


  1. I tried to write my programs using OTP principles, but the problem for me was that by default this often causes Erlang to hide most of the problems. The faulty process just gets silently restarted by its supervisor or, even worse, the whole application exits with an unclear "shutdown temporary" message. The solution is simple: start the sasl application and it will log all crashes. For development, starting the Erlang shell as erl -boot start_sasl does the trick.
  2. If you compile your modules with the debug_info switch, you can use a quite nifty visual debugger to step through your program. Quick howto: open the debugger window with the Erlang console command im(), then add modules for inspection via the menu Module/Interpret. You can then either add breakpoints manually or configure the debugger to auto-attach on one of several conditions (say, on first call). Instead of clicking menus you can also control the debugger with Erlang console commands; see i:help().
  3. With the command appmon:start() you can launch a visual application monitor which shows all active applications. One particularly useful feature is the ability to click on an application, which shows the tree of processes it consists of. You can then enable tracing of individual processes; when tracing is enabled it shows the messages sent or received by the traced process. All three tips are pulled together in the short session sketch below.
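A rough development session putting these together (a sketch; my_module is a made-up name):

$ erl -boot start_sasl
1> c(my_module, [debug_info]).   % compile with debug info for the debugger
2> im().                         % open the visual debugger window
3> i:help().                     % list console commands that control it
4> appmon:start().               % launch the visual application monitor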

Syndicated 2008-11-17 11:44:00 from Ilya Martynov's blog

STL strings vs C strings for parsing

I'm working on a project where I need to build a custom high-performance HTTP server. One piece of this server is a parser for the URLs in incoming requests. It is very simple, and at first glance it shouldn't be slow compared with the other parts of the server. Yet according to the profiler it was taking quite a lot of CPU. The parser uses the STL and basically does several string::find() calls to locate the parts of a URL. So I thought maybe string::find() is too slow and decided to benchmark it against strchr(). This is my benchmark code:


#include <string.h>
#include <string>
#include <time.h>
#include <iostream>

using std::string;
using std::cout;

int main() {
    const char* str1 = " a ";
    const string& str2 = str1;

    const unsigned long iterations = 500000000L;

    {
        clock_t start = clock();

        for (unsigned long i = 0; i < iterations; ++i) {
            const char* pos = strchr(str1, 'a');
        }

        clock_t end = clock();
        double totalTime = ((double) (end - start)) / CLOCKS_PER_SEC;
        double iterTime = totalTime / iterations;
        double rate = 1 / iterTime;

        cout << "Total time: " << totalTime << " sec\n";
        cout << "Iterations: " << iterations << " it\n";
        cout << "Time per iteration: " << iterTime * 1000 << " msec\n";
        cout << "Rate: " << rate << " it/sec\n";
    }

    {
        clock_t start = clock();

        for (unsigned long i = 0; i < iterations; ++i) {
            string::size_type pos = str2.find('a');
        }

        clock_t end = clock();
        double totalTime = ((double) (end - start)) / CLOCKS_PER_SEC;
        double iterTime = totalTime / iterations;
        double rate = 1 / iterTime;

        cout << "Total time: " << totalTime << " sec\n";
        cout << "Iterations: " << iterations << " it\n";
        cout << "Time per iteration: " << iterTime * 1000 << " msec\n";
        cout << "Rate: " << rate << " it/sec\n";
    }
}

It turns out strchr() is much faster, as long as the benchmark code is compiled with optimizations on:

ilya@denmark:~$ g++ -O3 test.cc && ./a.out
Total time: 0 sec
Iterations: 500000000 it
Time per iteration: 0 msec
Rate: inf it/sec
Total time: 15.5 sec
Iterations: 500000000 it
Time per iteration: 3.1e-05 msec
Rate: 3.22581e+07 it/sec

ilya@denmark:~$ g++ -O2 test.cc && ./a.out
Total time: 0 sec
Iterations: 500000000 it
Time per iteration: 0 msec
Rate: inf it/sec
Total time: 15.76 sec
Iterations: 500000000 it
Time per iteration: 3.152e-05 msec
Rate: 3.17259e+07 it/sec

ilya@denmark:~$ g++ -O1 test.cc && ./a.out
Total time: 0 sec
Iterations: 500000000 it
Time per iteration: 0 msec
Rate: inf it/sec
Total time: 19.23 sec
Iterations: 500000000 it
Time per iteration: 3.846e-05 msec
Rate: 2.6001e+07 it/sec

ilya@denmark:~$ g++ -O0 test.cc && ./a.out
Total time: 18.64 sec
Iterations: 500000000 it
Time per iteration: 3.728e-05 msec
Rate: 2.6824e+07 it/sec
Total time: 16.89 sec
Iterations: 500000000 it
Time per iteration: 3.378e-05 msec
Rate: 2.96033e+07 it/sec

I checked the same code with callgrind, and from the call graph it looks like the strchr() call was inlined while string::find() wasn't, which could be the reason for the difference in performance. Maybe the compiler is even smarter and optimized the whole strchr() loop away; the zero timings above suggest exactly that. I'm not sure the benchmark is completely fair. Anyway, one thing is certain: I should try to rewrite my URL parser using strchr() and see if the real code is faster.
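One way to test the "optimized away" suspicion would be to make the loop's result observable, so the optimizer cannot prove the body is dead code. A sketch, reusing str1 and iterations from above:

unsigned long sink = 0;

for (unsigned long i = 0; i < iterations; ++i) {
    const char* pos = strchr(str1, 'a');
    sink += (pos != 0);  // use the result so the body is not dead code
}

cout << "sink: " << sink << "\n";  // forces the compiler to keep the loop's work

Even then a sufficiently clever compiler could hoist the strchr() call out of the loop, since its arguments never change, so the only really trustworthy benchmark is the real parser on real URLs.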

Syndicated 2007-12-06 13:06:00 from Ilya Martynov's blog

Beyond XSS and SQL injections

What is common about HTML, XML and CSV files, SQL and LDAP queries, filenames and shell commands? All of them are text which is often generated by programs. And one commonly observed flaw in such programs is that encoding rules are not followed. These days many developers are aware of SQL injection and XSS problems, as many books, online tutorials, blogs, coding standards, etc. speak about them. Yet I'm not sure the education goes far enough for developers to actually use the correct methods to protect their code from these problems. And beyond that, there is a lack of awareness that it is not just about SQL and HTML. Developers should definitely think more broadly: if you programmatically generate any kind of text, you must think about proper encoding of all data that goes into the generated text.

Talking about correct methods to secure code from text-encoding problems, one of my pet peeves is people trying to strip input data when they really should be thinking about protecting output. Nitesh Dhanjani covers this really well in his blog post "Repeat After Me: Lack of Output Encoding Causes XSS Vulnerabilities". Quote:
The most common mistake committed by developers (and many security experts, I might add) is to treat XSS as an input validation problem. Therefore, I frequently come across situations where developers fix XSS problems by attempting to filter out meta-characters (<, >, /, “, ‘, etc). At times, if an exhaustive list of meta-characters is used, it does solve the problem, but it makes the application less friendly to the end user – a large set of characters are deemed forbidden. The correct approach to solving XSS problems is to ensure that every user supplied parameter is HTML Output Encoded
A good example of the wrong approach is PHP's invention called magic quotes. I have mixed feelings about this thing. On one hand it was probably a good thing, because so much web-based software is developed by dilettantes that overall we are living in a slightly better world: magic quotes do somewhat limit the damage from bad code. On the other hand it teaches bad habits while not fixing all the problems in bad code, and it makes everybody else suffer. The good news is that they are getting rid of this abomination in PHP6.

Now let's look at some real-life examples of how not to generate text. I'll skip HTML and SQL, as they are well covered elsewhere, and look at the other formats I mentioned at the beginning of this article.

XML files: bad code which generates XML often shares the same problems as bad code which generates HTML; after all, the two are closely related. But as XML is a more generic tool, it is used in many domains other than web development, where developers are not "blessed" with knowledge of XSS-like problems. Moreover, I've noticed that even web developers often consider XML to be something very different from HTML and suddenly forget they have to escape data. I'm especially amused that so many people are not aware you cannot put arbitrary binary data in XML: you have to either encode it as text (base64 is quite popular for this) or keep it outside of the XML document.
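For the record, the escaping itself is tiny. A minimal Perl sketch (in real code I'd use a proper module such as XML::Writer, which escapes for you):

sub xml_escape {
    my ($text) = @_;
    $text =~ s/&/&amp;/g;    # must come first or it would double-escape
    $text =~ s/</&lt;/g;
    $text =~ s/>/&gt;/g;
    $text =~ s/"/&quot;/g;
    $text =~ s/'/&apos;/g;
    return $text;
}

my $name = 'Tom & "Jerry" <dev>';
print '<name>', xml_escape($name), '</name>', "\n";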

CSV files: this format is still quite popular for exchanging tabular data between programs. Guess what? I've seen many naive CSV producers and parsers that ignore reserved characters and break later when they meet real data. No, to write a CSV file you cannot just do
print join ",", @columns
What if one of the columns contains, say, a "," (comma)?
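The usual fix is to let a library do the quoting. A sketch using the CPAN module Text::CSV:

use Text::CSV;

my $csv = Text::CSV->new({ binary => 1 })
    or die Text::CSV->error_diag;

my @columns = ('plain', 'has,comma', 'has "quotes"');
$csv->combine(@columns) or die $csv->error_diag;  # quotes fields as needed
print $csv->string, "\n";  # plain,"has,comma","has ""quotes"""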

LDAP queries: being a text-based query language, LDAP is subject to very much the same problems as SQL. But while many developers are aware of the SQL injection problem, few are aware that exactly the same problem exists for LDAP queries. It doesn't help that while nearly all SQL libraries provide tools to escape data in SQL queries, this doesn't always seem to be the case for LDAP libraries. For example, PHP's LDAP extension has no API to escape data at all.
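The escaping rules for LDAP search filters are spelled out in RFC 4515: a few metacharacters become backslash-hex escapes. A minimal Perl sketch (CPAN's Net::LDAP::Util provides a ready-made escape_filter_value(), which I'd use in real code):

sub ldap_filter_escape {
    my ($value) = @_;
    # RFC 4515: \ * ( ) and NUL must be encoded as \xx hex pairs
    $value =~ s/([\\*()\x00])/sprintf('\\%02x', ord($1))/ge;
    return $value;
}

my $user   = 'smith)(uid=*';  # a malicious "username"
my $filter = '(uid=' . ldap_filter_escape($user) . ')';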

Using the shell to execute commands: if you are running a command via system() in C, Perl, PHP or any other language, and you are constructing the command from your data, you should again treat this as a problem of proper encoding. The example below is from Mozilla's source code:
sprintf(cmd, "cp %s %s", orig_filename, dest_filename);
system(cmd);
Guess what happens if either of these filenames is not escaped for characters which are special to the shell?

While I'm at it, I'd mention that it is probably a good idea to avoid shell-based command execution APIs entirely, simply because shell quoting is too hard to get right.
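Most languages offer a way to bypass the shell. In Perl, for instance, the list form of system() passes the arguments straight to the program, so no shell metacharacter in a filename can do any harm (a sketch, using the filenames from the Mozilla example):

my ($orig_filename, $dest_filename) = @ARGV;

# list form: no shell is involved, so no escaping is needed at all
system('cp', $orig_filename, $dest_filename) == 0
    or die "cp failed with status $?";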

What would help a lot is tools supporting developers better when they write code that deals with text-based APIs. Sometimes it is just a lack of documentation on the encoding rules. For example, a month ago I was learning the Facebook APIs. One of the provided APIs executes so-called FQL queries. FQL is an SQL-like query language, and naturally I'd expect FQL injection to be covered in the documentation. It isn't; it is not even documented how to escape string data in FQL queries! I played with different queries in the FQL console, and it seems that the standard SQL-like method (using "\" (backslash) as the escape character in strings) does work, but why did I have to find this out on my own? It is also a shame when libraries built around text APIs do not provide means to properly encode data for the text formats they use. I mentioned one such example above: PHP's LDAP extension provides no functions to escape data for LDAP queries. How hard would it be to add them? If you are creating text-based APIs, or libraries around such APIs, it is your duty to help the developers who will use them. So do document the encoding rules, and do provide tools to automatically encode data!

Syndicated 2007-09-21 22:55:00 from Ilya Martynov's blog

Perl as replacement for shell scripting (Part I)

By shell scripting I mean bash, as it is what most (all?) Linux distributions use. Bash can be used as a quite capable programming language: it allows the programmer to build rather complex scripts by using other programs as building blocks. The system comes with a number of such building blocks (find, grep, sed, awk and many others), and unsurprisingly there is a lot you can do with them. But it is often a challenge to write robust shell scripts which work, or at least fail gracefully, for any kind of input. The main reason is that historically shell scripts could use only one data type: the string*. The building blocks, the external programs you use in shell scripts, have a very restricted interface: program arguments which are strings, a stream of strings as input, a stream of strings as output, and an exit code.

Even a simple concept like a list has to be emulated. For example, a list of file names is often passed as a single string containing the names separated by whitespace. But what if one of the file names itself contains whitespace? You have a problem. To fix it you need to escape the whitespace characters in the name, and it is rather easy to miss places where you have to do the escaping. A slightly contrived example:

rm `ls`
This would delete all files in the current directory... unless some of them have whitespace characters in their names. There are many similar cases where an unwary programmer can make a mistake in a shell script. Passing data from one process to another often requires a lot of care, and the simplest code is often wrong. Another problem is that you are very limited in how you can handle errors in shell scripts: you only have the process's exit code to tell you whether it finished successfully, and usually it is little more than a boolean saying whether there was any error. Quote from the linked document:
However, many scripts use an exit 1 as a general bailout upon error. Since exit code 1 signifies so many possible errors, this probably would not be helpful in debugging.
If, say, mkdir fails, your script cannot easily tell whether it is because a directory with the same name already exists or because you simply don't have permission for the operation.
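For comparison, the Perl equivalent of rm `ls` is only a few lines longer, but it handles odd filenames and reports real errors (a sketch):

opendir(my $dh, '.') or die "opendir: $!";
for my $name (readdir $dh) {
    next if $name =~ /^\./;   # ls would not list dotfiles either
    next if -d $name;         # skip directories, as rm without -r would
    unlink $name or warn "cannot unlink '$name': $!\n";  # $! says *why*
}
closedir $dh;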

So, any solutions to this problem? As for myself, the moment I see my shell script getting longer than three lines of code I rewrite the whole thing in Perl. In Perl you don't need external programs nearly as often as you do in bash, so you are not limited to their restrictive interfaces (remember: only strings and exit codes for input and output); native Perl APIs can be much more expressive when they need to be.

There is a price though: Perl code is not always as compact as the equivalent shell code, because shell scripting is very well optimized for handling interaction between processes and Perl is not as much. It is worth mentioning that many things taken for granted in shell scripting often require Perl modules, including non-standard CPAN modules. That is not a problem as such, except that not all Perl programmers know where to look for things not covered by perlfunc. This is mainly a concern for newbie Perl programmers, but it is still a real problem. Also, using CPAN modules is not always an option.

Of course, in your Perl program you can fall back to the same external programs you would use in a shell script, but then you lose the advantages of Perl over shell scripting. So... don't do this if possible. An interesting example of this principle: Perl before version 5.6.0 would fall back to the shell to execute the glob operation. That caused various problems for Perl developers: for example, I saw Perl programs using glob fail on one tightly secured web hosting server because the binary Perl was calling had simply been removed from the server for security reasons. In later versions of Perl the implementation of glob was changed: it is now implemented purely in Perl and doesn't use external programs.

To be continued in Part II: mapping between common shell operations and corresponding Perl modules.


[*] Newer versions of bash support arrays. I'd argue that the usefulness of arrays in bash is limited, as the programs you call from shell scripts cannot use them to pass output data; you are still limited to string streams and exit codes. Not to mention that they are not very portable across different systems.

Syndicated 2007-09-06 14:33:00 from Ilya Martynov's blog

23 Aug 2007 (updated 24 Aug 2007 at 10:21 UTC) »

libxml++ vs xerces C++


When I was reading "API: Design Matters" I recalled an example of a good API vs a bad API. Actually, my example is more about good API documentation vs bad API documentation, but I suspect the two are correlated: it is definitely hard to write good documentation if your API sucks.

So my story is that I had a task to read XML data in a C++ application. The XML data was small and the performance of this part of the application was not critical, so the simplest approach looked like loading the DOM tree for the XML document and just using the DOM API, plus maybe a couple of simple XPath queries. It was the first time I needed to do this in C++; I had no previous experience with any XML C++ libraries. So I did a Google search (or maybe it was apt-cache search, I don't remember) and the first thing I found was Xerces-C++. Quote from the project's website:
Xerces-C++ makes it easy to give your application the ability to read and write XML data.
Sounds good, just what I need. So I dig into the documentation and find it completely unhelpful, as it is just Doxygen autogenerated undocumentation. Fine, I can read code; let's check the sample code then. I open the sample code and find that the shortest example of how to parse XML into a DOM tree and how to access data in the tree (DOMCount) consists of two files more than 600 lines long in total. Huh? I don't want to read 15 pages of code just to learn how to do two simple things: parse XML into a DOM and get data from the DOM. The other examples are even worse: several files and several classes just to read and print freaking XML (DOMPrint). You've got to be kidding me. It cannot be that hard.

I don't really want to waste hours learning an API I'm unlikely ever to use again. After all, I don't write much C++ code, and I definitely don't write much C++ code that needs XML. So, time to search further. The next hit is libxml++, a C++ wrapper over the popular C XML library libxml. This time there is actually documentation that tries to explain how to use the library, and it contains an example which, at just about 150 lines, manages to demonstrate most of the library's DOM API.

End result: I finished my code to read my XML data in the next 30 minutes using libxml++. It is simple, short and it works.
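To give a flavor of why it went so quickly, here is roughly the shape of that code; a sketch from memory against the libxml++ 2.x API, with config.xml and the element names made up:

#include <libxml++/libxml++.h>
#include <iostream>

int main() {
    try {
        xmlpp::DomParser parser;
        parser.parse_file("config.xml");  // load and parse the document

        xmlpp::Element* root = parser.get_document()->get_root_node();

        // a simple XPath query over the DOM tree
        xmlpp::NodeSet nodes = root->find("//server/port");
        for (xmlpp::NodeSet::iterator it = nodes.begin(); it != nodes.end(); ++it) {
            xmlpp::Element* elem = dynamic_cast<xmlpp::Element*>(*it);
            if (elem && elem->has_child_text())
                std::cout << elem->get_child_text()->get_content() << "\n";
        }
    } catch (const std::exception& e) {
        std::cerr << "XML error: " << e.what() << "\n";
        return 1;
    }
    return 0;
}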

So what's wrong with Xerces-C++? There is no introduction-level documentation at all, and the examples look far too complex for the problems they are supposed to solve. The underlying reason is that the API is just bad: it requires unnecessarily complex client code.

Update: boris corrected me about the lack of introduction-level documentation in a comment to this blog post. It turned out I had missed it. As a weak excuse, I'll blame the bad navigation on the project's site :)

Syndicated 2007-08-23 20:54:00 (Updated 2007-08-24 10:00:03) from Ilya Martynov

23 Aug 2007 (updated 23 Aug 2007 at 21:10 UTC) »

4 silly mistakes in use of MySQL indexes


1. Not learning how to use EXPLAIN SELECT

I'm really surprised by how many developers use MySQL all the time yet do not know or understand how to use EXPLAIN SELECT. Several times I've seen developers propose serious architectural changes to their code to minimize, partition or cache data in their database, when the actual solution was to spend 30 minutes thinking over the output of EXPLAIN SELECT and adding or changing a couple of indexes.
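A sketch of what those 30 minutes look like (orders and customer_id are made-up names):

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- key=NULL and 'rows' close to the table size means a full table scan

ALTER TABLE orders ADD KEY (customer_id);

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- now 'key' names the new index and 'rows' drops dramatically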

2. Wasting space with redundant indexes

If you have a multicolumn index, you don't need a separate index on a leftmost prefix of its columns. It is easier to explain with an example:
CREATE TABLE table1 (
    col1 INT,
    col2 INT,
    PRIMARY KEY (col1, col2),
    KEY (col1)
);
The index on col1 is redundant, as any search on col1 can use the primary key (col1 is its leftmost column). The extra index just wastes disk space and might make some queries which modify this table a bit slower.

There is one "but" though! See below.

3. Incorrect order of columns in index

The order of columns in a multicolumn index is important. From the MySQL documentation:
MySQL cannot use an index if the columns do not form a leftmost prefix of the index.
Example:
CREATE TABLE table2 (
    id INT PRIMARY KEY,
    col1 INT,
    col2 INT,
    col3 INT,
    KEY (col1, col2)
);
MySQL won't use any index for a query like
SELECT * FROM table2 WHERE col2=123
EXPLAIN SELECT shows this instantly. If you want to run this query faster, either change the order of the columns in the index or add another one.

4. Not using multicolumn indexes when you need to

MySQL can use only one index per table at a time, so if you query by several columns of a table you may need to add a multicolumn index. Example:
CREATE TABLE table3 (
    id INT PRIMARY KEY,
    col1 INT,
    col2 INT,
    col3 INT,
    KEY (col1)
);
A query like
SELECT * FROM table3 WHERE col1=123 AND col2=456
would use the index on col1 to reduce the number of rows to check, but MySQL can do much better if you add a multicolumn index covering both col1 and col2. The effect of adding such an index is very easy to see with EXPLAIN SELECT, as the sketch below shows.
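For completeness, the fix as SQL (a sketch; the index name is arbitrary):

ALTER TABLE table3 ADD KEY idx_col1_col2 (col1, col2);

-- Before: EXPLAIN shows key=col1 and a large 'rows' estimate.
-- After: key=idx_col1_col2 and the estimate drops sharply.

-- Note: KEY (col1) is now a leftmost prefix of the new index and has
-- become redundant (see mistake 2). Its auto-generated name is usually
-- col1, but check SHOW CREATE TABLE before dropping it:
ALTER TABLE table3 DROP KEY col1;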

Syndicated 2007-08-16 12:22:00 (Updated 2007-08-23 21:02:00) from Ilya Martynov

volatile and threading


Until recently I hadn't much experience writing multi-threaded programs in C++, so when I tried to, I found myself really confused about how multi-threaded programs mix with volatile variables. So I did a little research, and the quick summary is: this topic is confusing. It looks like if you put locks around global variables shared between threads, you needn't care about the volatile modifier: definitely under POSIX threads, and most likely with other threading libraries as well. If you don't, and instead rely on atomic operations, it seems that you have to declare shared global variables volatile, but portability-wise it is a grey area.

Longer story is below:

Suppose we have a piece of code which waits for a certain external condition to happen. The code could look like
bool gEvent = false;

void waitLoop() {
    while (!gEvent) {
        sleep(1);
    }
    ...
}
Let's assume that this is a single-threaded program and the external condition we are waiting for is a Unix signal. The signal handler is very simple: it just sets gEvent to true:
void wakeUp() {
    gEvent = true;
}
The problem with the code above is that the compiler may optimize away the check of the condition inside waitLoop(), incorrectly concluding from local analysis of the code that gEvent never changes. The fix is to declare gEvent with the volatile modifier, which basically tells the compiler that the variable can change at any time, so it is unsafe to perform optimizations based on analysis of the local code alone:
volatile bool gEvent = false;
Let's take another example. The code is the same, but this time it is a multi-threaded program where one thread waits for another: waitLoop() runs inside one thread and wakeUp() is eventually called from another. Is the code still correct? Probably yes, if we keep the volatile modifier and if the operations which read or write the gEvent variable can be considered atomic. The latter assumption seems to be correct on most (all?) platforms.

But what if we cannot treat the operations which read or write the variable as atomic? For example, it might be an instance of a more complex type, say a class which carries more than just a flag saying whether the event has happened:
struct EventInfo {
    EventInfo(bool happened = false, const string& source = "")
        : fHappened(happened), fSource(source)
    {}
    bool fHappened;
    string fSource;
};

volatile EventInfo gEventInfo;

void waitLoop() {
    while (!gEventInfo.fHappened) {
        sleep(1);
    }
    const string& eventSource = gEventInfo.fSource;
    ...
}

void wakeUp() {
    gEventInfo = EventInfo(true, "wakeUp");
}
This code is still OK for a single-threaded program where wakeUp() is a signal handler, but it is unsafe for a multi-threaded program where wakeUp() runs in a separate thread, as operations on gEventInfo can no longer be treated as atomic.

So how do we fix it? We surround the places where the code reads or writes gEventInfo with locks, to make sure only one thread accesses it at a time. I'll use the Boost thread library in the example:
boost::mutex gMutex;

void waitLoop() {
    string eventSource;

    for (bool eventHappened = false; !eventHappened; ) {
        {
            boost::mutex::scoped_lock lock(gMutex);
            eventHappened = gEventInfo.fHappened;
            eventSource = gEventInfo.fSource;
        }
        sleep(1);
    }
    ...
}

void wakeUp() {
    boost::mutex::scoped_lock lock(gMutex);

    gEventInfo = EventInfo(true, "wakeUp");
}
Comparing this code with the earlier examples, it looks like we still need to declare the gEventInfo variable volatile, but it turns out we don't really need to. Quote from Threads Cannot be Implemented as a Library [PDF]:
In practice, C and C++ implementations that support Pthreads generally proceed as follows:
  1. Functions such as pthread_mutex_lock() that are guaranteed by the standard to “synchronize memory” include hardware instructions (“memory barriers”) that prevent hardware reordering of memory operations around the call.
  2. To prevent the compiler from moving memory operations around calls to functions such as pthread_mutex_lock(), they are essentially treated as calls to opaque functions, about which the compiler has no information. The compiler effectively assumes that pthread_mutex_lock() may read or write any global variable. Thus a memory reference cannot simply be moved across the call. This approach also ensures that transitive calls, e.g. a call to a function f() which then calls pthread_mutex_lock(), are handled in the same way more or less appropriately, i.e. memory operations are not moved across the call to f() either, whether or not the entire user program is being analyzed at once.
So at least if you are using POSIX threads (boost::threads under Linux uses them), your code is probably safe without volatile, as long as you use locks around global variables shared between threads. A good question is whether this example code is portable to other platforms; after all, boost::threads supports threading libraries other than POSIX, which may have other rules for mutexes and locks. I haven't researched this yet, as for now I don't really care about other platforms.

Some interesting links on this topic:
  • A Memory model for C++: FAQ - briefly mentions the reasons why the volatile keyword is insufficient to ensure synchronization between threads, and links to papers for further reading.
  • http://www.artima.com/cppsource/threads_meeting.html - Not much to read there, but I love this quote: "Not all the dragons were so easily defeated, unfortunately. Among the issues guaranteed to waste at least 20 minutes of group time with little or nothing to show ... What does volatile mean?" (this is in the context of multi-threaded programs). If C++ experts cannot agree on this ...
  • Another person getting confused over the use of volatile and threads; an interesting discussion on comp.programming.threads.

Syndicated 2007-07-31 14:58:00 (Updated 2007-08-12 00:44:39) from Ilya Martynov

boost::thread and boost::mutex tutorial

For most Boost libraries, the documentation requires you to read everything from start to finish before you can write any code. Compare that with most CPAN modules, where you can usually start using a module after quickly scanning the synopsis and maybe the description sections of its POD documentation. POD documentation as a rule has good examples right at the top of the page; Boost's documentation usually doesn't.

So I was looking for basic usage examples for the boost::thread and boost::mutex classes, and initially I couldn't find any because I was using the wrong search keywords. In the end I figured out how to use boost::thread and boost::mutex in my application the hard way, by reading the Boost documentation without relying on any examples. But afterwards I did find a very good article on this topic, with many simple examples: The Boost.Threads Library on Dr. Dobb's. So I'm posting the link here for Google: it is in the top 10 hits for some relevant keywords but not for others (for example, boost thread mutex tutorial), which is why I missed it initially. If this blog post helps any Boost.Threads newbie get started, I'll consider the time spent writing it well spent.
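And here is the synopsis I wish the documentation had opened with: a minimal, self-contained sketch of two threads sharing a mutex (the function and variable names are mine):

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <iostream>

boost::mutex gIoMutex;  // protects std::cout

void worker(int id) {
    // the lock is released when 'lock' goes out of scope
    boost::mutex::scoped_lock lock(gIoMutex);
    std::cout << "hello from thread " << id << "\n";
}

int main() {
    boost::thread t1(boost::bind(&worker, 1));
    boost::thread t2(boost::bind(&worker, 2));
    t1.join();  // wait for both threads to finish
    t2.join();
    return 0;
}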

Syndicated 2007-07-25 16:35:00 (Updated 2007-08-02 20:49:34) from Ilya Martynov

Starting a new blog

I used to have a blog at use.perl.org, but I was just too lazy to write often; it seems you need a certain discipline to keep doing that. Also, I just didn't like that site's blogging engine: it looked too simple and offered little control.

At a certain point I tried to switch to a blog on my personal site, but instead of actually blogging I got carried away designing a "perfect" system for my blog. I spent hours evaluating different software, and I had very exotic requirements, like being able to use SCM software to store my posts, which implied blog software that stores posts as raw files. I ended up hacking together something monstrous out of a combination of Blosxom, darcs and make, and it wasn't even that convenient to use. In the end I probably spent much more time setting all this up than actually blogging.

So now I want to start from scratch: pick a blogging engine which doesn't get in the way, and discipline myself to actually write periodically. From my experience learning new programming languages, you learn much faster when you have an actual project you are trying to implement in the new language. In a similar vein, I'd expect it to be much easier to find new topics for my blog if I have a new fun project on my mind. And this new project is going to be teaching myself OCaml. Let's see how it goes.

Syndicated 2007-07-18 11:46:00 (Updated 2007-07-18 12:19:25) from Ilya Martynov
