couple of things: i promised i'd find out about the name of that thing for semantic web: it's called the zachman framework.
the ayurvedic scriptures are a simple, fundamental understanding and expression of quantum mechanics: they outline the principle of observer, observed and the process of observation, which in language (simple sentence construction) maps onto "subject", "object" and "predicate". the cat (subject) sat (predicate) on the mat (object).
it goes further - a hell of a lot further - but the zachman framework and this ridiculously-named "web 2.0" rubbish are the beginnings of an information era, where the ancient ayurvedic scriptures are once again coming round / being reinvented / being rediscovered.
the internet is pushing knowledge boundaries and tools to contain and structure knowledge and information. google is a good example of that - but google is beginning to creak at the edges of its success, and is being constrained by the limitations of the framework in which it is forced to work: Cancerism (more commonly known as capitalism).
anyway. the who, what, why, when, where and how of the zachman framework maps roughly onto concepts like subject, object, predicate, etc. which are also part of the ancient ayurvedic scriptures which are also part of quantum mechanics.
so it's not rocket science.
pphaneuf - you're looking for an atomic operation to communicate information. the only atomic operation in POSIX is the file "move" operation - rename(2).
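to illustrate, here's a minimal sketch (names are mine, purely illustrative) of the classic atomic-update trick built on rename(2): write the new contents to a temporary file, then move it over the target, so a reader sees either the old file or the new one, never a half-written mix:

```c
#include <stdio.h>
#include <string.h>

/* write new contents to a temp file, then rename() it over the
 * target.  rename(2) is the atomic step: concurrent readers see
 * either the old file or the new one in its entirety.
 * (for durability against power loss you'd also fsync() before
 * the rename - omitted here to keep the sketch short.) */
int atomic_write(const char *path, const char *data)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    FILE *f = fopen(tmp, "w");
    if (!f)
        return -1;
    if (fputs(data, f) == EOF) {
        fclose(f);
        return -1;
    }
    if (fclose(f) == EOF)
        return -1;

    return rename(tmp, path);   /* the atomic step */
}
```

the same trick is how mail spools, dotfile updates and package managers publish a file "all at once" without locks.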
i believe that the operation you are looking for in a kernel, however, is message passing. it's a fundamental operation that doesn't exist in the linux kernel, because the linux kernel developers are too stupid to appreciate its benefit, despite tanenbaum telling people for what... thirty years, now?
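for flavour, here's a tiny userspace sketch of what message passing means - the receiver gets whole messages, not a byte stream - done over a plain pipe with a length prefix. all names are mine and purely illustrative; a real kernel primitive (l4-style) would do the copy, or a page remap, inside the kernel:

```c
#include <stdint.h>
#include <unistd.h>

/* message passing sketch: each message is a 32-bit length prefix
 * followed by the payload, so message boundaries survive the
 * byte-stream transport underneath. */

static int send_msg(int fd, const void *buf, uint32_t len)
{
    if (write(fd, &len, sizeof len) != (ssize_t)sizeof len)
        return -1;
    return write(fd, buf, len) == (ssize_t)len ? 0 : -1;
}

static int recv_msg(int fd, void *buf, uint32_t max)
{
    uint32_t len;
    if (read(fd, &len, sizeof len) != (ssize_t)sizeof len || len > max)
        return -1;
    return read(fd, buf, len) == (ssize_t)len ? (int)len : -1;
}
```

the point of doing it in the kernel rather than like this is that the kernel can hand pages across address spaces instead of copying, and can use the send/receive rendezvous as the scheduling primitive itself - which is exactly the l4 trick.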
what particularly pisses me off about this one is that the work has been _done_ already - by the university of karlsruhe and the university of south australia.
they keep their work up-to-date with the latest linux offering.
all that's needed is for their work to be adopted into the linux kernel as a compile-time option.
and, the thing is: it would dramatically influence - for the better - the direction and development of the linux kernel.
but, because of linus' pig-headed lack of intelligence and lack of desire to learn or compromise, we have to wait until a bus runs him over before anything can be done.
yes, pierre: your idea has great merit, and it is the solution that gets used often. particularly because you don't have to have threads: you can use processes, or you can use a single process and implement a state machine to subdivide the work - it's all the same.
other than this: by using threads, every single libc function call now has to do locking around data structures, on your behalf. if you used _processes_ instead, you would end up implementing - by hand - a much _better_ and more efficient use of shared memory for intercommunication than the extremely coarse-grained (but hidden) use of shared memory in libc when threads are used.
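as a sketch of that "by hand" approach (names are mine, purely illustrative): map one shared anonymous page before fork(), and then _only_ the data you chose to share is shared - unlike threads, where everything is shared and libc has to lock around all of it on your behalf:

```c
#define _DEFAULT_SOURCE
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* hand-rolled shared memory between processes: one shared,
 * anonymous mapping set up before fork().  parent and child
 * both see it; nothing else is shared. */
int shared_counter_demo(void)
{
    int *counter = mmap(NULL, sizeof *counter, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (counter == MAP_FAILED)
        return -1;
    *counter = 0;

    pid_t pid = fork();
    if (pid == 0) {            /* child: bump the shared counter */
        *counter += 1;
        _exit(0);
    }
    waitpid(pid, NULL, 0);     /* waitpid doubles as the sync point */
    int result = *counter;
    munmap(counter, sizeof *counter);
    return result;             /* 1 if the child's write was seen */
}
```

note the locking here is exactly as coarse or as fine as you make it - in this sketch waitpid() is the only synchronisation, because it's the only synchronisation needed.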
and, other than this: a state machine is of course a bit of a pain, as you have to subdivide the work yourself, manually - but in some instances a state machine (simplest form: "callbacks") is the only way.
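a minimal sketch of that simplest form (all names mine): each callback does one slice of the work and says whether it wants to be rescheduled, so a single process can interleave many jobs without blocking:

```c
/* a state machine as "callbacks": the current state IS the
 * function pointer; each step does a slice of work and returns
 * 1 to be rescheduled or 0 when done. */
typedef struct job job_t;
typedef int (*step_fn)(job_t *);

struct job {
    step_fn step;    /* the current state */
    int     acc;     /* example work: accumulate a sum */
    int     n;
};

static int step_sum(job_t *j)
{
    if (j->n == 0)
        return 0;            /* finished */
    j->acc += j->n--;        /* one slice of work */
    return 1;                /* reschedule me */
}

/* a degenerate "event loop": keep stepping one job to completion.
 * a real loop would round-robin many jobs, driven by poll(). */
static int run(job_t *j)
{
    while (j->step(j))
        ;
    return j->acc;
}
```

the pain the text mentions is visible even here: the work had to be chopped into resumable slices by hand, with all the job's state hoisted out of local variables into the struct.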
you only have to look at the mess that is the asyncdns library to realise _that_ one.