xinit servers. don't buy from xinit.com unless they get their act together! here's why
couple of things: i promised i'd find out about the name of that thing for semantic web: it's called the zachman framework.
the ayurvedic scriptures are a simple and fundamental understanding and expression of quantum mechanics: they outline simple things like the principle of observer, observed and the process of observation which, in language (simple sentence construction), is "subject", "object" and "predicate". the cat (subject) sat (predicate) on the mat (object).
it goes further - a hell of a lot further - but the zachman framework and this ridiculously-named "web 2.0" rubbish are the beginnings of an information era, where the ancient ayurvedic scriptures are once again coming round / being reinvented / being rediscovered.
the internet is pushing knowledge boundaries and tools to contain and structure knowledge and information. google is a good example of that - but google is beginning to creak at the edges of its success, and is being constrained by the limitations of the framework in which it is forced to work: Cancerism (more commonly known as capitalism).
anyway. the who what why when how of the zachman framework maps roughly onto concepts like subject, object, predicate, etc. which are also part of the ancient ayurvedic scriptures which are also part of quantum mechanics.
so it's not rocket science.
pphaneuf - you're looking for an atomic operation to communicate information. the only atomic operation in POSIX is the file "move" operation.
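a minimal sketch of using that atomic rename for inter-process hand-off (the directory layout and the "message" filename here are purely illustrative): write to a temporary file on the same filesystem, then rename it into place, so a reader either sees no file or a complete one, never a half-written one.

```python
import os
import tempfile

def post_message(directory, data):
    # write to a temp file in the SAME filesystem, then atomically
    # rename into place: POSIX guarantees rename() is atomic, so a
    # reader either sees the old name missing or the complete file.
    fd, tmppath = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    final = os.path.join(directory, "message")
    os.rename(tmppath, final)
    return final
```

note the same-filesystem requirement: rename across filesystems isn't atomic (and on many systems isn't even permitted), which is why the temp file is created inside the destination directory.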
i believe that the operation you are looking for in a kernel, however, is message passing. it's a fundamental operation that doesn't exist in the linux kernel, because the linux kernel developers are too stupid to appreciate its benefit, despite tanenbaum telling people for what... thirty years, now?
what particularly pisses me off about this one is that the work has been _done_ already - by the university of karlsruhe and the university of southern australia.
they keep their work up-to-date with the latest linux offering.
all that's needed is for their work to be adopted into the linux kernel as a compile-time option.
and, the thing is: it would dramatically influence - for the better - the direction and development of the linux kernel.
but, because of linus' pig-headed lack of intelligence and lack of desire to learn or compromise, we have to wait until a bus runs him over before anything can be done.
yes, pierre: your idea has great merit, and it is the solution that gets used often. particularly because you don't have to have threads: you can use processes, you can use a single process and implement a state machine to subdivide the work, it's all the same.
other than that, by using threads, every single libc function call now has to do locking around data structures on your behalf - whereas if you used _processes_, you would end up implementing (by hand) a much _better_ and more efficient use of shared memory for intercommunication than the extremely coarse-grained (but hidden) use of shared memory in libc when threads are used.
and, other than, a state machine of course is a bit of a pain as you have to subdivide the work yourself, manually, but, in some instances, a state machine (simplest way: "callbacks") is the only way.
you only have to look at the mess that is the asyncdns library to realise _that_ one.
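a minimal sketch of the callback-style state machine mentioned above (the class and function names are my own, purely illustrative): the work is subdivided by hand into one callback per state, each of which handles an event and returns the name of the next state - no threads, no processes.

```python
class StateMachine:
    """cooperative state machine: one callback per state; each
    callback receives an event and returns the next state name."""

    def __init__(self, initial):
        self.state = initial
        self.handlers = {}

    def on(self, state, callback):
        self.handlers[state] = callback

    def feed(self, event):
        # dispatch the event to the current state's callback
        self.state = self.handlers[self.state](event)
        return self.state

# example: a trivial reader that stays in "reading" until it
# sees a newline, then moves to "done"
def reading(event):
    return "done" if event == "\n" else "reading"

sm = StateMachine("reading")
sm.on("reading", reading)
```

the pain the text mentions is visible even here: the work has to be chopped up manually at every point where you would otherwise just block.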
company: i've been trying to drum this in to free software people for several years now - like, about... five or six.
a free software project, with free software source code, a nice free software infrastructure, a nice free software source repository, is absolutely xxxxing useless if you don't know what you are doing.
we're going _well_ beyond the realm where "show me the code" actually means anything, which is why it pisses me off so much.
your explanation points out the inadequacies of specifications not covering everything that's needed: information is missing.
and you need time times intelligence times information, in order to have an effective implementation and also to be able to _improve_ an implementation.
(i'm sure that there's a quantum mechanics equation for that, somewhere).
and, whilst we have lots of _rights_ to implement (free software licenses etc. which protect the implementation) we _don't_ necessarily have lots of information or intelligence or time...
no, any old machine with 128mb of RAM _won't_ do. the startup time on the AMD Geode 500mhz GX and 600mhz LX CPUs is 11 seconds, because the memory is 64-bit DDR at 300 and 400 mhz respectively.
compare that to the older 133mhz SDRAM of "any old machine": the DDR memory is roughly 3 times faster. i have a 500mhz Via Eden cpu which uses the older 133mhz SDRAM and it runs like a DOG.
you almost don't need any L2 cache: if you used blindingly quick DDR2 memory running at the same speed as the CPU, you would actually have a faster processor without ANY cache at all.
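the rough arithmetic behind the "3 times faster" claim (peak transfer rates only, ignoring latency and bus overheads): a 64-bit bus moves 8 bytes per transfer, DDR-400 does 400 million transfers a second, and 133mhz single-data-rate SDRAM does 133 million.

```python
bus_bytes = 8                 # 64-bit wide memory bus = 8 bytes/transfer
ddr400 = 400e6 * bus_bytes    # 3.2 GB/s peak
sdr133 = 133e6 * bus_bytes    # ~1.06 GB/s peak
ratio = ddr400 / sdr133       # roughly 3x
```

real-world throughput is of course lower than peak on both, but the ratio is about right.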
there is a well-known (except i can't remember the name!) semantic classification system which is based on "when, what, why, who and where".
then you provide hierarchical views, provide tags and tag groups.
then you apply intelligence (either through bayesian or through many monkeys) tagging.
_then_ you qualify the tags themselves (!) and make _those_ simply part of the information infrastructure.
so, one of the sources is "tags from bayesian filtering" which were added today, for the purpose of finding some spam... that gives you the "when what where who why".
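a sketch of what "qualifying the tags themselves" might look like (the field names and example values are mine, purely illustrative): every tag carries its own who/what/when/where/why, so the tags become first-class information that can itself be queried.

```python
import time

def make_tag(name, who, why, where):
    # the tag itself is qualified: it records who applied it
    # (a person, a bayesian filter, "many monkeys"), when, where,
    # and for what purpose - so tags are part of the information
    # infrastructure, not mere labels.
    return {
        "what": name,
        "who": who,      # e.g. "bayesian-filter"
        "when": time.time(),
        "where": where,  # e.g. "inbox"
        "why": why,      # e.g. "spam-hunting"
    }

tag = make_tag("spam", who="bayesian-filter",
               why="spam-hunting", where="inbox")
```

with that in place, "show me everything tagged by the bayesian filter today" is just a query over the tags themselves.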
i'll try to find the name of the classification system.
ok well at least i found this:
The moral of this story of the power and pitfalls of classification trees is that classification trees are only as good as the choice of analysis option used to produce them. For finding models that predict well, there is no substitute for a thorough understanding of the nature of the relationships between the predictor and dependent variables.
a response for the nice person who commented about my koolu quip, from brian, the koolu embedded guru:
OLPC has a custom chip to do panel display, and camera interface.
Conclusion: Very similar hardware, different form factor.
does that help?
lisp, python, and quantum mechanics
it occurred to me recently that the reason why lisp and python are brilliant is because they fundamentally reflect in the language constructs the principles of quantum mechanics.
and that's just the _base_ level: i haven't even got onto where classes come in to express encapsulation of information and its association - the reflection of observer, observed and process of observation.
so, if you always felt yourself wondering why python and lisp are cool and can let you do so much, that's why.
now, if you _fully_ understand quantum mechanics (which i am only just beginning to grasp properly) then you should explain it to guido and the other python developers and get them to reverse the decision not to put lambda and friends into python 3.0!
because, it would be a serious mistake to leave them out!
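for anyone wondering what would actually be lost: lambda gives you anonymous functions as plain values, which map, filter and reduce then compose - functions treated as data. a tiny illustration (as it turned out, lambda survived; reduce moved into the functools module):

```python
from functools import reduce  # reduce is not a builtin in python 3

# functions as first-class values: no names, no ceremony
squares = list(map(lambda x: x * x, range(5)))         # [0, 1, 4, 9, 16]
evens = list(filter(lambda x: x % 2 == 0, squares))    # [0, 4, 16]
total = reduce(lambda a, b: a + b, evens)              # 20
```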
this syndication feature: it's starting to make advogato _really_ interesting. actually - fascinating.
oubiwann, your wisdom does you credit.
the motherboard in the OLPC computer is available NOW - not "maybe in the future" - in the koolu.com computer that someone noticed a couple of weeks ago in their diary entry here on advogato.
as it's the OLPC motherboard then well DUH it runs the OLPC software.
titus: for goodness' sake advise them to think creatively about their programming.
if they're doing applications, show them pyjamas: i think that's absolutely essential.
even if you don't know anything about it, at least show them the samples on the pyjamas.pyworks.org site.
point them at my web site, which comes with the source code, and squish my web site down to 800, then 640, then 300 pixels wide, on the browser.
and remember to mention that because pyjamas is version 0.1, the auto-resize doesn't work yet :)
show them SQLObject and SQLBuilder (part of turbogears) and FormEncode (part of turbogears) and the _very_ simple htmltmpl.sourceforge.net (not part of turbogears but it damn well should be).
explain that htmltmpl has not had _any_ "maintenance" on it - because basically, it is complete! it does what is needed, and if you think anything more is needed, then you are programming / designing the web site completely wrong.
but - above all, emphasise creativity as the absolute fundamental and overarching priority which they should be focussing on. they should be _good programmers_ who _happen_ to be using python.
make sure every module, class and function is either obvious (from its name or from its very few lines) or is documented, and make DAMN sure that everything has test procedures.
someone on here was raving about a new python test environment, which automatically hunted through code looking for stuff that LOOKED like a test - can't remember what it's called.
emphasise that the more testing you can do, the faster you will develop the program.
so, it doesn't matter if you write test procedures: that's good, because those will do testing.
it doesn't matter if you are a fast typer and have some scripts which do your install automatically: the sooner it's installed, the quicker you can do testing.
if you have code coverage techniques (saw one on here last week - looked great) - then great: that's testing.
the faster you can do testing - of _any_ kind - users, test suites, your own development cycle, whatever it is - the faster you will become confident that your code does what you need it to do.
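the sort of thing those auto-hunting runners pick up (py.test and nose both work this way) is any function whose name starts with test_ - no registration, no boilerplate. the slugify example here is made up:

```python
# code under test
def slugify(title):
    # lower-case a title and join its words with hyphens;
    # split() with no argument collapses runs of whitespace
    return "-".join(title.lower().split())

# a runner such as py.test or nose discovers this automatically,
# purely because the function name starts with "test_"
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  lots   of   spaces ") == "lots-of-spaces"
```

the lower the friction of writing a test, the more tests get written - which is the whole point of the paragraphs above.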
the NSA and GCHQ have a way of thinking about things.
they don't care too much if something is broken, as long as they can _prove_ that it's broken. what they DO care about is if they CAN'T prove one way or the other if it's broken.
so, for the poor NSA and GCHQ, windows is _totally_ out of the question. outright. flat-out. banned. they REFUSE to use it, and will NOT allow a windows computer on their premises. AT ALL.
why? because it's 60 MILLION lines of utter unprovable shit. they can't tell where it _isn't_ secure.
tell your 20 scientists these things and they will go 'hmmmm...' :)
i finally implemented an email button on my web site after what... three to four years?
it's a python JSON service (http://lkcl.net/site_code/ - see json_service directory)
unfortunately, mod_python won't let me import smtplib for some reason, so i had to use popen2 on /usr/bin/mailx. oops.
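the workaround amounts to piping the message body into mailx. a sketch using the subprocess module (the modern stand-in for popen2); the runner argument is only there so the command can be inspected without actually sending mail:

```python
import subprocess

def send_mail(to_addr, subject, body, runner=subprocess.run):
    # equivalent of popen2 on /usr/bin/mailx: the body goes in on
    # stdin, the subject and recipient on the command line
    cmd = ["/usr/bin/mailx", "-s", subject, to_addr]
    return runner(cmd, input=body.encode("utf-8"))
```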
you'll need the slightly modified version of jsonrpc (also in the json_service directory). you'll need to put json_service/email.py into a services subdirectory (e.g. /var/www/services). you'll need to create /var/cache/mailsender and chown www-data:www-data it. and you'll need this in apache config:
<Directory /var/www/services>
    AddHandler mod_python py
    PythonHandler jsonrpc.apacheServiceHandler
    PythonPath "sys.path+['/usr/share/python-support/python-simplejson/simplejson/']"
</Directory>
hw6915 suspend/resume - might be fixed...
arg arg arg. a post by paul sokolovsky on email@example.com describes a horror-story debugging session in suspend/resume where, it turned out, he hadn't converted _one_ device driver for the h4000 from a legacy struct device to the more up-to-date struct platform_device.
apparently you can't mix-and-match the two in your drivers: they have to be all struct device or all struct platform_device.
quick, quick, slow...
things were going _so_ well on the htc sable (ipaq hw6915) and then i ran into suspend/resume hell for over a week, went to holland for another week, and i think i left the charger there, so i can't carry on until i find it.
in the mean-time, i've been playing with other devices: sound on the blueangel, which is hell, and the s3c2442-based htc hermes, which is hell. all in all, i don't feel like i've actually achieved anything, for over two weeks. and it's pissing me off.