Computation and Human Experience

Posted 11 Jun 2000 at 17:10 UTC by raph

Phil Agre recently sent a chapter of his book, Computation and Human Experience, to his excellent Red Rock Eater news service. It looks like food for thought.

I was particularly interested in his discussion of the tension between abstraction and implementation. This is a personal theme in my development work as well. Many computer scientists consider abstraction to be the ultimate Good. Indeed, providing a good abstraction is a very powerful tool. However, internally, I've sometimes seen abstraction add complexity and get in the way of the task at hand. Phil's chapter discusses this tension from a very different point of view, but it should be quite interesting nonetheless.

Another interesting topic is the way the AI community has maintained its own computing infrastructure, somewhat independent of the mainstream of the computer industry. The chapter gives some of the history and puts this in perspective.

Overall, it looks like a book to be read, savored, pondered, and discussed.


ISBN is 0521386039, posted 12 Jun 2000 at 00:26 UTC by schoen » (Master)

I put that on my want list after I saw it on RRE. It looks very interesting.

For those who want to read the chapter on-line, here it is.

For those who want to buy the book, the ISBN is 0521386039, and you can buy it from Barnes and Noble if you are a software patent fanatic.

Oops, posted 12 Jun 2000 at 01:17 UTC by schoen » (Master)

Sorry for the redundant link.

Computation models -> languages, posted 12 Jun 2000 at 14:46 UTC by pliant » (Master)

I like Philip E. Agre's first chapter because it clearly points out that language design is a trade-off between abstraction, freedom, and efficiency, with a short computing history showing that serial machines were most successful because of the freedom they brought, and that procedural languages (trivial abstractions of register-based processors) were most successful because, on average, they waste the least power on serial computers.

On the Pliant design page, I state one more assertion about the brain's computational model (a very large set of slow processors, rather than the small set of fast ones we find in today's computers), which is not proved at all:
- serial computation tends to be more reliable, so it is the best option for checking a solution; as an example, a present-day computer can quite easily verify a formal mathematical proof (the sketch after this list tries to make the checking side concrete).
- highly parallel computation (10^10 processors or more) tends to be more efficient at generating candidate solutions (what we call 'intuition', and what would be used to formulate the theorem in the first place).
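
To make the check/generate asymmetry concrete, here is a minimal sketch; it is my own hypothetical illustration, nothing from Pliant or from Agre's chapter. Verifying a candidate truth assignment for a small boolean formula is one cheap serial pass, while finding such an assignment in general means searching an exponential space, which is exactly the kind of work a massively parallel generator could take over:

#!/usr/bin/perl
use strict;
use warnings;

# (x1 OR !x2) AND (x2 OR x3) AND (!x1 OR !x3), in clause form:
# positive numbers are variables, negative numbers are negations.
my @formula = ( [1, -2], [2, 3], [-1, -3] );

# Checking a proposed solution is a single linear scan.
sub satisfies {
    my ($assign, @clauses) = @_;      # $assign->{var} is 0 or 1
    for my $clause (@clauses) {
        my $ok = 0;
        for my $lit (@$clause) {
            my $v = $assign->{abs $lit};
            $ok = 1 if $lit > 0 ? $v : !$v;
        }
        return 0 unless $ok;          # one failed clause kills it
    }
    return 1;
}

print satisfies({ 1 => 1, 2 => 0, 3 => 1 }, @formula), "\n";   # 0
print satisfies({ 1 => 1, 2 => 1, 3 => 0 }, @formula), "\n";   # 1

Generating the assignment, by contrast, has no better general recipe than trying candidates, and trying candidates is trivially parallel.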

So my guess is that at some point we will add highly parallel hardware as a new component of our serial computers, dedicated to generating candidate solutions that the serial part will then verify.

Now, if we get back to today's computers: since we only have serial computation available, efficiency is a great constraint because, as Philip pointed out, serial computers tend to be slow. So the Pliant assertion is that if we want higher-level languages that let us write applications with less programmer effort, the translation to the 'register' computational model must not produce code that is too inefficient; the answer I provided when designing Pliant is that this translation must not be hard-coded in the compiler, but rather be extensible from applications.

So Pliant uses a dual representation of the program, one 'Lisp'-like and one 'C'-like, and the translation mechanism between the two is the key part of Pliant, one that is easily extensible from applications. In other words, when a Pliant application or library introduces a set of new high-level objects, it can tell the computer how these must be handled not only by providing 'functions' or 'methods', which are blind sets of instructions, but also by providing new optimization rules.
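
To show the shape of that idea, here is a minimal hypothetical sketch in Perl; it is not Pliant's actual representation or API, just an illustration under my own assumptions: the program is held in a Lisp-like nested form, and a library registers a lowering rule for its own constructs.

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

# Hypothetical sketch, not Pliant's real machinery: expressions are
# nested array refs [ op, arg, ... ], and libraries can register
# rewrite rules that lower them toward a C-like form.
my @rules;
sub register_rule { push @rules, shift }

# A library-supplied optimization rule: x * 2  ==>  x << 1
register_rule(sub {
    my ($e) = @_;
    return unless ref $e eq 'ARRAY' && @$e == 3;
    return unless $e->[0] eq '*' && !ref $e->[2] && $e->[2] eq '2';
    return [ '<<', $e->[1], 1 ];
});

sub lower {
    my ($e) = @_;
    return $e unless ref $e;                 # atoms pass through
    my $node = [ map { lower($_) } @$e ];    # lower children first
    for my $rule (@rules) {
        my $new = $rule->($node);
        $node = $new if defined $new;
    }
    return $node;
}

# [ '+', [ '*', 'x', 2 ], 'y' ]  lowers to  [ '+', [ '<<', 'x', 1 ], 'y' ]
print Dumper(lower([ '+', [ '*', 'x', 2 ], 'y' ]));

The point of the sketch is only that the rule arrives from outside the 'compiler' (the lower routine), which is what Pliant's extensible translation is about.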

Abstraction is a good thing. Sometimes, posted 13 Jun 2000 at 14:29 UTC by Uruk » (Apprentice)

Well, it's actually a good thing most of the time. Most people don't realize how much computer science is hidden behind a simple line of perl that looks something like:

$variable =~ s/foo/bar/gi;

Nondeterministic finite automata, the Perl parser, the conversion into an internal data representation: all kinds of things are going on behind that type of statement.
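
As a rough illustration of how much work that one line hides, here is a hand-rolled equivalent for the fixed pattern above; this is my own sketch, handling only a literal, case-insensitive, global substitution, whereas the real engine first compiles the pattern into automaton-like opcodes and handles far more than literal strings:

#!/usr/bin/perl
use strict;
use warnings;

my $variable = "Foo fighters eat FOO for breakfast";
my ($pat, $rep) = ('foo', 'bar');

my $out = '';
my $i   = 0;
while ($i + length($pat) <= length($variable)) {
    if (lc(substr($variable, $i, length($pat))) eq lc($pat)) {
        $out .= $rep;                       # matched: emit replacement
        $i   += length($pat);
    } else {
        $out .= substr($variable, $i, 1);   # no match: copy one character
        $i++;
    }
}
$out .= substr($variable, $i);              # tail shorter than the pattern
$variable = $out;

print "$variable\n";    # bar fighters eat bar for breakfast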

Abstraction is a good thing because you can write ridiculously high-level things like the statement above very easily. The bad part is when the implementation of the abstraction is no good. Lots of people complain about the running speed of Python, Perl, Tcl, and Java. Some of the performance hit is probably inescapable, but it can at least be minimized.

Another problem is that sometimes abstraction trips over itself and introduces a whole different set of concepts that are too low-level for comfort. An example of this is the way Java mixes value semantics for primitives with reference semantics for objects, the contortions needed to pass things back through the parameter list, and the distinction between static methods and instance methods. Other languages don't make those distinctions. Sometimes I get the feeling that with Java, they tried so hard to abstract some things that they ended up making a new set of low-level concepts, like the access modifiers on a class, and so on. Not to say that it's hard, because Java is still easier for beginning programmers to learn than, say, C, but the point is that it's harder than it needs to be.

