25 Nov 2006 mentifex   » (Master)

Artificial Intelligence Troubleshooting and Robotic Psychosurgery

The Instantiate module is so simple and straightforward that, instead of malfunctioning itself, it is more likely to develop problems caused by other modules such as newConcept and oldConcept, which prepare the associative-tag data that will be assigned during the operation of the Instantiate module. Nevertheless, the overall functionality of an AI Mind may develop bugs so mysteriously hidden and so difficult to troubleshoot that a savvy Mind whisperer or AI coder extraordinaire will know from expert experience that it pays to troubleshoot the Instantiate module, which is probably not malfunctioning, in order to track down elusive bugs which drive bourgeois, clock-watching AI code maintainers to distraction and despair. When management finally calls in the True AI expert, the schadenfreudig hired hands will crowd around the high-security superuser debugging console and hope to watch the AI coding legend fail as miserably as they have.
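The division of labor described above can be sketched in JavaScript. This is only an illustrative assumption about the architecture, not the actual Mind.html code: the function and field names (Psi, nen, act, pre, seq) are hypothetical stand-ins for whatever flag-panel the real newConcept and oldConcept modules prepare before Instantiate records it.

```javascript
// Hedged sketch: upstream modules (newConcept, oldConcept) prepare the
// associative-tag values; Instantiate merely records them. All names and
// fields here are illustrative assumptions, not the real codebase.
var Psi = [];          // conceptual array of flag-panels
var t = 0;             // advancing time-point counter

function Instantiate(nen, act, pre, seq) {
  // Store one concept node with the associative tags prepared upstream.
  t = t + 1;
  Psi[t] = { t: t, nen: nen, act: act, pre: pre, seq: seq };
  return Psi[t];
}

// oldConcept, having recognized concept #58, might then call:
var node = Instantiate(58, 32, 0, 66);
```

The point of the sketch is that a bug surfacing inside Instantiate is usually a bad parameter handed down from above, which is why troubleshooting starts here and works backwards.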

Years later the stories will still be told about how the obviously inept and overpaid AI guru wasted everybody's time by troubleshooting the [fail-safe] Instantiate module and somehow miraculously found the bug that nobody else could even describe let alone pinpoint, thus fixing the unfixable and saving the mission-critical stream of AI consciousness that was threatening mayhem if not TEOTWAWKI -- the end of the world as we know it.


A guruvy way to troubleshoot Instantiate is to temporarily set the AI software to come up already running in the "Diagnostic" troubleshooting mode. In either Forth or JavaScript, the same technique that starts the AI Mind in tutorial mode may be used to start the AI in diagnostic mode. In JavaScript it may be the judicious use of "CHECKED" in the control panel code, and in Forth it may be the setting of a numeric mode-variable. In Forth one may also halt the AI and run the ".psi" diagnostic tool.
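A minimal sketch of the mode-flag technique follows. The flag name nodm and the reporting function are assumptions for illustration; in the real Mind.html the flag would be tied to a CHECKED radio button in the control panel, and in Forth it would be a numeric mode-variable tested the same way.

```javascript
// Hypothetical sketch: a diagnostic-mode flag set at startup, analogous
// to pre-selecting a CHECKED checkbox in the JavaScript control panel.
var nodm = true;  // true = come up already running in Diagnostic mode

function reportPsi(psi) {
  // In diagnostic mode, dump each concept node as it is instantiated;
  // in normal mode, stay silent so the user sees only the AI output.
  if (nodm) {
    return "t=" + psi.t + " psi=" + psi.nen + " act=" + psi.act;
  }
  return "";
}

console.log(reportPsi({ t: 42, nen: 58, act: 32 }));
```

Starting the Mind with the flag already set spares the troubleshooter from having to catch the bug after manually switching modes mid-run.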

The overall Mind functionality bugs that can be tracked down by troubleshooting Instantiate can be maddeningly difficult to diagnose and tend to have an etiology rooted either in the [spreading activation] subsystem or in the associative-tag subsystem that comes to a head in the Instantiate mind-module. AI code maintainers may be so accustomed to looking for thought-bugs in the activation subsystem that they forget to consider the associative subsystem on which all the activation-levels are riding. It does no good to have perfectly tuned conceptual activation parameters if the associative routing mechanisms are out of whack.

A systemic Mind bug is evident when the AI fails to maintain a meandering chain of thought or outputs spurious associations instead of the true and logical associations that would reflect accurately the knowledge base (KB) accumulating in the AI Mind.

One troubleshoots the associative Instantiate subsystem by examining the contents of the conceptual Psi array as shown in diagnostic mode. For every thought generated by the AI, there must be a record of its mental construction archived in the panel of associative tags for each constituent concept. A proficient AI coder is able to examine the associative-tag engrams and reconstruct the thought in natural human language. An even savvier AI guru will check not only the immediate area of a spurious thought for clues about what went wrong, but will identify the designations of the concepts involved and will search out nearby instances of the same concepts to make sure that the appropriate associative tags are being assigned properly by the Instantiate module in conjunction with other mind-modules that prepare the tags for assignment.
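The "search out nearby instances" step can be sketched as a scan over the Psi array. Again, the field names (t, nen, pre, seq) are assumptions modeled on a generic flag-panel, not the actual Mentifex data structures; the sketch only shows the technique of comparing the tags of every instance of one concept.

```javascript
// Hypothetical ".psi"-style diagnostic: scan the conceptual Psi array
// and list every instance of a given concept number, so the associative
// tags assigned by Instantiate can be compared across time-points.
var Psi = [
  { t: 10, nen: 58, pre: 0,  seq: 66 },  // concept #58 at time-point 10
  { t: 11, nen: 66, pre: 58, seq: 0  },
  { t: 20, nen: 58, pre: 0,  seq: 73 }   // concept #58 again at t=20
];

function findInstances(nen) {
  // Gather every time-point where concept "nen" was instantiated.
  return Psi.filter(function (node) { return node.nen === nen; });
}

findInstances(58).forEach(function (node) {
  console.log("t=" + node.t + " pre=" + node.pre + " seq=" + node.seq);
});
```

If one instance carries a seq tag that its siblings lack, the discrepancy points at whichever upstream module prepared the tags for that particular instantiation.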

It is definitely not the case that an AI Mind, once it is functioning properly, will never again suffer systemic Mind bugs. Adding any new functionality to a primitive AI Mind potentially upsets the system as a whole and permits the [emergence] of either conceptual activation bugs or associative mindgrid bugs. Porting an AI Mind from one programming language to another is almost sure to cause systemic Mind bugs. Installing an AI Mind in a robot embodiment may engender systemic Mind bugs. Comes a Singularity, nothing can be done to keep a self-modifying AI codebase from suicidal extinction by genetic trial and error.
