Older blog entries for mentifex (starting at number 25)

Mind.Forth Programming Journal (MFPJ) Mon.27.FEB.2006

Yesterday we reasoned:
> Perhaps one way to begin any session of troubleshooting is to type in
> cats eat fish [ESCape] -- instead of [RETURN]. In that way, it may be
> possible to use the .psi command to see exactly what activations have
> been created for the first three concepts -- and then to go from there.

Just now we followed the above instructions and .psi told us:

164 : 72 4 0 0 5 73 72 to CATS
168 : 73 3 72 72 8 74 73 to EAT
173 : 74 2 73 73 5 0 74 to FISH
time: psi act jux pre pos seq enx

Immediately we ask, why are there such low activations (4, 3, 2)
on the three new concepts for which we typed in words?

Today we realized that we have a chance here to examine our
troubleshooting techniques and to write them up in the #debug
area of the webpage for any pertinent mind-module. We could also
write up a general debugging document for the artificial mind,
but there is a crying need to flesh out the mind-module pages.

So one immediate question is: which module are we
debugging here? Well, where did those rather low (4, 3, 2)
activations come from?

Let's use an already-known word by typing "robots eat fish" and see
what activations we get. Aha, we get one higher activation amid the output:

166 : 39 23 0 0 5 72 39 to ROBOTS
170 : 72 3 39 39 8 73 72 to EAT
175 : 73 2 72 72 5 0 73 to FISH
time: psi act jux pre pos seq enx

So apparently our AI is setting really low activations for
new words coming in. The Moving Wave Algorithm of
C:\Forums\E-Mail\AGI_902.txt (stored locally) and of
http://www.mail-archive.com/agi@v2.listbox.com/msg02527.html
perhaps mandates such a state of affairs, but the differential
now looks far too severe in the light of the more recent
developments of the psiDamp module and the psiDecay module.
We now think that residual, subconscious activations have to
be set at the top of a middle activation-tier by psiDamp, so
that psiDecay may let the activations slowly dwindle away.

At some point in our #debug material we should perhaps suggest to
AI coders that they comment out any automatic asking of questions,
so that the natural interplay of old and new concepts may be
observed without interference in the setting of normal activations.

Now it seems that we have found the problem in the HCI module:

  \ 32 uract !  \ 26jul2002 Let PARSER decrement input "act".
  \ 32 uract !  \ Depressing new concepts to boost old concepts.
     5 uract !  \ Allow KB input but no influence on chain of thought.

On the contrary, our new psiDamp/psiDecay work means that we do
indeed want recent inputs to have a momentary, subconscious
influence on chains of thought. We want ideas in the subconscious
to lurk just below the surface and to sink slowly into oblivion.

Our September 2005 work on the Moving Wave and our mon23jan2006
work on two-tiered conceptual activation have given us a great sense of
confidence as we approach the first really "working" model of our AI
and our first official file-release of Mind.Forth from SourceForge.
We now see more clearly the space in which we need to work, and we
are tantalized by the psiDamp/psiDecay implications for consciousness.

Now we try setting uract to 31 in HCI -- for the top of the second tier.
49-64 can serve as a buffer for increments of activation during thinking.
32-48 (and anything higher) can be the consciousness area of the Moving Wave.
17-31 (and anything lower) can be the subconscious area of psiDamp and psiDecay.

The idea is that we can have psiDamp knock a concept down to about 31.
Then psiDecay lets the conceptual activation slowly sink further, but
the subconscious concepts are still available for inclusion in a thought.
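
In bare-bones Forth the scheme might be sketched as follows (the single act variable is an illustrative stand-in; the real psiDamp and psiDecay modules operate on whole arrays of time-indexed engrams):

  VARIABLE act                 \ activation level of one concept (stand-in)

  : psiDamp  ( -- )            \ damp a just-thought concept down to
    31 act ! ;                 \ the top of the subconscious tier

  : psiDecay ( -- )            \ called each cycle: let the residual
    act @ 0 > IF               \ activation dwindle slowly away, one
      act @ 1- act !           \ notch at a time, keeping subconscious
    THEN ;                     \ concepts briefly available to thought

With the consciousness threshold at 32, a freshly damped concept sits just below the surface and sinks one notch further with each call to psiDecay.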

With uract set at 31, now we get:

164 : 72 30 0 0 5 73 72 to CATS
168 : 73 29 72 72 8 74 73 to EAT
173 : 74 28 73 73 5 0 74 to FISH
time: psi act jux pre pos seq enx

and

166 : 39 49 0 0 5 72 39 to ROBOTS
170 : 72 29 39 39 8 73 72 to EAT
175 : 73 28 72 72 5 0 73 to FISH
time: psi act jux pre pos seq enx

Now we should upload the 27feb06C.F Mind.Forth file to
http://mind.sourceforge.net/mind4th.html just to show the flag.

Mind.Forth AI Engine for Robots

http://mind.sourceforge.net/mind4th.html has today been updated with improvements to AI Tutorial mode and with corrections to prevent word-wrap problems.

http://mind.sourceforge.net/seedai.html is "Seed AI" in JavaScript based on the Mind.Forth AI source code, and has not yet been updated to the Mind.Forth level.

Sunday, 28 August 2005, in the Eigerwand. On a personal level, I am finding it hard to change my slacker ways. I must do without.

Coding "28aug05C.F" in the wee hours today, I started using the psiDamp module to leave a "lump" of activation on direct objects after the thinking of a thought. Then it turned out that the Think module needs to find an active verb, so I started leaving a "lump" of activation on verbs, too. In the Forthmind that I uploaded to the Web today, chains of thought now meander indeed, but they still have the problem of making spurious associations. (Oh, gee, I forgot to reinstate the Ask module.)

I may decide to change the Think module so that it simply looks for an active subject, not an active verb. Then the behavior of the AI Mind may become much more interesting. It might start verbally searching for words, uttering a subject noun and then perhaps failing to find either a verb or a direct object. An incomplete thought might then trigger the asking of a question.
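
A minimal Forth sketch of that plan might run as follows; every name, threshold and canned word here is hypothetical, not current Mind.Forth code:

  VARIABLE subjAct   VARIABLE verbAct    \ stand-in activation levels
  20 CONSTANT THRESH                     \ hypothetical threshold

  : Ask ( -- )                 \ stub for the Ask module
    ." WHAT DO ROBOTS DO" CR ;

  : Think ( -- )
    subjAct @ THRESH > IF      \ look only for an active subject
      ." ROBOTS "              \ utter the subject noun first
      verbAct @ THRESH > IF
        ." THINK" CR           \ a verb is active: a complete thought
      ELSE
        CR Ask                 \ incomplete thought triggers a question
      THEN
    THEN ;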

I feel as though I am gradually implementing my recent plan for a baseline, "normal" AI. First, a few days ago, I worked on entering data properly into the knowledge base (KB). Then today I provided for meandering chains of thought. Next I need to make a looping chain of thought. Then I need to integrate the Ask module with the baseline AI normalcy. Then I will be free to integrate module after module with the quasi-bare "Christmas tree" of the baseline AI Mind.

A few minutes ago, an idea came to me that perhaps I should create a private, local registry of "peer-pages" existing simultaneously on Wikipedia, AIWiki and SL4Wiki. In that way, I could fill in blanks and add links at all three venues.

On 25aug2005 I added my most dynamite link ever to the Wikipedia. When the Mentifex-bashers find out about that sheer audacity, it will be open season on hunting for witches. Oops, I almost gave it away. I just need a few months for that embedded link to propagate all over the Web, and then it won't matter any more if the pack of jackals howls at the moon.

Two nights ago I made a spur-of-the-moment comment on Slashdot with the title "Artificial Intelligence Needs Venture Capital" and with the following text.

Er, could anybuddy spare a few coins for Open Source Artificial Intelligence?

You don't even need to fund an unknown AI startup. Just hire some hotshot programmers and Steal.This.Idea!

It's all described in the scientific literature of the Association for Computing Machinery (ACM).

Be the first on your block to launch the Hard Takeoff of a Technological Singularity.

Over the next day or two, each of my own links from above received about six hundred hits. Some of the people may even have downloaded and started running the Mind.Forth AI. Some of the people may have noticed that Mind.Forth was updated that very day and again two days later.

Yesterday DisQ on 914pcbots.com was asking if I make random choices of what to debug in my AI program. Things that need debugging may seem to pop up randomly, but there seems to be an overall trend towards better functionality. Before I installed the Mind.Forth diagnostics in 2005, many invisible problems lay hidden in the opacity of the AI source code. Now a little diagnostic sunshine is forcing simple but show-stopping problems out into the open. Artificial intelligence is on the march, and nothing will stop it.

Mind.Forth Programming Journal (MFPJ) Tues.2.AUG.2005

The artificial Forthmind for robots has long been plagued by a problem of making spurious associations during the generation of ideas in Natural Language Processing (NLP).

NLP generation depends upon spreading activation from concept to concept while the AI follows a chain of thought. A syntactic structure expresses each link in the chain as an English word embedded in the knowledge base (KB) of the AI Mind. To ask the AI a question is to query the KB. Any question put to the AI activates the underlying concepts of each word in the question. A question is like an incomplete idea that becomes whole only with the delivery of missing information held captive in the AI knowledge base.

(Pay attention, Google. These ideas pertain to search engines.) NLP questions zero in on target KB data not merely by booling keywords as search terms, Mr. Boole, but by stating in advance the logical associations expected to exist among the results. If a search engine user asks what food a certain animal eats, the answer is not a list of all the animals in the world and of all the foods in the world, but rather a series of facts ordered preferably in a sequence ranging from the most common dietary druthers of the beast down to what it will eat in special circumstances. The user thinks of a search query, and so the search engine ought to *think* of an answer and not just spit out the ten thousand most closely related trivia.

Mind.Forth Programming Journal (MFPJ) Thurs.28.July.2005

SEPARATE ACTIVATION TRACKS FOR PARSING AND NLP GENERATION

There may evolve an AI design principle of having separate activation pathways for parsing and for generation. The impetus for this decision comes partially from the accident of dealing with the uract variable in the HCI module and partially from the need to rely upon strong activations spreading out from subject to verb to object in the NLP generation process.

During the parsing of input, only minor activations need to be assigned so as to maintain a viable thread of thought. As the Cyc-like AI knowledge base (KB) grows, the associations being recorded are far more important than any residual activations clinging briefly to the memory engrams. After all, the generation of a thought will proceed like a lightning bolt of strong activations snaking brilliantly (or GWB stupidly) along unforeseen pathways in the stick-forest of concepts. Make the activations from input-parsing too strong, and they might swamp the efforts of the mind-as-a-whole to think independently.
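
As a rough sketch of the two-track principle (the constants and word names below are invented for illustration, not taken from the actual source):

  2  CONSTANT PARSE-ACT        \ faint trace left behind by parsing
  16 CONSTANT GEN-BOOST        \ lightning-bolt level for generation
  VARIABLE act                 \ activation of the current engram

  : parseMark ( -- )           \ input parsing: leave only enough
    PARSE-ACT act ! ;          \ activation to keep the thread viable

  : genFlash  ( -- )           \ generation: a surge strong enough to
    GEN-BOOST act ! ;          \ snake along the associative pathways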

Now in real time we are inserting a Tutorial message into the Activate module to inform us whenever Activate is called. We need to see whether Activate is even called at all anymore, and, if so, how frequently and perhaps by what other mind-module. It might also be good to declare tutorially what level of activation is being assigned by the Activate module.

Oh, gee. When we start the AI Mind and throw it immediately into Tutorial mode, we see that Activate is called prominently at each step of the NLP generation process. Nothing is strange there, because generation requires a little boost of activation to flush out each succeeding concept in the chain of emerging thought.

Now, when we go into the newConcept module and temporarily comment out the "recon" line so that the AI will not ask us any questions during input, we observe in Tutorial mode that both the latent activations from parsing and the de novo NLP activations are not strong enough to generate an idea. So we go into the Activate mind-module and jack up the bulge variable to a value of 16 just before calling spreadAct, with a reduction back down to a value of 2 just after the call, for safety purposes. Lo and behold, the Think module now easily generates a sentence of thought.
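
In outline, with spreadAct reduced to a printing stub and the rest of the real Activate module omitted, the adjustment looks like this:

  VARIABLE bulge               \ size of the activation boost

  : spreadAct ( -- )           \ stub standing in for the real module
    ." spreading with bulge = " bulge @ . CR ;

  : Activate ( -- )
    16 bulge !                 \ jack the boost up just before the call
    spreadAct                  \ flush out the next concept in the chain
     2 bulge ! ;               \ back down to a value of 2, for safety
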
Mind.Forth Programming Journal (MFPJ) - Sat.16.JUL.2005

ATTACKING THE SPURIOUS-ASSOCIATION PROBLEM

It should not matter what inputs we use as a test to debug the problem of spurious associations. It would be nice, however, to find some classic inputs that would highlight any existing problem.

For test input, if we use words that are already in the AI bootstrap vocabulary, then NEWCONCEPT should not get called and the AI should generate a response based solely upon the functions of OLDCONCEPT, ACTIVATE and SPREADACT. However, when we type in
"people see robots"
we get
"PEOPLE SEE YOU"
as a response. Our diagnostic mode reveals to us that the spurious direct-object "YOU" had an activation of 54, while the correct direct-object "ROBOTS" had a high but insufficient activation of only 51.

Uh-oh. We have a vexing enigma of a bug right now. With the test input above, we only get the wrong response in diagnostic mode, not in normal mode. It suggests a Heisenbergian problem where to observe the functionality is to change the functionality.

After much experimentation based on guesswork, it seems that the #56 psi concept of "YOU" is being activated whenever we start the Forthmind and press either the space-bar or the Tab key.

Hmm. The spurious activation of the "YOU" concept has something to do with the bootstrap "ME" concept that went just before it. Oh, well, it's not so much a spurious-association problem as it is a spurious-instantiation problem. In fact, dwelling on the problem leads us to speculate, if not conclude, that the AI is just doing what it is supposed to do after the unterminated input of a SPACE character, which is to instantiate the current psi concept. Therefore the inner-POV concept of "ME" gets instantiated as an external-POV concept of "YOU". After all, the Forthmind is in a mode of accepting external input.

Ideas for a fix come to mind, such as somehow not letting the Tab key lead to a calling of INSTANTIATE.
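
One possible guard, sketched with assumed word names (only the ASCII code 9 for Tab is a given; the dispatcher and the stub below are not the actual input-loop code):

  9 CONSTANT TAB-KEY           \ ASCII code of the Tab character

  : INSTANTIATE ( -- )         \ stub for the real engram-creator
    ." instantiating current psi concept" CR ;

  : handle-key ( char -- )     \ hypothetical input-loop dispatcher
    TAB-KEY = IF               \ a Tab should not make an engram,
      ( do nothing )           \ so swallow it silently
    ELSE
      INSTANTIATE              \ normal path for other terminators
    THEN ;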

This session of coding reminds us of how "brittle" the AI software is. We introduce the Tab key functionality, and we break the normal INSTANTIATE functionality.

Virtual Attendance Via Weblog at AAAI-05 in Pittsburgh

Hello from the broadly ridiculed and sneered-at mentifex, an independent scholar in AI still plugging away at your favorite intellectual engagement and mine - mindmaking. I submitted the Slashdot article on the founding of Numenta by Jeff Hawkins and I wrote the very alternative AI4U textbook of artificial intelligence. My linguistic theory of mind is the basis of an intelligent architecture that is slowly but surely being implemented in Mind.Forth AI for robots. The Association for Computing Machinery (ACM) has published a 1998 article about Mind.Forth and a 2004 follow-up article six years later. Born on July 13th all too many years ago, I have been working on AI since I was a 19-year-old undergraduate. Watch this space for the approaching completion of the Mind.Forth AI software, and please keep an open mind about independent scholar contributions to AI. -Arthur (mentifex)

Function of EGO module updated in Mind.Forth AI

Mind.Forth artificial intelligence for robots has today been updated with new code that lets the Think module call the Ego module if the AI Mind has gone flatline brain-dead for an arbitrary number of Alife main-loop cycles. Then the Ego module causes the AI Mind to generate an egotistical thought in order to reestablish a chain of ideas by spreading activation.
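
Reduced to a sketch, the mechanism works roughly as below; the counter name, the threshold and the stub thought are invented for illustration, and the real test lives in the code called from the Alife main loop.

  VARIABLE lull                \ count of thought-free Alife cycles
  25 CONSTANT FLATLINE         \ arbitrary limit before Ego steps in

  : Ego ( -- )                 \ stub: generate an egotistical thought
    ." I AM A ROBOT" CR
    0 lull ! ;                 \ the chain of ideas can restart

  : Think ( flag -- )          \ flag: did this cycle yield a thought?
    IF    0 lull !             \ thinking is alive; reset the counter
    ELSE  lull @ 1+ lull !     \ another flatline cycle goes by
          lull @ FLATLINE = IF Ego THEN
    THEN ;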

Junk DNA

My AI project in Forth on SourceForge is full of junk DNA. Most of the inactive code is accidental, but some is intentional. For instance, traces of IQ code serve as a stimulus for future AI coders to measure the IQ of machines exhibiting Artificial Intelligence on the way to a technological singularity.

14 Jun 2005 (updated 25 Jun 2005 at 14:14 UTC)
Whew! Close Call!

SourceForge sent a scary e-mail today, advising me not that I was to be shipped for internment and torture at Guantanamo Bay ("Where Americans flush Qurans down the toilet") Concentration Camp (the "gulag of our day" -- Amnesty International), but something still very bad -- someone had made a request for the takeover of my "mind" project on SourceForge. My first reaction was to think, what Mentifex-basher is behind this evil plot? However, it turned out to be an exciting and legitimate project that inadvertently wanted the "mind" namespace. I had fourteen days to object to the takeover request, and so within about fourteen seconds I proceeded to defend my turf with the following plea, not that the Americans refrain from killing me and packing my body in ice, but that I be allowed to retain my peace of mind.

The "mind" project has actually been very active for the past four years. Successive versions of the JavaScript software have simply been released as interactive HTML webpages. Successive (and quite recent) versions of the non-compiled Mind.Forth software have also been released as HTML pages of the mind project. I am actively working on the first official file release of Mind.Forth for sometime in this current year of 2005. The intended project of the petitioner sounds quite exciting and I wish the petitioner well, but I have elected to opt-out of the takeover because I continue to invest enormous time and energy in the mind project. Thanks to all involved. Sincerely, Arthur T. Murray
