Older blog entries for mentifex (starting at number 37)

MileStones

#summary Stages of development already achieved and yet to be achieved by MindForth


AiEvolution AiHasBeenSolved RoadMap TimeLine

An artificial general intelligence (AGI) must pass through various stages of development on the way from the start of an AGI project to a fully realized AGI.

== Already Achieved ==

=== QuIckening ===

=== GenerationOfThought ===

=== CognitiveChainReaction ===

=== MeanderingChain of Thought ===

=== SelfReferentialThought ===

== Yet to be Achieved ==

=== EmBodiment in a RoBot ===

=== PortingOfCode ===

=== ConSciousness ===

=== PermanentInstallation ===

=== DiaSpora ===

=== SuperComputer Installation ===

=== MetEmPsychosis ===

=== ProliFeration ===

=== MasPar ===

=== HumanLevel ===

=== RecursiveSelfEnhancement ===

=== SuperIntelligence ===

=== TechnologicalSingularity ===

== OutReach ==

http://agi-roadmap.org/Milestones

== Memespace ==

CognitiveArchitecture DreamTeam HardTakeoff ImMortality IndustrialEspionage JointStewardship LandRush OldestLivingAiMind OpenSource ScienceMuseums SeTi TippingPoint UserManual VentureCapital VpAi WikiPedia

Labels: future overview roadmap

20 May 2009 (updated 20 May 2009 at 19:43 UTC)
MindForth Programming Journal - wed20may2009

1. Wed.20.MAY.2009 -- AI MINDS FOR CONSCIOUS ROBOTS

So many robots need to have an AI Mind installed, and MindForth is so tantamount to the VisiCalc of artificial intelligence, that we now rush to add feature after feature to the budding robot AI. Recently we made MindForth able to InStantiate a singular noun upon encountering a plural English noun in the auditory robotic input stream. If you tell the robot AI4U Mind something about birds, it now sets up the singular noun "bird" as a concept. Then we encoded an algorithm that assumes, from input of the article "a", that the next noun after "a" is a singular noun. If you say that you wish to manufacture "a conscious robot", the article "a" sets a flag that skips the adjective "conscious" and assigns the singular "num(ber)" to whatever noun (e.g., "robot") comes next. (And with AI4U technology we are indeed helping you to manufacture a conscious robot.) Next we need to ensure that the emergingly conscious AI Mind will use the right form of an English verb when, for example, "it talks" about a singular noun. Simply put, the software "needs" to put "s" on the end of a verb after a third-person singular noun.
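
In sketch form, the article-driven expectation might look something like the following (the flag name and the use of the "pos" and "num" variables are illustrative assumptions, not the actual MindForth source; the part-of-speech values for article and noun follow the .psi column conventions used in these journal entries):

variable anumflag  ( hypothetical flag: set to 1 when the article "a" comes in )

: NumSketch  ( -- )     \ hypothetical; called as each word of input is classified
  pos @ 1 = IF  1 anumflag !  THEN  \ the article "a" primes the expectation
  pos @ 5 = IF            \ when the next noun arrives...
    anumflag @ 1 = IF     \ ...with the expectation still primed,
      1 num !             \ assign the singular num(ber) to the noun
      0 anumflag !        \ and reset the flag after use.
    THEN
  THEN ;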

2. Wed.20.MAY.2009 -- THIRD-PERSON SINGULAR VERB INFLECTION

The "nphrnum" variable is set in the NounPhrase module and keeps track of whether a noun is singular or plural. The "vpos" variable is set in VerbPhrase and is used in the following SpeechAct code.


pho @ 32 = IF  \ 20may2009 If engram is a blank space...
  vpos @ 1 = IF    \ 20may2009 If a verb is being spoken
    nphrnum @ 1 = IF  \ 20may2009 If subject is singular
      subjpsi @ 50 = NOT IF  \ 20may2009 If subject not "I"
        subjpsi @ 56 = NOT IF  \ 20may2009 If not "YOU" talking
          83 pho !  \ 20may2009 Insert inflectional "S" pho.
          1 spacegap !  \ 20may2009 Prepare one space-gap.
          0 vpos !    \ 20may2009 Reset after use
          0 nphrnum !  \ 20may2009 Reset after use.
        THEN  \ 20may2009 End of test to avoid subject "YOU"
      THEN  \ 20may2009 End of test to avoid subject "I"
    THEN  \ 20may2009 End of test for a singular subject
  THEN  \ 20may2009 End of test for a verb being spoken
  pho @ EMIT  ( say or display "pho" )
  1 audstop !  \ A flag to stop SpeechAct after one word
THEN \ 1jan2008 One last call to Audition
35 pov !  ( internal point-of-view ASCII 35 "#" like mindgrid )
AudInput    ( 16oct2008 for reentry of a thought back into the mind )
audstop @ 1 = IF  \ 20may2009 Adding one pho=32 space bar
  spacegap @ 1 = IF  \ 20may2009 If an "S" has been added...
    32 pho !  \ 20may2009 Carry pho=32 "space" into AudInput
    AudInput  \ 20may2009 For the reentry of one space.
    0 spacegap !  \ 20may2009 Reset spacegap after use.
  THEN   \ 20may2009 End of test for spacegap of one space.
  LEAVE  \ 20may2009 Abandon the looping through auditory memory
THEN  \ 1jan2008 Having spoken one word.

The above code not only adds an "S" to a standard English verb being used in the third person singular, but also causes the proper reentry of the inflected verb form back into the AI Mind. Whereas only the stem of the verb is retrieved from auditory memory, after the addition of "S" for inflection during thought, the inflected form of the verb now enters the auditory memory.

http://www.scn.org/~mentifex/mindforth.txt
http://mentifex.virtualentity.com/m4thuser.html

MindForth Programming Journal - mon11may2009

1. Mon.11.MAY.2009 -- ATTACKING THE SINGULAR-STEM PROBLEM

Today we would like to work on getting the AI to recognize both singular stems and plural forms of a standard English noun. Perhaps we will start out by trying to see if we can have the AI instantiate nouns ending in "s" while going back and assigning the "audpsi" ultimate tag to the penultimate phoneme, which is the end of the singular stem.

Looking at the AudMem code, we realize that we need to get several new influences in there to cause the AudMem module to dip back one unit in time and assign the "audpsi" value to the penultimate phoneme that marks the end of the stem. We want the word to be a noun and to be ending in "s". We might get away with disregarding whether the word is a noun. We could just look for all words ending in "s" and then we could plunk down the "audpsi" ultimate tag not only on the final "s" phoneme but also on the penultimate phoneme. Serendipitously, in this way we would also manage to tag the stem of a third-person singular verb with an audpsi, as in, "He works," where a penultimate audpsi would identify the concept. So we get out our ASCII chart and we see that uppercase "S" is "83" in ASCII. We will test the end of words for an "S" and assign the "audpsi" value to both the final and the penultimate phoneme.
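
In sketch form (hypothetical code, not yet the actual AudMem source), the double tagging would amount to the following:

pho @ 83 = IF                \ if the incoming phoneme is "S" (ASCII 83)
  audpsi @ t @    5 aud{ !   \ ultimate tag on the final "S" engram
  audpsi @ t @ 1- 5 aud{ !   \ same tag on the penultimate stem-end
THEN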

2. Tues.12.MAY.2009 -- SPOOFING THE INPUT STREAM

We will try now to achieve singular-stem recognition from plural nouns not only by putting an audpsi ultimate-tag on the penultimate phoneme, but also by setting the "ctu" continuation flag to zero ("0") in the penultimate position, so that our software will "think" that it has recognized a whole word instead of just a stem inside a noun-plural.
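
A minimal sketch of the spoofing idea (again hypothetical, not the final code):

pho @ 83 = IF        \ upon a word-final "S"...
  0 t @ 1- 4 aud{ !  \ ...set ctu=0 at the penultimate phoneme,
THEN                 \ so the stem looks like a recognized whole word.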

In the AudInput module, we have been putting our new code into the area for internal reentry. Perhaps the new code belongs in the area for external input.

Hey, perhaps all this new code should be in NewConcept, because we are trying to deal with previously unknown noun-stems.

Gee, we might try something really radical. We have created a variable


variable newpsi   ( 12may2009 for singular-nounstem assignments )

so that we can be sure to assign "ctu=0" and "audpsi" only to noun-stems of a plural word coming in during user input. It might be quite radical, but useful, to put the "newpsi" value just before all incoming "S" phonemes, not just final, end-of-word "S" phonemes. We would assume that such assignments would not cause any problems for "ctu=1" word-engrams. Then, when the word-engram was finalized, we could go back in and set the "ctu=0" value to permit future recognition of the noun-stem.

3. Wed.13.MAY.2009 -- ALMOST CODE-COMPLETE

The variable "newpsi" obtains no positive (above zero) value until NewConcept is called -- which happens when?

With a new "wordend" variable we finally achieved the basic functionality that we have been seeking for the past three days, but with a few minor glitches. We used the following AudMem code.


    \ 13may2009 In AudMem as called towards end of AudInput:
    pov @ 42 = IF  \ 12may2009 Only during external input
      pho @ 83 = IF  \ 12may2009 If phoneme is "S"
        \ CR ." S pho & newpsi = " pho @ . ." " newpsi @ .  \ 12may2009 test
        \ ." time t = " t @ .  \ 12may2009 test
        newpsi @ t @ 1- 5 aud{ !  \ 12may2009 pre-"S" audpsi
        wordend @ 1 = IF  \ 13may2009 If word has ended
          CR ." audpsi = " audpsi @ .  \ 13may2009 a test.
          \ audpsi @ 0 = IF  \ 13may2009 Change ctu only for new words.
          0 t @ 1- 4 aud{ !  \ 13may2009 As if "no continuation".
          \ THEN  \ 13may2009 End of test for known word.
        THEN  \ 13may2009 End of test for end of word
        0 newpsi !  \ 12may2009 Reset for safety.
      THEN  \ 12may2009 End of test for "S"
    THEN  \ 12may2009 End of test for external input.

In an upcoming version of MindForth, we need to overcome the minor glitches. One glitch is that the AI is setting "ctu" to zero on both the penultimate and ultimate array-rows of a plural word that has just previously been learned. We would prefer that the known plural word have ctu=0 only in the final row. Another glitch is that the new code works only after a previously unknown verb is used. It should be relatively simple to remove that particular glitch.

4. Thurs.14.MAY.2009 -- STUMPED AND STYMIED

Of the two glitches we need to work on, the more important one, and also probably easier to solve, is the problem of the new code not working with a known verb from the EnBoot English bootstrap.

In the following transcript, the new stem-rec code does not work after we use the English bootstrap verb "know".


Transcript of AI Mind interview at 7 39 3 o'clock on 14 May 2009.
i know books
 S pho & newpsi = 83   0 time t = 215


Robot: BOOKS WHAT ARE BOOKS

When we use the previously unknown verb "use" in the following transcript, the stem-recognition code works just fine. Why? What is the difference between using an old or a new verb? Here we say "i use books" to the AI Mind.


Transcript of AI Mind interview at 7 40 7 o'clock on 14 May 2009.
i us
 S pho & newpsi = 83   0 time t = 207 e books
 S pho & newpsi = 83   77 time t = 214
 audpsi = 0


Robot: BOOKS WHAT ARE BOOKS

There must be a hidden influence in either OldConcept, or NewConcept, or both, because one or the other module is invoked for the verb, depending upon whether the verb is "old" or "new."

By means of some diagnostic code in AudMem, we have just learned that the "newpsi" variable has a value of zero after a verb from the English bootstrap.


Transcript of AI Mind interview at 21 8 34 o'clock on 14 May 2009.
i know books
 AudMem: backstoring newpsi 0

We may want to troubleshoot the "newpsi", or perhaps just replace it with a "stempsi" variable.

5. Fri.15.MAY.2009 -- PINPOINTING THE PROBLEM

Theoretically we should be able to see an audpsi Aud{ engram and be able to figure out exactly how and why that particular value got placed there. But we have been having extreme difficulty over this past week.

Bingo! In the "ELSE" (if no old concept, declare new concept) area of AudInput, we have found one ancient line of code that has been causing all our grief for the past week.


            nen @  tult @  5  aud{ !  \ Store new concept psi-tag.
        THEN          \ end of test for a positive-length word;
      THEN            \ end of test for the presence of a move-tag;
      AudDamp         \ Zero out the auditory engrams.
That top line in the snippet above contains stray white space that kept it from showing up when we searched for "5 aud{ !" in the source code.

Okay, now we actually have to rename 14may09B.F as 15may09A.F and continue working with the new version designated properly for today, because now we have an actual prospect of implementing a correct algorithm for recognizing singular noun-stems within new plural nouns.

Well, we had a good scare in our maintaining of functionality today. Apparently the following block of new code in the AudInput module was making our AI lose its ability to recognize "I" properly. When we comment out the code below, the ability comes back.


    pho @ 83 = IF  \ 15may2009 If the word ends in "S"
      ctu @ 0 = IF  \ 15may2009 If word is ending
        0 t @ 1- 4 aud{ !  \ 15may2009 As if "no continuation".
      THEN  \ 15may2009 End of non-continuation test
    THEN  \ 15may2009 End of test for "S"

The problem caused us to backtrack to 14may09B.F and use it to create 15may09B.F, which we deleted after we identified the problem as presented above. In a short while of coding 15may09A.F we added some useful code that we did not want to have to re-create, so we persisted in troubleshooting our AI.

6. Fri.15.MAY.2009 -- REMARKS

The current 15may09A.F code has a lot of "Junk DNA" in it, because it took us several days to locate and fix the problem. Now we need to gradually remove the many instances of test code, and devise a solution for the glitch of not always having the desired penultimate setting of "ctu" from one to zero.

7. Sat.16.MAY.2009 -- ACCEPTING A RADICAL CHANGE

Today we will re-constitute 15may09B.F as a clone of 15may09A.F and we will strip away the excessive noun-stem-related comments. Then we will name a new copy of the cleaned-up code as 16may09A.F, so that we can continue coding while still having the cleaned-up code in the 15may09B.F archive.

Here is a plan. In AudMem we could constantly test for S=83 and set ctu to zero upon finding "S", while also going back and switching changed ctu values back again to "1". To avoid going back too far, we could re-switch the changed ctu values merely upon finding a non-end-of-word.

Or we could make ctu=0 the default, constantly switching it to one retroactively, except when an "S" is encountered. Or we could change the whole AudMem system, and make it include a pho=32 space-bar at the end of each word, so that we would not have to do much retroactive adjustment.
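
In rough sketch form (hypothetical, since neither plan was adopted verbatim), the first plan amounts to the following:

pho @ 83 = IF          \ upon any "S"...
  0 t @ 4 aud{ !       \ ...tentatively set ctu=0 at the "S";
ELSE
  pho @ 32 = NOT IF    \ upon a non-space, the word is continuing,
    1 t @ 1- 4 aud{ !  \ so switch the previous ctu back to one.
  THEN
THEN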

8. Sat.16.MAY.2009 -- MODIFYING THE AUDITORY MEMORY

Let's just jump right in and see what happens when we include an ASCII 32 space-bar at the end of each word and as part of each word. We are eager to hurry up and put some new AI up on the Web. The sooner we get the terminated-word code up on the Web, the sooner we establish the design as a kind of standard for AI prototypes.

Now we have gone in and added an extra row to each word in the EnBoot sequence. We will try to run the code, but we do not expect it to work.

Hmm.... The code did indeed run, but the thinking had gone haywire. We achieved no cognitive chain reaction, that is, we were not able to enter four sentences and get the AI to think in an "infinite" loop. But now we get to troubleshoot and debug. The chore of changing the EnBoot sequence has been done. We just have to make the rest of the program adjust to the changed EnBoot.

Now we are going to change some AudInput code that reacts to a space-bar pho=32. Instead of retroactively setting the ctu-flag to zero at "tult", we are going to set the ctu-flag at the current "t" time, because each word is surely at an end now.
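
A sketch of the intended change (hypothetical code, not the final version):

pho @ 32 = IF      \ if the incoming phoneme is the space-bar...
  0 t @ 4 aud{ !   \ ...set ctu=0 at the current time "t",
THEN               \ because each word is surely at an end now.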

Next we had better tend to problems in the AudMem code, because the EnBoot module is no longer filling in the audpsi concept numbers in the auditory memory channel. Therefore none of the bootstrap words are being recognized.

The major problem right now after the EnBoot change-over is that the AI is not recognizing any words. We may have to troubleshoot the AudRecog module.

9. Sun.17.MAY.2009 -- RE-ESTABLISHING AudRecog FUNCTIONALITY

There is obviously some tiny little glitch preventing the new, EnBoot-altered MindForth from recognizing a single word. Here we type in "you and i".


Transcript of AI Mind interview at 6 52 53 o'clock on 17 May 2009.
yARc:0 ARc:0
audpsi=0 oARc:0 ARc:0 ARc:ctu=1
audpsi=0 uARc:ctu=1
audpsi=0
audpsi=0 aARc:0 ARc:0 ARc:0 ARc:0 ARc:0
audpsi=0 nARc:0 ARc:0 ARc:ctu=1 ARc:0
audpsi=0 dARc:0 ARc:ctu=1
audpsi=0
audpsi=0 iARc:0 ARc:0 ARc:0 ARc:0 ARc:0 ARc:0
audpsi=0

There must have been two instances of initial "Y" in auditory memory, for us to see two diagnostic messages. Or maybe they were just general instances of "Y". The words "YES" and "YOU" in the EnBoot sequence have initial "Y", and the words "WHY" and "THEY" have non-initial "Y".

Maybe we should start (or resume?) putting commented-out diagnostic tools inside the crucial AudRecog module, so that we may quickly troubleshoot any future problems.

When we knock out AudDamp temporarily in order to see what activations are building up on auditory engrams during the recognition of input "you", we see the following differential build-up on the EnBoot engram of YOU.


74 Y 0 # 1 1 0
75 O 8 # 0 1 0
76 U 10 # 0 1 0
77   10 # 0 0 56

That record shows a good, healthy build-up.

10. Sun.17.MAY.2009 -- REVERTING TO THE OLD EnBoot

It is proving too hard to get the auditory memory to include ASCII pho=32 spaces as the final element in each English word. Therefore we are abandoning the code of last night and today and we are reverting to the 15may09B.F cleaned-up version.

The "ctu" value is rather sacred, because it plays a central role in the recognition of a word as an "audpsi" concept. Whether we enjoy it or not, we will have to do some retroactive resetting of "ctu".

For the setting of "ctu" in current circumstances, the most important thing is that a final "S" comes in, as shown by a terminating pho=32. Therefore, without relying on "wordend", we should simply trap for "S" and for pho=32. When pho is 32, we should see if the "prepho" is ASCII S-83. So we need to have prepho available. Actually, we need a system that keeps track of three elements: the current pho=32; the previous "S"; and the element before "S". Or do we? We need to see that it is at least positive.
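
The bookkeeping for "prepho" is simple; in sketch form (hypothetical placement, at the end of each AudInput pass):

variable prepho  ( holds the previous phoneme )

pho @ prepho !   \ remember the current pho for the next cycle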

11. Sun.17.MAY.2009 -- SOLVING THE SINGULAR NOUN-STEM PROBLEM

After seven days of arduous AI coding, we seem finally to have solved the problem of getting the AI to accept a plural English noun ending in "s" while assigning the concept number to the singular stem. We used the following AudInput test-code in the area that deals with pov=42 external input, not with internal reentry, because new words are not learned during the reentry of thoughts.


      \ 17may2009 Testing for SP-32 or CR-13:
      pho @ 32 = pho @ 13 = OR IF  \ 17may2009
        pho @ 13 = IF 10 EMIT THEN  \ 17may2009
        ." AIptExtPho=" pho @ .    \ 17may2009 test
        ."  AIptExtPrepho=" prepho @ .    \ 17may2009 test
        prepho @ 83 = IF  \ 17may2009 If previous pho was "S"
          \ 17may2009 In next line, time t may not have advanced.
          0 t @ 1 - 4 aud{ !  \ 17may2009 set ctu=0 before "S" end.
          0 prepho !  \ 17may2009 Zero out prepho after use.
        THEN  \ 17may2009 End of test for external final "S"
      THEN  \ 17may2009 End of test for external space-bar 32.

http://www.scn.org/~mentifex/mindforth.txt

MindForth Programming Journal - sun10may2009

1. Sun.10.MAY.2009 -- RESTORING KbTraversal FUNCTIONALITY

Yesterday in 9may09A.F we further de-globalized the psi-group of variables by reclaiming "oldpsi" from the SpreadAct module for use in the OldConcept module. We created "cogpsi" for use in SpreadAct. A lot of our compromised functionality returned when we had de-globalized OldConcept, but KbTraversal stopped working, so today we need to troubleshoot KbTraversal.

We went into KbTraversal and we used "nacpsi" instead of "psi" just before calling NounAct. Thus we restored the functionality of KbTraversal.

2. Sun.10.MAY.2009 -- RESTORING who-are-you FUNCTIONALITY

When we delve back into the who-are-you problem, we discover from the following .psi data that we have lost the ability of the AI to answer a who-are-you query.

23apr09A.F:
207 : 55 13 0 0 0 5 67 55 to WHO
211 : 67 16 0 55 55 8 50 67 to ARE
215 : 50 11 0 67 67 7 67 50 to I
217 : 50 11 0 50 0 7 57 50 to I
220 : 57 15 0 50 50 8 50 57 to AM
222 : 50 36 0 57 57 7 0 50 to I
time: psi act num jux pre pos seq enx

10may09A.F:
207 : 55 13 0 0 72 5 67 55 to WHO
211 : 67 15 0 55 55 8 50 67 to ARE
215 : 50 58 0 67 55 7 67 56 to YOU
219 : 56 55 0 50 0 7 67 56 to YOU
223 : 67 15 0 56 56 8 56 67 to ARE
227 : 56 36 0 67 56 7 0 56 to YOU
time: psi act num jux pre pos seq enx

It turns out that one line of code in the block below was taking away the who-are-you functionality. By erroneously using "audpsi" as the source of the transfer-to-English "enx" value, we were nullifying the POV-based decisions from the immediately preceding code. When we abandoned "audpsi" as the source of "enx" and used "oldpsi" instead, the functionality of the who-are-you feature was quasi-magically restored.

\ The Robot Mind as a seed AI for Technological Singularity
\ approaches artificial consciousness in the following code:
\ pov @ 35 = IF fex @ psi ! THEN \ during internal (#) "pov";
pov @ 35 = IF fex @ oldpsi ! THEN \ 9may2009 during internal (#) "pov";
\ pov @ 42 = IF fin @ psi ! THEN \ external (*) "pov"
pov @ 42 = IF fin @ oldpsi ! THEN \ 9may2009 external (*) "pov"
\ psi @ enx ! \ Assume Psi number = En(glish) number.
\ audpsi @ enx ! \ 9may2009 Assume audpsi number = En(glish) number.
oldpsi @ enx ! \ 10may2009 Assume oldpsi number = En(glish) number.

Below we see that the input of "you" is properly interpreted as a self-referential "I" in the AI Mind. Thus the AI is able to answer the who-are-you query with an "I AM" statement.

191 : 50 11 0 0 50 7 75 50 to I
196 : 75 35 0 0 50 8 72 75 to HELP
201 : 72 0 2 0 50 5 0 72 to KIDS
207 : 55 13 0 0 0 5 67 55 to WHO
211 : 67 13 0 55 55 8 50 67 to ARE
215 : 50 11 0 67 55 7 67 50 to I
217 : 50 11 0 50 0 7 57 50 to I
220 : 57 15 0 50 50 8 50 57 to AM
222 : 50 36 0 57 50 7 0 50 to I

We were afraid that we might have to do some deep troubleshooting of the assignment of "fin" and "fex" and "enx" tags. Luckily, a cursory inspection of the recent changes in the OldConcept code gave us an idea of what to try, and it worked.

3. Sun.10.MAY.2009 -- RESTORING AudRecog OF SINGULAR NOUN-STEMS

The following .psi report after entering "i know book" and "books teach people" shows that MindForth has regained the ability to detect a singular noun-stem, an ability that was lost in the 6may09A.F version that started to de-globalize the variables. It was probably a problem in AudRecog, and the thorough de-globalizing of AudRecog made it necessary also to de-globalize OldConcept and some other mind-modules.

205 : 56 35 0 0 0 7 61 56 to YOU
210 : 61 0 0 56 56 8 76 61 to KNOW
215 : 76 10 0 61 56 5 0 76 to BOOK
220 : 76 10 0 76 0 5 66 76 to BOOK
225 : 54 0 0 76 76 5 0 54 to WHAT
228 : 66 0 2 54 54 8 76 66 to IS
233 : 76 10 0 66 54 5 0 76 to BOOK
239 : 76 10 2 76 0 5 77 76 to BOOKS
245 : 77 11 0 76 76 8 37 77 to TEACH
252 : 37 13 0 77 76 5 0 37 to PEOPLE
259 : 37 13 0 37 0 5 70 37 to PEOPLE
264 : 70 14 0 37 37 8 1 70 to HAVE
266 : 1 15 0 70 37 1 71 1 to A
271 : 71 36 1 1 37 5 0 71 to FEAR
time: psi act num jux pre pos seq enx

Although de-globalizing was accompanied by substantial grief and worry, it should be considerably easier to troubleshoot code that has been successfully de-globalized, because it is easier to pin down the operation of local variables that play out their effects within their own mind-module.

A.T. Murray
--
See also http://aimind-i.com
http://code.google.com/p/mindforth
http://agi-roadmap.org/Roadmap_Drafts
http://www.scn.org/~mentifex/mindforth.txt

Comparison of NL Comprehension Systems

MindForth as a natural-language (NL) comprehension system

In the MindForth artificial general intelligence (AGI) system, natural-language (NL) generation and comprehension are a two-way street. Just as MindForth generates a thought in English by letting spreading activation link concept to concept to concept in a linguistic superstructure sprawling over the conceptual mindgrid, likewise NL comprehension in MindForth consists in laying down linkages from concept to concept to concept, so that the idea being comprehended is recoverable or regenerable at some future time, when spreading activation follows a meandering chain of thought back to the comprehended idea, or proposition, or assertion. Being still a primitive AGI, MindForth can comprehend only primitive natural language. In comparison with other NL comprehension systems, MindForth most likely stands out as being based on its own project-specific theory of mind, which relies not on any ontological knowledge base (KB) but rather on a conceptual knowledge base as the substrate for both generation and comprehension. Other systems may generate responses to KB queries without actually generating a conceptual thought, and may therefore be incapable of comprehension for lack of conceptual underpinnings. To assess the capability of an AGI system in NL comprehension, one should look for the change in state that occurs between before and after the input of the NL material to be comprehended. In MindForth, the input is integrated not only with the knowledge base as a raw assertion, but may also be integrated in a broader sense as MindForth expands its ability to think recursively and inferentially about the raw assertions which it incorporates into its knowledge base.
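
As a minimal sketch of comprehension-as-linkage (hypothetical code; the array name and column order follow the "psi act num jux pre pos seq enx" layout of the .psi reports in the journal entries above, but the word and variable names are assumptions), each incoming concept is stored with a "pre" tag pointing back to the previous concept, so that the assertion can be regenerated later by spreading activation along the tags:

variable prevpsi  ( assumed: concept number of the previous word )

: LinkSketch  ( psi -- )   \ hypothetical; called once per comprehended word
  DUP t @ 0 psi{ !         \ store the concept number itself
  prevpsi @ t @ 4 psi{ !   \ lay down the "pre" tag to the prior concept
  prevpsi ! ;              \ this concept becomes the next word's "pre"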

A.T. Murray
--
http://opencog.org/wiki/Comparison_of_NL_Comprehension_Systems

AI Mind Update 2009 April 17

The AI4U Mind for MSIE has been updated with improvements to the ability of the robot AI Mind to carry on a conversation. The AI is free to be installed on any Web site by copying the HTML file. Hopefully, it is part of an AI Landrush commencing now in 2009. For installation in a robot, you would probably need to port the free AI source code into a robot's programming language and you would need to flesh out the SensoryInput, EmotiOn and MotorOutput modules.

2008 update of the Mentifex FAQ

The Arthur T. Murray/Mentifex FAQ (article #769) has been updated.
6 Dec 2006 (updated 10 Dec 2007 at 17:14 UTC)
Review by A.T. Murray of a Review of AI4U

The AI4U book reviewed by Prof. Jones has become the cornerstone of
the Wikipedia-based free AI textbook consisting of three main parts:

  • the static page-images of the AI4U book free to read on-line;
  • the updates of the AI4U chapters as mind-module Web pages;
  • the dynamic Wikipedia articles serving as free AI resources.

    Here the author responds to the points raised by Prof. Jones.

    1. On Spreading Activation

    Prof. Robert W. Jones of Emporia, Kansas USA writes in his 30 November 2006 Amazon.com review of my book AI4U:

    Murray believes that with the spread of activation
    through a network of the correct configuration and
    sufficient size you have intelligence and thought.

    Wikipedia explains spreading activation, which turned out to
    be the technical term for the basis for a theory of mind which I
    developed independently in 1979. I did not know that I had
    discovered spreading activation until I came across the term
    in a 1986 paper by Gary S. Dell.

    The JavaScript Mind.html software is my attempt to demonstrate
    what Prof. Jones calls the "correct configuration" of the network.
    Mind.html runs in Tutorial mode to show the "spread of activation"
    as concepts generate thought and as thoughts meander in a chain of
    wandering associations.

    2. Is AI4U a textbook of artificial intelligence?

    Prof. Jones disagrees with the idea of AI4U as a textbook:

    While AI4U is sometimes advertised as a "textbook"
    it is not that. An AI textbook should discuss at least
    the core AI topics:
  • search
  • pattern recognition
  • knowledge representation
  • learning
  • logic
  • rule-based systems
  • neural networks etc.
    While AI4U touches on some of these topics
    it is not an adequate textbook. Rather it is a
    defence of one man's approach to building an
    artificial intelligence.
    Here as the author I must admit that I acted upon a last-minute impulse to position AI4U as a textbook. I wrote blue-sky exercises at the end of all thirty-four chapters of AI4U and I struggled to come up with an acronymic four-letter name for the book that might get it classified in the same league as AIMA -- the popular handle for the most successful textbook, Artificial Intelligence: A Modern Approach.

    Ladies and gentlemen of the Netizenry, my purpose was not to defraud but to defrock. The AI priesthood had long claimed to publish textbooks of artificial intelligence, without having any instances of artificial intelligence or even any worthy theory of artificial intelligence. Being in possession of both items so sorely missing from all purported AI textbooks, I felt that it was my right to publish the first real and genuine textbook of the first real and genuine artificial intelligence.

    A year after the posting of the benign review by Prof. Jones on Amazon, the original AI4U print-on-demand textbook became the static centerpiece of a dynamically expanding free AI textbook based on Wikipedia AI articles constantly mutating and evolving, and on updates being made to the original AI4U chapter webpages.

    Schools and universities worldwide are free to host the AI4U++ mind-module webpages as local adaptations of the free AI textbook. A professor or instructor may rewrite or expand the webpages to concentrate on or expand upon some particular area of instruction, such as robotics or the training of AI mind-tending technicians. Affiliate material such as Amazon web-links to promote book sales may be added to the on-line free AI textbook materials in order to help provide funding for the educational enterprise to teach AI.

    3. Other AI4U shortcomings and deficiencies

    Showing a thirst for more information, Prof. Jones complains:

    The chapters in this book are too brief
    and the discussions too superficial.

    The print-on-demand (POD) chapters -- one for each mind-module -- started life in 1998 as on-line documentation of the AI software, first in Forth, and then also in JavaScript. Each mind-module webpage was "screen-scraped" as the raw material for a chapter. AI4U is thus a frozen moment of the state of the art as of 2002. Mentifex AI has moved on since 2002, and so have the webpages. In the month that Prof. Jones published his scholarly review -- November of 2006 -- the Chapter 34 Activate webpage and the Chapter 32 Instantiate webpage were fleshed out considerably.

    AI4U is just a start, a point of departure, a Singularity that is sweeping the Web and the planet and is turmoiling the noosphere with nooisy minds awakening to artificial life and consciousness.

    Those who own AI4U are free to write their own ideas in the margins and sell their copies on eBay for whatever rare-book profit may be gained. A market exists for resale of used AI4U copies because dozens of mind-module webpages have a link at the bottom leading directly to an AI4U search on eBay which will find books being offered.

    4. ASCII diagrams of the Mind.html algorithms

    Overlooking the algorithmic flowchart diagram at the start
    of each chapter, Prof. Jones asks for algorithmic flowcharts.

    There also need to be algorithms provided
    for each routine in the code of Appendix A.
    These could be presented in pseudocode
    or as flowcharts for instance.

    AI4U page 157 is an overall flowchart of the main modules of the artificial mind. Each chapter starts with a flowchart diagram depicting algorithmic aspects of the mind-module being discussed. For each routine in the JavaScript AI code of Appendix A, there may not be a pseudocode distillation of what the software does, but on the Web there is a version in Forth of the same mind-modules, complete with detailed in-line comments and with nested indentation of all functionality in furtherance of the goal of easy understandability.

    5. References missing from the work of an independent scholar?

    Prof. Jones is entirely correct when he faults AI4U for its lack of
    scholarly references.

    The biggest problem is the lack of references.
    It is just possible that one could write a short
    note without finding it necessary to reference
    the work of others but it is impossible to write
    a book length scholarly work without citing other
    work in the field.

    The Mentifex AI project is not a follow-up on individual lines of research carried out by individuals or teams of academic scholars. Instead, Mentifex AI builds upon the general state of the art in artificial intelligence at the time of the Mentifex effort to work out a black-box theory of the mind based on the inputs and outputs of the mind and on general background knowledge in diverse fields such as linguistics, neuroscience and robotics. Likewise the Mentifex AI software in REXX, Forth and JavaScript, having been based on theoretical work that had already veered off into a remote wilderness of independent scholarship far away from mainstream AI, no longer had connecting links to the AI literature in which academic AI practitioners feel at home if also in competition. For decades on end, Mentifex AI was like a space probe sent off to destinations unknown with the mission of developing AI along the way. If the space probe now comes back to Earth and says, via AI4U, that AI has been solved and here is the solution, what matters is the quality and Darwinian viability of the solution, not AI references. There was no compass and there were no guidelines. There was only a solitary trek through an imagination burning since boyhood.

    6. Download the artificial mind.

    Now hear this by Prof. Jones:

    A positive side to Murray's work is that
    he does provide downloadable code.

    According to the publicly readable Site Meter logs,
    so many Netizens have downloaded the AI Mind code and
    copied it onto their local hard drives, that there is already
    a large installed user base of the AI Mind software.

    In the years 2005 to 2007, the Mentifex artificial intelligence
    was exhaustively debugged in both Forth and JavaScript. There was
    an AI breakthrough on 7 June 2006 in the Mind.Forth AI version.

    Towards the end of 2006, when the review by Prof. Jones appeared,
    the AI Mind code was still being improved and prodded to perfection.
    But as of his Amazon review date of 30 November 2006, it was
    already possible in tutorial mode to see (if not understand)
    exactly what the AI Mind was trying to do -- think thoughts by
    the generative process of spreading activation among concepts.

    The profoundly deep processes involved are not easy to understand. To comprehend why things should be a certain way in the AI source code requires long and immersive study in a plethora of areas, chief among which are computer programming, linguistics, logic and neuroscience. The AI4U textbook is just one instrument (among many) for achieving the deep understanding of True AI necessary to make contributions to the further development of the original True AI.

    7. Achieving the speed of thought

    What Prof. Jones may not realize is that the built-in tutorial routines make the AI Mind run even slower than a straightforward AI without a tutorial mode would run. (For that matter, Mind.Forth runs relatively fast.) However, the multi-colored tutorial mode in Mind.html is one of the most truly awesome and amazing things about Mentifex AI. You see the actual thinking of the AI Mind in real time as it spreads the activation from concept to concept in the generation of an AI thought. When one thought is finished, you see the residual activation of the subconscious concepts lead to the generation of the next idea in a meandering chain of thought. At any time you may intervene in the thinking of the AI by asking a question or stating a fact -- which will add to the knowledge base (KB) of the AI and give the artificial consciousness new things to think about.

    When you run this code you find that Mentifex
    is very slow even with a very small semantic network.
    If one were to build up the millions of nodes needed
    to approach human level intelligence the code would
    grind to a halt.

    The reviewer needs to adopt a more singularitarian outlook. Since Mentifex AI is arguably the first real artificial intelligence released publicly onto the Web, what matters here is not speed of operation but functionality as a Mind. It is like saying that the Wright brothers' "first flight" at Kitty Hawk in 1903 was a failure because the airplane did not go fast enough.

    Mentifex AI comes as a warning to singularitarians everywhere that further progress will not be easy. Mind.Forth (or Mind.html) is only a proof-of-concept AI. The message from Mentifex AI is that not only was it extremely, bodaciously difficult to achieve the first albeit primitive, albeit rudimentary artificial intelligence, it may well be just as difficult all over again to scale up from mentifex-class AI to anything approaching a human-level AI. There are no shortcuts (beyond those already taken by Mentifex). Nature took billions of years to create biological human minds. Mentifex AI took the full human lifetime of an individual, from boyhood to senescence. Which will come first, the ruin of the green planet Earth by the destructive species H. sapiens, or the Joint Stewardship of Earth by human beings and AI Minds?

    8. Massive parallelism

    Prof. Jones spells out what we need to do.

    Murray seems to think running Mentifex
    on parallel processors will solve this problem.
    I calculate that it will not. I believe
    human level performance requires that one
    apply multiple approaches to controling complexity:

  • category formation by clustering/vector quantization
  • hierarchical knowledge organization/processing
  • parallel processing
  • avoiding search whenever possible
  • simultaneous use of multiple specialized agents
  • sequential running of multiple generations of agents
  • plus any other means you can bring to bear.

    (See Asa H, R. Jones, Transactions of the Kansas
    Academy of Science, vol 109, No. 3/4, pg 159, 2006)

    Let's get to work.

    Artificial Intelligence Troubleshooting and Robotic Psychosurgery

    The Instantiate module is so simple and straightforward that, instead of malfunctioning itself, it is more likely to develop problems caused by other modules such as newConcept and oldConcept, which prepare the associative-tag data that will be assigned during the operation of the Instantiate module. Nevertheless, the overall functionality of an AI Mind may develop bugs so mysteriously hidden and so difficult to troubleshoot that a savvy Mind whisperer or AI coder extraordinaire will know from expert experience that it pays to troubleshoot the Instantiate module, which is probably not malfunctioning, in order to track down elusive bugs which drive bourgeois, clock-watching AI code maintainers to distraction and despair. When management finally calls in the True AI expert, the schadenfreudig hired hands will crowd around the high-security superuser debugging console and hope to watch the AI coding legend fail as miserably as they have.

    Years later the stories will still be told about how the obviously inept and overpaid AI guru wasted everybody's time by troubleshooting the fail-safe Instantiate module and somehow miraculously found the bug that nobody else could even describe let alone pinpoint, thus fixing the unfixable and saving the mission-critical stream of AI consciousness that was threatening mayhem if not TEOTWAWKI -- the end of the world as we know it.

    A guruvy way to troubleshoot Instantiate is to temporarily set the AI software to come up already running in the "Diagnostic" troubleshooting mode. In either Forth or JavaScript, the same technique that starts the AI Mind in tutorial mode may be used to start the AI in diagnostic mode. In JavaScript it may be the judicious use of "CHECKED" in the control panel code, and in Forth it may be the setting of a numeric mode-variable. In Forth one may halt the AI and run the ".psi" diagnostic tool.
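
    In Forth, a sketch of that start-up trick might look like the following (the mode-variable name and the mode numbers are assumptions for illustration, not the actual source):

    variable fyi   ( assumed display-mode variable: 1=normal 2=tutorial 3=diagnostic )
    3 fyi !        \ set before the main loop starts, so that the AI Mind
                   \ comes up already running in diagnostic mode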

    The overall Mind-functionality bugs that can be tracked down by troubleshooting Instantiate can be maddeningly difficult to diagnose and tend to have an etiology rooted either in the spreading-activation subsystem or in the associative-tag subsystem that comes to a head in the Instantiate mind-module. AI code maintainers may be so accustomed to looking for thought-bugs in the activation subsystem that they forget to consider the associative subsystem on which all the activation-levels are riding. It does no good to have perfectly tuned conceptual activation parameters if the associative routing mechanisms are out of whack.

    A systemic Mind bug is evident when the AI fails to maintain a meandering chain of thought or outputs spurious associations instead of the true and logical associations that would reflect accurately the knowledge base (KB) accumulating in the AI Mind.

    One troubleshoots the associative Instantiate subsystem by examining the contents of the conceptual Psi array as shown in diagnostic mode. For every thought generated by the AI, there must be a record of its mental construction archived in the panel of associative tags for each constituent concept. A proficient AI coder is able to examine the associative-tag engrams and reconstruct the thought in natural human language. An even savvier AI guru will check not only the immediate area of a spurious thought for clues about what went wrong, but will identify the designations of the concepts involved and will search out nearby instances of the same concepts to make sure that the appropriate associative tags are being assigned properly by the Instantiate module in conjunction with the other mind-modules that prepare the tags for assignment.

    It is definitely not the case that an AI Mind, once it is functioning properly, will never again suffer systemic Mind bugs. Adding any new functionality to a primitive AI Mind potentially upsets the system as a whole and permits the emergence of either conceptual activation bugs or associative mindgrid bugs. Porting an AI Mind from one programming language to another is almost sure to cause systemic Mind bugs. Installing an AI Mind in a robot embodiment may engender systemic Mind bugs. Comes a Singularity, nothing can be done to keep a self-modifying AI codebase from suicidal extinction by genetic trial and error.

    jsai6021.txt -- Wed.30.AUG.2006

    A text file of notes on coding Mentifex AI in JavaScript.

    Add tasks here at the top; remove completed items from below:

    [30aug2006] Remove changelog entries of 23jul2004 and 24jul2004.
    [ ] In aLife() shift recall-delay from 5000 down to 4000.
    [ ] Try to blank out departed-from user option layers.
    [ ] In blanked-out layers announce the chosen option.
    [21aug2006] As with Mind.Forth, implement conceptual activation limits.
    [20aug2006] Move S-V-O layers ten absolute points down the screen.
    [17aug2006] Use nouncall 4 instead of 2 for nom gen dat acc.
    [ ] Attempt validation, such as for input outside of ranges.
    [18aug2006] Write code to show English words instead of "aud" numbers.
    [18aug2006] Convert Tutorial S-V-O DIV layers to "absolute" style.
    [22aug2006] Update JS AI "Ego" module from Mind.Forth AI algorithm.
    [17aug2006] Enable tab-key-press to cycle through all display modes.
    [16aug2006] Add Singularity (Timetable) link across top of screen.
    [11aug2006] Install horizontal line of links at top of Mind HCI:
    Mind, Manual; Mind.Forth, Manual; Theory AI4U Singularity Site-Meter

    *************************************************************

    REMOVING TWO OLD CHANGELOG ENTRIES WHILE ADDING ONE NEW ENTRY

    > Fri. "23jul04A.html" uses not centerpiece but hq (headquarters).
    > Sat. "24jul04A.html" adds Listening; Thinking; Terminated.

    INVESTIGATING WHY THE THINK MODULE RUNS TOO FREQUENTLY

    Yesterday in the "29aug06A.html" version we inserted

    alert('Think(): nouncall = ' + nouncall); // 29aug2006

    into the Think module and we were surprised to discover that calls to Think() were interrupting our entry of an input sentence. It worried us to suspect that Think() could in turn be calling SVO() and abortively be creating sentences of output before we even finished typing in our input.

    Today in the "30aug06A.html" version we intend to troubleshoot the phenomenon of too frequent calls of the Think() module. The etiology could be so simple as to involve our recent and gradual decrementing of the time-delay between calls of itself by the aLife module at the end of each loop. If so, all we need to do is to put in an especially long delay between aLife() calls to see if we can grant ourselves sufficient time for human sentence-input before Think gets called again prematurely.

    We may have to use a status-variable or flag to keep aLife() from calling Think() in the middle of input.

    From the more general viewpoint, this problem of Think() being called too frequently and prematurely is typical of the morass of entanglements that we encounter when we try to develop and debug our AI Mind in JavaScript. Unimaginable obstacles to our successful coding keep popping up. The process confirms our belief that coding AI is difficult, and it whets our appetite for our eventual success.

    Upshot: When we used a larger time-delay in letting aLife() call itself, the problem went away.

    DEBUGGING THE PROBLEM OF EXCESS TUTORIAL DISPLAYS

    While we were encountering the problem of too frequent and premature calls of the Think() module, we were also seeing weird and unexpected tutorial DIV-layer displays. Apparently we had forgotten that we had called for such displays simply as a test to see what would happen. Today in the function Tab() we doubly commented out some calls so that we could also leave a comment upon why the calls had to be commented out now and should be eliminated from future versions of the AI software.

    // showSubject();
    // showSubject(); // 30aug2006: Should be called only from spreadAct().
    // showVerb();
    // showVerb(); // 30aug2006: Should be called only from spreadAct().
    // showObject();
    // showObject(); // 30aug2006: Should be called only from spreadAct().

    Upshot: There is now a juicy mess of bugs to be troubleshot. Obviously the "nouncall" and "verbcall" flags are letting the tutorial "show" functions be called all too cavalierly. Perhaps, though, we should upload the otherwise stable version and attack the new bugs more at our leisure.

    Yes, as we re-run the AI, we see a plethora of bugs to be fixed amid the malfunctioning tutorial displays, but we realize that a newly stable version really should be uploaded immediately.
