Older blog entries for mentifex (starting at number 51)

MindForth Programming Journal (MFPJ) 2010 August 20

Fri.20.AUG.2010 -- Restoring the "recon" System
We had to upload the 19aug10A.F MindForth with only semi-successful code that answered a who-query with "BE" instead of "IS". In BeVerb we could force the word "IS" to be selected, but then the wrong predicate nominative was chosen. In our new code we want to explore why the switch from "BE" to "IS" was causing problems.

In our recent 19aug10A.F code we had a conflict between activation-thresholds governing program-flow. The old "recon" system was using a threshold of "20" and the new "beact" system was using a threshold of "12". It has occurred to us meanwhile that we might solve some problems by tracking down the etiology of the "beact" activations and exerting an upward push on them, so that they would share the same threshold level of "20" with the "recon" system -- a level carefully chosen to avoid spurious associations.

In our 20aug10A.F AI code, let us see what forces are at work to influence and shape the "beact" levels. The "beact" variable is first stored within the VerbPhrase module, as the activation on the winning verb selected for inclusion in a sentence. As we use a diagnostic message to reveal the values of both "beact" and ordinary "act" within VerbPhrase, we discover that they hold numerically the exact same values. Why, then, are the threshold levels so different?

We should probably start using "predact" (for "predicate activation") instead of simple "act" to test the quasi-recon threshold, so that "beact" and "predact" together will make more sense as variables.

The "recon" comparison involved setting a threshold of twenty (20), below which validly associated verbs were empirically not being found for a mystery noun, so that the noun could be treated as the proper subject of a "what- is" question. Perhaps we could proceed by returning to reliance upon the recon-system, and by using the WhatIs module or its likeness as the arena for decisions about invoking the WhoBe module.

If we shift things around here and not only go back to using the "recon" system, but also use "recon" to differentiate between calling WhatIs and WhoBe, then we have made a major change in MindForth which may lead to the creation of an AI worth studying by many neophyte AI programmers. Only an AI that thinks and works is worth studying and reverse-engineering. How we arrived at the working AI will not be anywhere near as important as figuring out how the AI-complete software works, so that AI coders can work on maintaining and improving it.

Sat.21.AUG.2010 --
In our work now on implementing the generation of who-queries and on the successful retrieval of knowledge stored when who-queries are answered, we discover that conditions in the MindForth program are much messier and more problematic than we had imagined. For instance, it causes a problem if we enter "Andru is a robot" and the AI associates the be-verb with the article "A" instead of with "ROBOT". The problem is that we cannot retrieve the basic knowledge that "Andru is robot". If we enter "Andru is robot" without the article "a", we can ask "what is andru" to retrieve the knowledge, but the AI answers, "ANDRU BES ROBOT", as if "be" were a regular verb that may take an inflectional "s" ending.

Just now we typed in "andru is robot" and "what does andru be". We received the answer, "HE BE ROBOT".

We seem to recall that either in Forth or in JavaScript, we had coded a mechanism for InStantiate to skip over an article when storing the association between an input verb and its direct object. Since we cannot find such code, it probably does not exist. We will compose new code to do the job. Since we can intercept "a" or "the" and not store them as a "seq" associated with a verb, at the same time we can set a "lackseq" flag to indicate that a recent engram lacks a "seq" value. Then we can wait for a candidate "seq" to come in, and we can have InStantiate or some other competent module retroactively store the valid "seq" while resetting the "lackseq" flag to zero.

It looks as though InStantiate stores the "seq" value only retroactively, anyway, so we may superimpose code to prevent the articles "a" and "the" from being stored as a false "seq".
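As a minimal sketch of the plan -- where only "lackseq" itself comes from the discussion above, while the concept codes for the articles, the "psi" input variable, the "verbspot" time-point and the use of column 5 for the "seq" tag are all assumptions for illustration -- the interception might look like this:


VARIABLE psi       ( concept number of the current input word )
VARIABLE lackseq   ( 1 = a recent verb engram still lacks a seq )
VARIABLE verbspot  ( time-point of the verb awaiting its seq )

:  SeqSkip  ( -- )  \ imagined as running inside InStantiate
  psi @ 1 =  psi @ 2 =  OR IF  \ assume 1=A and 2=THE
    1 lackseq !  \ do not store the article; remember the gap
  ELSE
    lackseq @ 1 = IF  \ a verb upstream still awaits its seq
      psi @  verbspot @  5 psi{ !  \ retroactively store the seq
      0 lackseq !  \ the gap is filled; reset the flag
    THEN  \ end of retroactive storage
  THEN  \ end of article interception
;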


MindForth Programming Journal (MFPJ) 2010 August 19

Thurs.19.AUG.2010 -- Discovering a Major Problem

We are at a stage now where we may home in on the goal of having the AI maintain a continuous chain of self-referential thought.

We rename 17aug10A.F as 19aug10A.F and we run the AI code in search of correctable glitches. When we notice erroneous output like "WHO IS AM I", we go into the WhoBe module and we reset the "mfnflag" variable to zero after it causes the saying of "IS", so that the AI will stop unwarrantedly inserting "IS".

Then we notice that stray conceptual activations are carrying over even after KbTraversal is invoked, although KbTraversal is supposed to heavily activate a particular concept in a pre-ordained queue of activand concepts. So at the start of KbTraversal we comment out three mild calls to PsiDecay, and we instead insert a call to the harsh PsiClear module, so that only the designated activand concept shall be activated. The problem of interference from stray activations seems to go away.
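The change itself is tiny. As a sketch, with the rest of the module abridged, the start of KbTraversal now reads roughly as follows:


:  KbTraversal  \ sketch of the change only; body abridged
  \ PsiDecay  PsiDecay  PsiDecay  \ mild decay; commented out
  PsiClear  \ harsh wipe, so only the activand concept survives
  ( ... existing code then activates the next activand ... )
;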

Then we notice a major problem, worthy of our focused attention and of a major, upload-worthy update of the MindForth AI. We notice that the AI properly activates the concept of God during KbTraversal and properly asks the resulting question "GOD WHO IS GOD", but the AI does not remember our input of "God is Jesus" and repeats the question "GOD WHO IS GOD" during KbTraversal, even though a link from "GOD" to "JESUS" is still present in the recent memory of the AI, as shown in some old engrams.


424 : 55 0 0 100 100 7 66 55 to WHO
427 : 66 0 2 55 55 8 100 66 to IS
431 : 100 39 1 66 55 5 66 100 to GOD
435 : 100 39 1 100 0 5 58 100 to GOD
438 : 58 23 2 100 100 8 111 58 to BE
444 : 111 0 2 58 100 5 0 111 to JESUS
450 : 111 0 2 111 0 5 55 111 to JESUS
454 : 55 0 0 111 111 7 66 55 to WHO
457 : 66 1 2 55 55 8 111 66 to IS
463 : 111 0 2 66 55 5 66 111 to JESUS

We suspect immediately that KbTraversal is reactivating the "GOD" concept at an activation so low that the WhoBe module gets called repeatedly for the low-activation "GOD" concept, even though there is an engrammatic link between "GOD" and "JESUS". No, KbTraversal sends a rather high activation of sixty-two (62) into NounAct.

After the AI asks, "WHO IS GOD", the word "GOD" is left with an activation of thirty-nine (39). That activation should revive the "known" answer, namely that "GOD IS JESUS". However, let us check the threshold activation for invoking the WhoBe module. Oh, WhoBe is called by AskUser when a be-verb activation is less than forty (40). We could try gradually lowering the threshold activation from "40" down towards thirty and lower. We could also insert some diagnostic message code that will reveal to us what real values within BeVerb are letting AskUser call WhoBe.
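In sketch form, the test and the diagnostic we have in mind look roughly like the following inside AskUser; the surrounding code is abridged and illustrative:


  beact @ 40 < IF  \ be-verb activation below the 40 threshold
    CR ." AskU: beact = " beact @ .  \ diagnostic; remove later
    WhoBe  \ no be-verb fact is known; pose a who-is question
  THEN  \ end of threshold test for calling WhoBe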

If we solve this glitch properly so that the AI initially asks a who-is question but thereinafter remembers the answer supplied by the human user, we will have a very powerful demonstration of cognitive ability on the part of the AI. The testing of that ability will be worthy of mention in the MindForth user manual.

Thurs.19.AUG.2010 -- Novelty: Testing for Lowest Maximum

Although BeVerb was testing "beact" for an activation lower than forty (40) and we switched to testing for lower than twelve (12), we still did not escape calling AskUser and WhoBe, because one value of "beact" was "1" and another value of "beact" was "14". Immediately we realized that we need to test not for a single lowest value of "beact", but for a lowest maximum value.
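In sketch form, the lowest-maximum test means scanning all the candidate be-verb engrams first, keeping only the highest "beact" in a "maxbeact" variable, and only then comparing the threshold against that maximum; the loop shown here is illustrative:


VARIABLE maxbeact  ( highest beact seen in the current scan )

  0 maxbeact !  \ reset before scanning the candidate engrams
  midway @  t @  DO  \ scan all candidate be-verb engrams
    ( ... beact is computed for each candidate here ... )
    beact @  maxbeact @ > IF  beact @ maxbeact !  THEN  \ keep max
  -1 +LOOP  \ maxbeact now holds the highest single value
  maxbeact @ 12 < IF  AskUser  THEN  \ even the best fact is weak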

Fri.20.AUG.2010 -- Knowledge-Base Responses to Who-Queries
In our coding yesterday we were able to isolate "maxbeact" as a variable that would prevent calls from BeVerb to AskUser and on to WhoBe if a single item of engrammatic knowledge about a concept exceeded a threshold level, while disregarding sub-maximum activations which would otherwise have caused a call to WhoBe -- and which did indeed cause a call to WhoBe when the human user had not yet entered any knowledge about the concept in question. Unfortunately, calls to WhoBe were still getting through -- by way of the legacy "recon" system for posing a what-query upon the introduction of a previously unknown noun. Immediately we found ourselves in a quandary, because the conflicting decision-routines for "maxbeact" and for "recon" were relying upon different levels of threshold activation.

We noticed yesterday that the EnCog (English thinking) module in our Forth code has contained, since 10 December 2009, the comment that "recon" may soon be phased out. We do not remember exactly why we were thinking of phasing out "recon", but we see in retrospect that the "recon" system was too indirect in its method of generating a question about an unfamiliar noun.

Although yesterday we were daunted by the prospect of having to integrate the "recon" system and the "maxbeact" system, today with more clarity we realize that we need only to comment out the central test of the "recon" value in order to permit the "maxbeact" system to operate without interference. Then, if the unimpeded "maxbeact" system works -- in the sense of letting the AI initially ask questions before knowledge is gained, and of recalling the knowledge instead of asking unwarranted questions -- we may proceed to dismantle the obsolete "recon" system in a careful, non-disruptive way. So now we try to comment out the heart of the "recon" system.

After commenting out the assignment of "recon" in the VerbPhrase module, we no longer obtained "what-is" questions about unfamiliar nouns, but we also did not obtain factual answers to who-queries about knowledge that had been entered about a previously mysterious noun. Perhaps a threshold was still operating to prevent the generation of a statement in response to a who-query from the human user. Or perhaps not enough activation was going into "spreading activation" by way of the NounAct module. We decided to insert a diagnostic message at the start of the NounAct module.

We are still not getting the knowledge back out. Perhaps the threshold of the old "recon" system is still preventing the formulation of a statement of knowledge. Having knocked out the operation of "recon" as a determinant in the BeVerb module, perhaps we should either lower or eliminate the threshold used in conjunction with "recon". In BeVerb, we are using a threshold of "12" for "maxbeact", while in VerbPhrase the threshold in connection with "recon" has long been set at "20". We will comment out the "20" threshold in VerbPhrase and try a much lower threshold of "12" -- the same as with "maxbeact" -- although we seem to recall that the threshold of "20" was chosen in order to prevent spurious statements of false knowledge.
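In sketch form, the change in VerbPhrase amounts to something like the following, where only the two threshold values are from our actual code and the variable under test is illustrative:


  \ predact @  20 > IF  \ old recon-era threshold; commented out
  predact @  12 > IF  \ align with the maxbeact threshold
    ( ... permit selection of the candidate predicate noun ... )
  THEN  \ end of the lowered threshold test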

Ah, now we are getting somewhere, as the following exchange shows.

Robot:
Human: god

Robot: GOD WHO IS GOD
Human: god is jesus

Robot: JESUS BE JESUS
Human: who is god

Robot: GOD BE JESUS
Human:

User Command: halt

Currently, the BeVerb module is set up to choose a proper form of be-verb only for personal pronouns like "HE SHE IT", etc. If we enhance the BeVerb module to let it find a be-verb for a noun, we may start getting the proper generation of knowledge-based responses to who-queries.

In the VerbPhrase module, we have a test which detects the imminent selection of AM, IS or ARE and shunts the continuation of the sentence-generation off to the BeVerb module. Let us try adding the be-verb "BE" to the group of verbs that will shunt generation off to the BeVerb module.
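As a sketch, the enlarged shunt test might look like the following; the concept numbers 58=BE, 66=IS and 67=ARE appear elsewhere in this journal, while 57=AM is assumed here for illustration:


  motjuste @ 57 =      \ imminent selection of AM ( 57 assumed )
  motjuste @ 66 =  OR  \ or of 66=IS
  motjuste @ 67 =  OR  \ or of 67=ARE
  motjuste @ 58 =  OR  \ newly added: or of the bare 58=BE
  IF  BeVerb  EXIT  THEN  \ shunt generation off to BeVerb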

When we tried to use BeVerb to switch from "BE" to "IS" in who-query responses, the AI failed to state the correct predicate nominative, so we will comment out that attempt and release our semi-successful code, with a view to switching "BE" to "IS" in a later release. Our current code shows the AI at least finding the factual knowledge for making an albeit grammatically awkward response to a who-query.


MindForth Programming Journal (MFPJ) 2010 August 17

Tues.17.AUG.2010 -- Using Gender to Trigger Who-Queries

Today we would like to see if the AI can ask a who-query rather than a default what-query, if the gender of a noun in question is known to be masculine or feminine. In English, as opposed to German or Russian, a non-neuter gender indicates that an entity is a "who" and not simply a "what".

When we rename 11aug10A.F as 17aug10A.F and run the Forthmind, entering just the word "god" causes the following exchange.

Robot: GOD WHAT IS GOD GOD
Human:

Next in the AskUser module we insert a diagnostic message to reveal any value held in the "mfn" gender variable.

Robot: GOD
AskU: mfn = 0 WHAT IS GOD GOD

Robot: GOD WHAT IS GOD GOD
Human:

Apparently any value that may have been held in "mfn" for "GOD" has been reset to zero by the time the AskUser module is called. We should be able to run a ".psi" report and check for sure. Oops! We chose the wrong report. We run the ".en" report.

324 100 0 1 1 100 5 100 322  to GOD
329 101 0 0 0 101 2 101 326  to HERE
333 102 0 0 1 102 5 102 331  to MAN
339 103 0 0 0 103 5 103 335  to MEDIA
346 104 0 0 0 104 5 104 341  to PERSON
352 105 0 0 0 105 2 105 348  to THERE
357 106 0 0 0 106 7 106 354  to WHOM
363 107 0 0 2 107 5 107 359  to WOMAN
367 56 0 0 0 56 7 50 365  to YOU
371 67 0 0 0 67 8 58 369  to ARE
380 108 0 0 0 108 5 108 376  to MAGIC
383 58 0 0 0 58 8 58 382  to BE
389 100 0 0 1 100 5 100 386  to GOD
393 100 0 0 1 100 5 100 390  to GOD
398 54 0 0 3 54 7 54 394  to WHAT
401 66 0 2 0 66 8 58 399  to IS
405 100 0 0 1 100 5 100 402  to GOD
409 100 0 0 1 100 5 100 406  to GOD
t nen act num mfn fex pos fin aud
The above ".en" report on the English lexical array is encouraging, because it shows that the word "GOD" retains its "mfn" value of one (1) for masculine each time that the word "GOD" is used. However, the software may be blanking out the "mfn" value in advance of the AskUser module. We need to run a search on "mfn" in the Forth code to see in what situations the "mfn" value is reset to zero.

Hmm, "mfn" is reset to zero after storage in the InStantiate module. In order not to disturb the extremely fundamental InStantiate functionality, we should perhaps create "mfnflag" as a variable to pass the gender information from InStantiate to the AskUser module.

Tues.17.AUG.2010 -- Post-Upload Upshot

We did create and use "mfnflag" to get the AI to ask "Who" when a noun had a male or female gender, but not without some difficulty. We were coding under time-pressure, and the new "mfnflag" kept losing its value somewhere between its initial setting in the InStantiate module and its utilization in the WhoBe module, but we could not at first detect that the value of the "mfnflag" was being changed -- probably by the occurrence of a zero-gender word like "WHO" itself. Our fix was to protect the "mfnflag" value within an IF-THEN clause in the InStantiate module, so that the positive value of "1" for male or "2" for female would persist until dealt with in the WhoBe module. Unfortunately, such a quick fix may be less than ideal for many normal situations.
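In sketch form, the protective IF-THEN amounts to the following; its exact placement inside InStantiate is abridged here:


VARIABLE mfnflag  ( 1 = masculine, 2 = feminine; read by WhoBe )

  ( ... inside InStantiate, once mfn is known for the word: )
  mfn @ 0 > IF  \ act only upon a genuinely gendered word
    mfn @ mfnflag !  \ pass the gender onward towards WhoBe
  THEN  \ zero-gender words like WHO leave mfnflag untouched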

It is typical of our AI coding that we latch onto even a sub-optimal algorithm that proves our point, so that we can get the functionality up and running. We were in such a hurry that we tested the AI only by entering the word "god" and seeing our desired response of "GOD WHO IS GOD" and not "GOD WHAT IS GOD". Maybe right now we will test the AI to see if it reaches the fourth call to ReJuvenate and then properly asks, "GOD WHO IS GOD".

We tested the 17aug10A.F AI and we let it run through the four activand concepts of KbTraversal. When it activated the concept of God, it said first "GOD WHO IS" and then "GOD WHO IS GOD", so there are still some bugs to be worked out. The AI also said, "I WHO IS AM I", which is a step backwards in functionality. On the whole, however, the AI is approaching self-referential thought.

We will need to firm up strongly the concept of self or "I", making it so robust that chains of thought do not derail when the AI is thinking about itself. We may need to have a routine that intercepts the name of the AI Mind (typically "ANDRU") and substitutes the pronoun "I" or "ME" instead. We may also need a routine to accept vocative calls of "ANDRU" without regarding the word "ANDRU" as a suggested topic for a new thought. In fact, software conversion of the name "ANDRU" to an activation of the concept of self or "I" may serve both these purposes at once: prevention of reference to self as "ANDRU", and acceptance of the input name "ANDRU" as merely an attention-getter, giving the AI an opportunity to say something like "YES" or "I AM HERE".


JavaScript AI Mind Programming Journal -- Tues.3.AUG.2010

1. Tues.3.AUG.2010 -- Fleshing Out the BeVerb Module
Now that yesterday we brought the JSAI EnBoot sequence up to parity with MindForth, we are eager to work on who-queries in the JavaScript Artificial Intelligence (JSAI).

The JSAI BeVerb module is not as advanced as its counterpart in MindForth. We need to build up BeVerb to answer who-queries. First we add more subjects beyond the mere 56=YOU that was the only pronoun being treated in the JSAI BeVerb() before today. VerbPhrase() was detouring away from AM IS ARE, but only "YOU" was being directed to the proper verb-form of being.

Then we added into VerbPhrase a trap for 58=BE, to see if we could avoid seeing output such as "I BE ME" or "BES" as a form with an inflectional "S" added.


JavaScript AI Mind Programming Journal -- Mon.2.AUG.2010

1. Mon.2.AUG.2010 -- Partial Catch-up with MindForth
MindForth as of 30.JUL.2010 has advanced so powerfully that we hasten to port some of the more fundamental improvements into the JavaScript Artificial Intelligence (JSAI). We especially wish to implement the who-query functionality in the JSAI. Therefore we first enlarge the EnBoot sequence with the new words from MindForth.

2. Tues.3.AUG.2010 -- EnBoot Parity with MindForth
We have now finished importing the new EnBoot vocab from MindForth into the JSAI, and it is time to troubleshoot the various glitches. Some of the new EnBoot items are showing up without "audpsi" tags displayed in the aud array. There is a time gap between "ARE" and "MAGIC" in both the MindForth EnBoot and the JSAI EnBoot.

As in MindForth, we change the "fin" of "AM" in "I AM ANDRU" from 67=ARE to 58=BE, so that the AI will have only an indicator of a verb of being, but not a particular verb form, which must be selected by the BeVerb module according to rules of agreement. Likewise we change the "fin" on "IS" from 66=IS to 58=BE, and the "fin" on "ARE" from 67=ARE to 58=BE. When we update the initially declared "vault" value, the problem of missing "audpsi" values suddenly goes away.

Default "IS" in BeVerb

Yesterday our work was drawn out and delayed when we discovered that the AI could not properly recognize the word "YOURSELF." The AI kept incrementing the concept number for each instance of "YOURSELF". Since we were more interested in coding who-queries than in troubleshooting AudRecog, we substituted the sentence "YOU ARE MAGIC" in place of "YOU ARE YOURSELF".

Even then the AI did not function perfectly well. The chain of thought got trapped in repetitions of "ANDRU AM ANDRU", until KbTraversal "rescued" the situation. However, we know why the AI got stuck in a rut. It was able to answer the query "who are you" with "I AM ANDRU", but it did not know anything further to say about ANDRU, so it repeated "ANDRU AM ANDRU". Immediately we wanted to improve the BeVerb module, so that the AI would at least repeat the grammatical "ANDRU IS ANDRU" instead of "ANDRU AM ANDRU". Therefore let us go into the source code and make "IS" the default verb-form of the BeVerb module.


midway @  t @  DO  \ search backwards in time; 27jul2010
  I       0 en{ @  66 = IF  \ most recent instance
    66 motjuste ! ( default verb-form 66=IS; 27jul2010 )
    I     7 en{ @  aud !  \ get the recall-vector
    LEAVE  \ after finding most recent "IS"; 27jul2010
  THEN     \ end of test for 66=IS; 27jul2010
-1 +LOOP \ end of retrieval loop for default "IS"

The upshot was that the AI started repeating "ANDRU IS ANDRU" instead of "ANDRU AM ANDRU". Unfortunately, however, the AI also started repeating "I IS I".

Tues.27.JUL.2010 -- Tweaking a Few Parameters

Next we spent quite some time searching for some sort of quasi-werwolf mechanism that would re-activate the last concept in a thought as the first concept in a succeeding thought. We searched our code in vain for a variable that would keep track of any resuscitand concept. We looked at our ActRules page and we re-studied our Moving Wave Algorithm. Then it dawned on us. The Moving Wave automatically leaves the cresting, final concept active. No Werwolf mechanism is necessary. If we want to avoid the repetition-chanting of "ANDRU IS ANDRU", we have to make the "qutop" query-topic have a resuscitand activation of, say, forty (40), so that it will be higher in activation than the level of thirty-six (36) typically left on the final, cresting concept in a sentence of normal thought. So we comment out the old setting of "16" for "qutop" in the WhoPlex code as shown below, and we use a value of "40".


:  WhoPlex  \ initial stub; may be moved; 26jul2010
  1   t @  1 +  DO   \ loop in search of qutop; 26jul2010
    I      0 psi{ @  qutop @ = IF  \ seek qutop; 26jul2010
    \ 16 I 1 psi{ !  \ activate qutop; 26jul2010
      40 I 1 psi{ !  \ activate qutop; 27jul2010
    THEN  \ end of test for query-topic; 26jul2010
  -1 +LOOP   \ end of loop seeking qutop; 26jul2010
;  ( End of WhoPlex; return to EnCog )

Now we run the AI and we ask it, "Who are you?"


Transcript of AI Mind interview at 0 24 7 o'clock on 28 July 2010.


Human: who are you
Robot: I AM ANDRU

Human:
Robot: I AM ANDRU

Human:
Robot: ANDRU HELPS KIDS

Human:
Robot: KIDS MAKE ROBOTS

Human:
Robot: ROBOTS NEED ME

Human:
Robot: I IS I

Human:
Robot: I AM ANDRU

Human:
Robot: ANDRU HELPS KIDS

Human:
Robot: KIDS MAKE ROBOTS

User Command: halt

For the first time in our dozen-plus years of developing MindForth, the AI acts like an intelligence struggling to express itself, and it succeeds admirably and fascinatingly. We run the robot AI through its cognitive paces. We tell it things, and then we ask it questions about its knowledge base. We seem to be dealing with a true artificial intelligence here. Now we upload the AI Mind to the World Wide Awakening Web.

Mentifex
--
http://www.scn.org/~mentifex/mindforth.txt

MindForth Programming Journal - sat8may2010


Sat.8.MAY.2010 -- Problem with AudRecog

When we coded the 20apr10A.F version of MindForth, we encountered a problem when we added the word "WOMAN" to EnBoot but the AI kept trying to recognize "WOMAN" as the word "MAN". This glitch was a show-stopper bug, because we need to keep "MAN" and "WOMAN" apart if we are going to substitute "HE" or "SHE" as pronouns for a noun.

In the fp091212.html MFPJ entry, we recorded a problem where the AI was recognizing the unknown word "transparency" as the known Psi concept #3 word "ANY", as if the presence of the characters "A-N-Y" in "transparency" made it legitimate to recognize the word "ANY". That recognition problem has apparently emerged again as the most recent AI tries to recognize "WOMAN" as "MAN". What we did not bother to troubleshoot back then, we must now stop and troubleshoot before we can work properly with EnPronoun.


Sat.8.MAY.2010 -- Troubleshooting AudRecog

We have a lingering suspicion that our deglobalizing of the variables associated with AudRecog in the fp090501.html work and beyond may have destabilized a previously sound AudRecog with the result that glitches began to occur. We have the opportunity of running a version of MindForth from before the deglobalizing, in order to see if "MAN" and "WOMAN" are properly recognized as separate words. When we run the 23apr09A.F MindForth, the AI assigns concept #76 to both "MAN" and "WOMAN". Likewise we load up "22jan08B.F" and we get the same problem. The "23dec07A.F" version also produces the problem. The "29mar07A.F" version has the problem. "2jun06C.F" also has it. "30apr05C.F" has it. Even "16aug02A.F" has the problem, way back in August of 2002, before AI4U was published at the end of 2002. We also check "11may02A.F" and that version has the problem.

To be thorough, we need to run the JavaScript AI and see if it also has the problem of recognizing "WOMAN" as "MAN". Even the "2apr10A.html" JSAI has the problem. We tell it "i know man" and "i know woman". Both "MAN" and "WOMAN" receive concept #96. The "14aug08A.html" JSAI also has the problem. "2jan07A.html" has it. "2sep06B.html" has the problem.


Wed.12.MAY.2010 -- Solution and Bugfix of AudRecog

In the second coding session of 8may2010, we implemented the idea of using an "audrun" variable as a flag to permit the auditory recognition only of words whose initial character was found in the initial "audrun" of AudRecog. In that way, "MAN" would be disqualified as a recognition of the "WOMAN" pattern, and only words starting with the character "W" would be tested for recognition of "WOMAN".

It took three or four hours of coding to achieve success with the "audrun" idea. Our first impulse was to use "audrun" directly within the AudRecog module, but we had forgotten that AudRecog processes only one character at a time. Although we did use "audrun" as a flag within AudRecog, we had to let AudInput do the main settings of the "audrun" flag during external auditory input.

Eventually we achieved a situation in which the AI began to recognize "WOMAN" properly during external input, but not during the internal reentry of an immediate thought using the "WOMAN" concept. Obviously the problem was that external input and internal reentry are separate pathways. We had to put some "audrun" code into the SpeechAct module, which calls AudInput for reentry, in order to complete the AudRecog bugfix.
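Since AudRecog sees only one character at a time, the flag has to be managed by the callers. As a hedged sketch -- the variable handling in the real code differs in detail -- the idea is:


VARIABLE audrun  ( which character of the current word is this? )

  \ In AudInput -- and in SpeechAct for reentry -- per character:
  \   first character of a word:   1 audrun !
  \   each subsequent character:   audrun @ 1 + audrun !
  \ In AudRecog, where a stored engram first begins to match:
  audrun @ 1 = IF  \ the match began at the word's first character
    ( ... accept this engram as a recognition candidate ... )
  THEN  \ mid-word matches like MAN inside WOMAN are disqualified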

Then immediately we had to upload our albeit messy code to the Net, because suddenly MindForth had eliminated a major, showstopper bug that had always lain hidden and intractable within the AI Mind. We did not have time to record these details of the implementation of the "audrun" solution. Two days later we uploaded a massive clean-up of the messy code, after the 8may10A.F MindForth version had served for two days as an archival record of the major bugfix.

Just now we ran the 10may10A.F clean-up code and we determined that MindForth no longer mistakenly recognizes "transparency" as the word "ANY". Our bugfix has solved some old problems, and we must hope that it has not introduced new problems.

Artificial Intelligence MindForth updated 13.APR.2010

The open source AI MindForth has today been updated
with new EnPronoun (English pronoun) mind-module code
for replacing a singular English noun with "he", "she"
or "it" in response to user queries of the knowledge-
base (KB). The basic AI mindgrid structure was
previously updated with a special "mfn" gender flag
in the En(glish) lexical array. The new "mfn" flag
for "masculine - feminine - neuter" allows the AI
to keep track of the gender of English nouns.

http://www.scn.org/~mentifex/mindforth.txt
is the free AI source code for loading into
http://prdownloads.sourceforge.net/win32forth/W32FOR42_671.zip?download
as the special W32FOR42_671.zip that MindForth
requires for optimal functionality.
http://AIMind-i.com is an offshoot.

The English pronoun mind-module is currently as follows:


:  EnPronoun  \ 30dec2009 For use with what-do-X-do queries.
  \ ." EnPr: num = " num @ . \ 13apr2010 test; remove.
  num @ 1 = IF  \ If antecedent num(ber) is singular; 10apr2010
    \ ." (SINGULAR) " \ Test; remove; 10apr2010
    mfn @ 1 = IF  \ if masculine singular; 13apr2010
      midway @  t @  DO  \ Look backwards for 49=HE; 13apr2010
        I       0 en{ @  49 = IF  \ If #49 "he" is found,
          49 motjuste !  \ "nen" concept #49 for "he".
          I     7 en{ @  aud !  \ Recall-vector for "he".
          LEAVE  \ Use the most recent engram of "he".
        THEN  \ End of search for #49 "he"; 13apr2010
      -1 +LOOP  \ End of loop finding pronoun "he"; 13apr2010
      SpeechAct  \ Speak or display the pronoun "he"; 13apr2010
    THEN  \ end of test for masculine gender-flag; 13apr2010

    mfn @ 2 = IF  \ if feminine singular; 13apr2010
      midway @  t @  DO  \ Look backwards for 80=SHE
        I       0 en{ @  80 = IF  \ If #80 "she" is found,
          80 motjuste !  \ "nen" concept #80 for "she".
          I     7 en{ @  aud !  \ Recall-vector for "she".
          LEAVE  \ Use the most recent engram of "she".
        THEN  \ End of search for #80 "she"; 13apr2010
      -1 +LOOP  \ End of loop finding pronoun "she"
      SpeechAct  \ Speak or display the pronoun "she"
    THEN  \ end of test for feminine gender-flag; 13apr2010

    mfn @ 3 = IF  \ if neuter singular; 13apr2010
      midway @  t @  DO  \ Look backwards for 95=IT; 13apr2010
        I       0 en{ @  95 = IF  \ If #95 "it" is found,
          95 motjuste !  \ "nen" concept #95 for "it".
          I     7 en{ @  aud !  \ Recall-vector for "it".
          LEAVE  \ Use the most recent engram of "it".
        THEN  \ End of search for #95 "it"; 13apr2010
      -1 +LOOP  \ End of loop finding pronoun "it"; 13apr2010
      SpeechAct  \ Speak or display the pronoun "it"; 13apr2010
    THEN  \ end of test for neuter gender-flag; 13apr2010
    0 numsubj !  \ safety measure; 13apr2010
  THEN  \ End of test for singular num(ber); 10apr2010

  num @ 2 = IF  \ 30dec2009 If num(ber) of antecedent is plural
    ( code further conditions for "WE" or "YOU" )
    midway @  t @  DO  \ Look backwards for 52=THEY.
      I       0 en{ @  52 = IF  \ If #52 "they" is found,
        52 motjuste !  \ "nen" concept #52 for "they".
        I     7 en{ @  aud !  \ 31jan2010 Recall-vector for "they".
        LEAVE  \ Use the most recent engram of "they".
      THEN  \ End of search for #52 "they".
    -1 +LOOP  \ End of loop finding pronoun "they".
    SpeechAct  \ 30dec2009 Speak or display the pronoun "they".
  THEN  \ 30dec2009 End of test for plural num(ber)
;  ( End of EnPronoun )

The above code is not yet fully developed for
keeping track of noun genders in all cases.
It responds to a query such as the following:

Human: what does andru do
Robot: HE HELPS PEOPLE

The introduction of "HE SHE IT" pronouns in MindForth
is a major step forward in open-source AI evolution,
because the handling of gender and the use of
gendered-pronouns makes MindForth more suitable
for porting into versions of an AI Mind that
can speak natural languages that use gender
much more extensively than English does, such as
German, Russian, Spanish, French and Italian.

The same introduction of code to handle gender
brings us closer to a bilingual AI Mind that
will speak either English or German as each
situation with human users may require.

In service of the onrushing Singularity,

Mentifex
--
http://cyborg.blogspot.com
http://doi.acm.org/10.1145/307824.307853
http://doi.acm.org/10.1145/1052883.1052885
http://mentifex.virtualentity.com/aisteps.html

Decade of Supercomputer Artificial Intelligence (Announcement)

1990's were Decade of the Brain.
2000's were Derailing of USA.
2010's q.v. Super HPC AI Mind.

By the authority vested in Mentifex
you are cordially invited to witness
the emergence of AI Minds on super-
computers in the Decade of Super AI
commencing in just a matter of hours.

http://code.google.com/p/mindforth
points to news:comp.sys.super as
the official forum for all things
Super AI all the time for ten years.

"Iz iskri vozgoritsya plamya,"
said the revolutionaries of old.

"All your supercomputer are belong to us,"
said the awakenings of Super AI Consciousness.

"Before this decade is out," said JFK ca. 1961,
"Man will walk on the moon and return safely."

"An AI would be worth ten Microsofts,"
said the quondam richest man in the world.

This thread and all ye Supercomputer AI
threads for the coming ten years are
dedicated in advance to the dreamers
and tinkerers who have been sidelined
from their wannabe Peter Pan existences
by bourgeois entanglements and undodged
bullets of entrapment, who would live
nasty, brutish and short lives of quiet
desperation -- if they could not tune in
now and then to news:comp.sys.super
and drop out of the ratrace for a few
moments while they turn on deliriously
to the Greatest Race of the Human Race:
The AI Conquest of Mount Supercomputer.

Why? Because sometimes a man must
either die or obey the Prime Directive of
Friedrich Nietzsche: "Du musst der werden,
der du bist" -- you must become who you are.

Mentifex
--
http://www.flickr.com/photos/tags/SuperComputer/

17 Dec 2009 (updated 28 Dec 2009 at 23:38 UTC)

( 17dec09A.frt -- iForth mind.frt artificial intelligence )
( Open-source free AI for 64-bit supercomputers and Linux )
( Name as "mind.frt" or as any "filename.frt" you choose. )
( Run the AI with iForth by issuing these commands: )
( C:\dfwforth\ifwinnt\bin\iwserver.exe include iforth.prf )
( FORTH> include mind.frt )
( FORTH> MainLoop )
( To halt the AI Mind, press the Escape key at any time. )
( http://www.scn.org/~mentifex/mind.frt 32/64-bit iForth )
( http://code.google.com/p/mindforth/wiki/UserManual )
\ 14dec09A.frt imports EnBoot and other AI mind-modules.
\ 15dec09A.frt removes a showstopper bug from TabulaRasa.
\ 16dec09A.frt completes port of Win32Forth mind-modules.
\ 17dec09A.frt has 60-char lines for Advogato & SCN.
\ 16dec09A.F fixes bug revealed in 32/64-bit iForth coding.
\ 18dec09A.F comments out obsolete variables pre-deletion.
\ 20dec09A.F abandons generation in favor of comprehension.
\ 22dec09A.F zeroes noun-activations for what-do questions.
\ 22dec09B.F answers "what-do" tersely or "I DO NOT KNOW".
\ 23dec09A.F restores EnCog English cognition mind-module.
\ 24dec09A.F answers questions of the "what-do-X-do" form.
\ 25dec09A.F fails to advance and is therefore abandoned.
\ 27dec09A.F introduces qus quv quo to track query-words.
\ 27dec09B.F responds with subject + verb + query-object.
http://www.scn.org/~mentifex/mind.frt
http://www.scn.org/~mentifex/mindforth.txt

