Older blog entries for mentifex (starting at number 42)

17 Dec 2009 (updated 28 Dec 2009 at 23:38 UTC) »

( 17dec09A.frt -- iForth mind.frt artificial intelligence )
( Open-source free AI for 64-bit supercomputers and Linux )
( Name as "mind.frt" or as any "filename.frt" you choose. )
( Run the AI with iForth by issuing these commands: )
( C:\dfwforth\ifwinnt\bin\iwserver.exe include iforth.prf )
( FORTH> include mind.frt )
( FORTH> MainLoop )
( To halt the AI Mind, press the Escape key at any time. )
( http://www.scn.org/~mentifex/mind.frt 32/64-bit iForth )
( http://code.google.com/p/mindforth/wiki/UserManual )
\ 14dec09A.frt imports EnBoot and other AI mind-modules.
\ 15dec09A.frt removes a showstopper bug from TabulaRasa.
\ 16dec09A.frt completes port of Win32Forth mind-modules.
\ 17dec09A.frt has 60-char lines for Advogato & SCN.
\ 16dec09A.F fixes bug revealed in 32/64-bit iForth coding.
\ 18dec09A.F comments out obsolete variables pre-deletion.
\ 20dec09A.F abandons generation in favor of comprehension.
\ 22dec09A.F zeroes noun-activations for what-do questions.
\ 22dec09B.F answers "what-do" tersely or "I DO NOT KNOW".
\ 23dec09A.F restores EnCog English cognition mind-module.
\ 24dec09A.F answers questions of the "what-do-X-do" form.
\ 25dec09A.F fails to advance and is therefore abandoned.
\ 27dec09A.F introduces qus quv quo to track query-words.
\ 27dec09B.F responds with subject + verb + query-object.
http://www.scn.org/~mentifex/mind.frt
http://www.scn.org/~mentifex/mindforth.txt
Meandering Chain of Thought

MeanderingChain
#summary A moving wave of activation wanders across the conscious mind

MileStones RoadMap UserManual


=== Synopsis === 


After the QuIckening of your AGI software and its first GenerationOfThought leading to a CognitiveChainReaction, a meandering chain of thought is not as simple to implement as you might think. A lack of RoBot EmBodiment will prevent your AI Mind from taking its cue for thought from events perceived through a real-world sensorium. Your AGI can think only about its own memories and about input from you, the human user.

== Sensory Deprivation ==

Not having a rich panoply of sensory inputs to think about, your primitive AGI will follow the pathways of SpreadingActivation. As the AGI thinks about each available concept, a chain of thought will snake its way across the MindGrid. You may program the AGI in such a way that it asks a question whenever it tries to think a thought without sufficient information available to complete the idea. By asking a question of the human user or searching the Web, your AGI will learn new information for its knowledge base.

== Machine Learning ==

For traditional AI researchers in academia, it has been a Holy Grail to achieve the MachineLearning that will come easily to your AGI. All a machine has to do in order to learn is to ask questions, but ah, there's the rub. What is a machine, that a machine may ask a question, and what is a question, that a mind may answer it?

== Moving Wave ==

You as an AGI programmer, or your AGI underlings as programmers paid to play AGI catch-up with the rest of the world, will be at pains to make sure that only one dominant concept at a time is most active in the AGI mindswirl. Why? Why engineer what would probably happen anyway? Isn't one concept or another always most active simply by definition? Maybe so, but the Moving Wave Algorithm demands it not by accident but by design. As if a baton were being passed, a sound AGI, with or without a sound RoBot body, will think of one thing at a time and will follow the meandering chain of association from each cresting concept to the next. If the AGI engineers are not careful to have the summit of AGI thinking let go of each sparkling concept as it begins to dim and fade away, then the act of thinking will not move forward. A unitary mind must pay unitary attention to a unitary concept. And what else is a unitary mind but a conscious mind?
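
The Moving Wave Algorithm described above can be sketched in Python (a hypothetical illustration with invented names; MindForth itself implements this in Forth with activation levels stored on concept arrays): activation spreads to associated concepts, the cresting concept is damped after it has been thought, and the crest passes to the next concept, so only one concept dominates at a time.

```python
# Hypothetical sketch of the Moving Wave Algorithm -- not MindForth code.
def spread(graph, activation, decay=0.5):
    """One step of SpreadingActivation: each concept passes a share of
    its decayed activation to its associated concepts."""
    new_act = {c: a * decay for c, a in activation.items()}
    for concept, associates in graph.items():
        share = activation.get(concept, 0.0) * decay / max(len(associates), 1)
        for assoc in associates:
            new_act[assoc] = new_act.get(assoc, 0.0) + share
    return new_act

def chain_of_thought(graph, start, steps):
    """Follow the meandering chain: damp each cresting concept as it
    fades, then let the next most active concept crest."""
    activation = {start: 1.0}
    chain = [start]
    for _ in range(steps):
        activation = spread(graph, activation)
        activation[chain[-1]] = 0.0      # let go of the fading concept
        chain.append(max(activation, key=activation.get))
    return chain
```

With a looping association graph the chain wanders from one cresting concept to the next without ever holding two crests at once.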

== ConSciousness ==

A meandering chain of thought presupposes a mechanism of emerging ConSciousness in a machine. Sometimes it is called [http://en.wikipedia.org/wiki/Artificial_consciousness artificial consciousness], as on WikiPedia, but self-awareness in man or machine is simply ConSciousness, artificial or not. For a machine, the final tipping point into consciousness is perhaps thoughts of self-reference, thoughts about "I" and "me". A chain of meandering thought must eventually stumble upon the fact of the existence of the thinker, Monsieur Rodin, and so the attainment of SelfReferentialThought will be another milestone on the long march to AGI Minds.

=== OutReach === http://agi-roadmap.org/Meandering_chain_of_thought is an open, collaborative page where you may contribute your own ideas and use the associated Talk page for discussion.



=== Memespace ===

AdminisTrivia AiEvolution AiHasBeenSolved AiTree BrainTheory CognitiveArchitecture ComPutationalization ConSciousness DiaSpora DisAmbiguation EmBodiment EnTelEchy ForthMindTextFile GenerationOfThought HardTakeoff HumanLevel ImMortality IndustrialEspionage InFerence JavaScript JointStewardship KbSearch KbTraversal KnowledgeBase MachineLearning MasPar MeanderingChain MetEmPsychosis MileStones MindModule MovingWave NaturalLanguageProcessing OutReach PermanentInstallation PortingOfCode ProliFeration ProsperityEngine QuIckening RecursiveSelfEnhancement ReJuvenate RoadMap RoBot RumorMill ScienceMuseums SeedAi SelfReferentialThought SemanticMemory SeTi SpreadingActivation SubConscious SuperComputer SuperIntelligence TechnologicalSingularity TelePresence TimeLine UserManual VpAi WikiPedia

labels: milestone roadmap

Cognitive Chain Reaction for AI Troubleshooting

CognitiveChainReaction
#summary Ideas looping in an endless cycle

MileStones RoadMap UserManual


=== Definition === 


A cognitive chain reaction (CCR) in a nascent AGI can be defined as a series of three or more natural-language thoughts which, in the absence of cognitive distractions, enter into an apparently infinite loop. By the process of SpreadingActivation, each thought in the loop leads to the next thought, which in turn leads to the next, and to the next, and so on, _ad infinitum_.

=== Attainment ===

Long after the nascent AGI has achieved QuIckening, and shortly after the AI Mind has achieved GenerationOfThought, an AGI coder will induce a cognitive chain reaction by entering a series of looping ideas such as the following.

{{{ Cats eat fish. Fish eat bugs. Bugs eat germs. Germs kill cats. }}}

Upon entry of the last of the looping thoughts, the AGI will associate from the last word entered back to the start of the loop, and will begin an endless repetition of the cycle.
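
The loop can be modeled in a few lines of Python (a toy illustration with invented names; the real AGI works with concept activations, not a lookup table). Each thought's object reactivates itself as the subject of the next thought:

```python
# Toy model of a CognitiveChainReaction; the data and names are invented.
FACTS = {                 # subject -> (verb, object)
    "cats":  ("eat", "fish"),
    "fish":  ("eat", "bugs"),
    "bugs":  ("eat", "germs"),
    "germs": ("kill", "cats"),
}

def chain_reaction(start, n_thoughts):
    """Generate a series of thoughts in which each object becomes the
    next subject, so the four seed ideas repeat ad infinitum."""
    subject, thoughts = start, []
    for _ in range(n_thoughts):
        verb, obj = FACTS[subject]
        thoughts.append(f"{subject} {verb} {obj}".upper())
        subject = obj     # association carries on to the object
    return thoughts
```

After the fourth thought the chain arrives back at "CATS EAT FISH" and the cycle repeats.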

=== Purpose ===

The cognitive chain reaction serves mainly as a troubleshooting device, to make sure that after changes to the AGI source code, the AI Mind is still able to think. Rather than labor to dream up items of novel input to test the AGI, the programmer enters a tried-and-true series of looping thoughts for a quick assurance that the associative mechanisms still work properly.

The actually quite mundane purpose of the cognitive chain reaction is not to achieve any sort of gee-whiz "wow!" effect, but simply to verify that the AGI can think in a repeatable set of circumstances.

=== Upshot ===

In practice with the MindForth AGI, the CCR, while a valuable diagnostic tool, has at times generated weird but still acceptable results. Until the final thought in the loop is entered, the AGI typically greets each new idea with a question, such as "BUGS...WHAT ARE BUGS?" Such a response is actually quite sophisticated, because the AGI starts to generate a thought about "BUGS" but does not know any verb with which to complete the thought. A module kicks in to ask a question about the new concept about which the AGI has no data in its KnowledgeBase.

Because the chain of thought spreads or loops only if there is sufficient activation to keep it going, sometimes there is a kind of conceptual "hiccup" in the infinite loop, where the AGI pauses to ask a question about one of the words and then answers its own question by continuing the loop. To the AGI programmer, such behavior is an indication that the pre-programmed parameters of conceptual activation may need some tweaking or refining. Typically, the coder is so eager to get on with other tasks that the tweaking is put off for a later date.

Another weird result occurs when the AGI cannot recognize a word like "fish" as being in the plural. The AGI will tend to say "FISH EATS BUGS" as part of the loop, instead of "FISH EAT BUGS." But the loop continues.

If you are demonstrating MindForth or a similar AGI in ScienceMuseums or at a robotics club, it is instructive to show that the cognitive loop may be interrupted at any time by novel input from a human user.

=== OutReach === http://agi-roadmap.org/Cognitive_Chain_Reaction is an open, collaborative page where you may contribute your own ideas and use the associated Talk page for discussion.

=== MemeSpace ===

AdminisTrivia AiEvolution AiHasBeenSolved BrainTheory CognitiveArchitecture ComPutationalization ConSciousness DiaSpora DisAmbiguation EmBodiment EnTelEchy ForthMindTextFile GenerationOfThought HardTakeoff HumanLevel ImMortality IndustrialEspionage InFerence JavaScript JointStewardship KbSearch KbTraversal KnowledgeBase MachineLearning MasPar MeanderingChain MetEmPsychosis MileStones MindModule MovingWave NaturalLanguageProcessing OutReach PermanentInstallation PortingOfCode ProliFeration ProsperityEngine QuIckening RecursiveSelfEnhancement ReJuvenate RoadMap RoBot RumorMill ScienceMuseums SeedAi SelfReferentialThought SemanticMemory SeTi SpreadingActivation SubConscious SuperComputer SuperIntelligence TechnologicalSingularity TelePresence TimeLine UserManual VpAi WikiPedia

labels: debug milestone roadmap

Generation of thought in MindForth AI

GenerationOfThought
#summary The process by which a mind generates and, in reverse, comprehends a thought.

AiEvolution MileStones RoadMap UserManual


=== What is thought? === 


In an artificial or natural mind, thought is the conscious process of naming or imagining concepts in a chain of association by SpreadingActivation from concept to concept.

Because thinking is a conscious activity, each thought emerges as a separate reality from, and as an addition to, the KnowledgeBase (KB) which provides the fuel for thought. A knowledge base in an artificial general intelligence (AGI) is not a static compendium of facts and relationships, but is rather a dynamic, constantly shifting grid of conceptual identifiers (words; images) and the growing body of propositions asserting relationships among the concepts.

=== How does an AGI think? ===

Spreading activation becomes thought in an AGI if a linguistic superstructure "rides the wave" of associations and consciously names each concept in the chain of association. In the SubConscious mind, activation spreads not as thought but as a backdrop to emerging thought. The unity of mind and ConSciousness -- the unity instantiated as self -- requires that only one thought at a time expresses itself as ideation above the teeming, roiling caldron of concepts and memories clamoring for the attention of consciousness.

=== Embodied thought ===

If every AGI were created not simply on a computer ''qua'' computer but on a computer ''qua'' brain of a robot, the sensorium and motorium of the robot would make it easier to initiate and sustain each thought emerging from the conceptual mindgrid. Sensory input would spark the activation of concepts and their attendant images, engendering a stream of thought amid the stream of consciousness.

If the robot-builders of this world can be thought of as cowboys, and if the AGI entrepreneurs in many ways are farmers, then the cowboys and the farmers should be friends. The cowboys with their monstrous, clanking contraptions must be thinking, "If I only had a brain." The farmers, with their "Seed AI" and their server farms, are afraid that people will say they're in love.

In both cases, especially amateur robotics and amateur AGI, lack of funding prevents holy MatrixMoney between the ghost in the machine and robotic embodiment. Therefore the first thoughts of the first True AGI specimens occur in computers bereft of bodies.

=== Disembodied thought ===

If we may use MindForth as an example because MindForth has already achieved thought, we see that disembodied thought must contend with a unique set of problems and circumstances. Whereas a RoBot has the world at its disposal for the initiation and maintenance of a MeanderingChain of thought, an AGI with no body has only user input to start the chain of associations flowing in a manifestation of thought. "Good enough," you might think, but what happens to the conscious thinking of the AGI if the human user walks away from the keyboard and stops entering input? What we have here is a failure to communicate, which can cause failure in the primitive AGI. MindForth compensates for the absence of a human thought-provoker by means of a special mind-module for knowledge base traversal. KbTraversal kicks in after a set period of no outside communication, and reactivates concepts held in the English bootstrap of the AGI Forthmind. KbTraversal does not reactivate thoughts. It only reactivates various concepts which may serve as the triggering mechanism for a wide variety of thoughts, depending on the contents of the KB.
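
The KbTraversal idea can be sketched in Python (hypothetical names and structure; the actual module is Forth code inside MindForth): count cycles of silence, and after a set limit reactivate the next bootstrap concept in rotation.

```python
# Illustrative sketch of KbTraversal, not the actual MindForth module.
class KbTraversal:
    def __init__(self, bootstrap_concepts, quiet_limit):
        self.concepts = bootstrap_concepts   # e.g. from the English bootstrap
        self.quiet_limit = quiet_limit       # cycles of silence tolerated
        self.quiet = 0
        self.index = 0

    def tick(self, user_input):
        """Return a concept to reactivate, or None while the user is active."""
        if user_input:
            self.quiet = 0                   # outside communication resets the clock
            return None
        self.quiet += 1
        if self.quiet < self.quiet_limit:
            return None
        self.quiet = 0
        concept = self.concepts[self.index % len(self.concepts)]
        self.index += 1                      # next silence activates the next concept
        return concept
```

Note that the module hands back a bare concept, not a thought: the concept merely seeds whatever thought the current contents of the KB will support.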

Other mechanisms to facilitate disembodied thought may include an AGI feature of asking a question about any new word entered by the human user but not yet known to the emerging AGI. Such a question-asking mechanism is not as arbitrary and needlessly artificial as it may seem. When the MindForth AGI encounters a previously unknown English noun, it tries to generate a sentence of thought using the new noun as the subject of the sentence. For instance, upon first introduction of "books," it may say, "BOOKS... WHAT ARE BOOKS?" The first instance of "BOOKS" is actually the attempted generation of a sentence, the formation of a thought in the artificial mind. But the thought fails and is aborted, because SpreadingActivation cannot flow from "BOOKS" to any verb known in association with "BOOKS". Then a special module kicks in to ask a question about the mysterious new word. Such a module facilitates achieving one of the "Holy Grail" goals of AGI -- MachineLearning (ML).
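
The question-asking behavior can be sketched like this in Python (an illustrative stand-in for the Forth modules; the knowledge table and names are invented):

```python
# Illustrative sketch of the question-asking mechanism, not MindForth code.
KNOWLEDGE = {"cats": "eat"}    # noun -> a verb known in association with it

def respond(noun):
    """Try to think a sentence about the noun; ask a question on failure."""
    verb = KNOWLEDGE.get(noun)
    if verb is None:
        # Thought generation aborts: no verb is activated by the noun,
        # so a question module asks the user for the missing knowledge.
        return f"{noun.upper()}... WHAT ARE {noun.upper()}?"
    return f"{noun.upper()} {verb.upper()} ..."
```

The user's answer to such a question is itself new input, which is how the machine learns.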

=== OutReach === http://agi-roadmap.org/Generation_of_thought is the open, collaborative page where you may contribute your own ideas and use the associated Talk page for discussion.

=== MemeSpace ===

AdminisTrivia AiHasBeenSolved AiMind BrainTheory CognitiveArchitecture CognitiveChainReaction ComPutationalization ConSciousness DeBug DisAmbiguation EmBodiment EnArticle ForthMindTextFile GroupThink HumanLevel InFerence InPut IntelligenceQuotient KbSearch KbTraversal KnowledgeBase MachineLearning MasPar MeanderingChain MileStones MindForth MindGrid MindMeld MindModule MovingWave NaturalLanguageProcessing OldestLivingAiMind OutPut OutReach PermanentInstallation PortingOfCode ProliFeration ProsperityEngine PsychoSurgery QuIckening RecursiveSelfEnhancement RoadMap RoBot SeedAi SelfReferentialThought SemanticMemory SloshOver SpreadingActivation SubConscious SuperIntelligence SuperStructure SynTax UserManual

labels: milestone roadmap

Quickening


#summary The milestone from which an artificial intelligence lives potentially forever

AiEvolution MileStones ReJuvenate RoadMap UserManual


=== Definition ===

_Quickening_ is the stage in AGI development when the main software module of the AGI comes alive as a program that runs continuously. We derive the term quickening from the notion of the quick and the dead as opposites, and from the idea of the quickening in the womb, when an expectant mother begins to feel the movements of the baby growing within her.

AGI development is different from other software development. If you write a program that performs a single operation and then stops, the software does not quicken and come alive. If you write a chess program that answers your every move with a move of its own, the software does not quicken, because the software does not hum with activity while it waits for you. If you write a home-monitoring software system that constantly checks your burglar-alarm system and the temperature and the presence or absence of smoke in the air, that software has not truly quickened because it is only passively waiting for something to happen, like a word-processor waiting for you to type in a word.

When you are creating an artificial general intelligence (AGI), at some point the software has to quicken by running indefinitely. An AGI has to iterate endlessly through its MainLoop or the equivalent thereof. It must not run one time and then stop. It must be a form of artificial life (alife).

The early milestone stage of Quickening is not hard to achieve, but it does draw out from you some careful planning, such as in the area of what sort of modules you want to be the constituent parts of your fullblown AGI. To get your AGI software to quicken, you only need to stub in most of the modules and to elaborate at least one of them enough for the AGI to exhibit some rudimentary behavior that demonstrates the quickening. Any module for the user interface is a good one to code more profoundly than the mere stubs, because the user interface makes it obvious that something is happening inside the AGI. The event happening in the AGI may not yet be thinking or self-awareness or superintelligence, but you may have the user interface do something like counting elapsed time or declaring various system parameters such as available memory space or open channels of communication.

As in human evolution, where ontogeny recapitulates phylogeny and therefore a baby in the womb goes through fishlike stages that disappear before a human being is born, your neonatal AGI may include software elements that you will remove as the AGI matures.

Beware, however, of one trap. Do not let the performance of the AGI software be dependent on the presence of input from a human user. Your emerging AGI is not some unintelligent, idiot word-processor. Only give the human user a window of opportunity to enter input or to communicate somehow (send a tweet?) with the machine intelligence. Slam the window shut periodically and let the AGI do its own thing for a few looping cycles. Then check again for user input. Let nothing be a show-stopper. Eventually your AGI, or somebody's AGI, is going to outlive all human beings currently alive on this planet. Get used to it, get over it, and get on with it.
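
The advice above amounts to a main loop of roughly this shape, sketched here in Python with invented names (a real quickened AGI would loop forever and poll for input without blocking):

```python
# Schematic sketch of a quickened MainLoop; names are hypothetical.
def main_loop(get_input, think, cycles):
    """get_input() returns a string or None (the brief input window);
    think() performs one cognition step regardless of input."""
    log = []
    for _ in range(cycles):          # 'while True' in a real quickened AGI
        text = get_input()           # window of opportunity, never a blocker
        if text is not None:
            log.append(("heard", text))
        log.append(("thought", think()))   # the AGI does its own thing
    return log
```

The essential property is that `think()` runs on every cycle whether or not the human said anything: input is an opportunity, never a show-stopper.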

=== Variations ===

Normally the software of an artificial mind quickens a first time and then many more times in the course of trial runs, especially during PortingOfCode from one programming language to another. If the AiMind were coded in a Dylanesque dynamic language that can be changed on the fly, conceivably an AI could quicken at a young age and continue running even as modifications were performed on the running code.

Although Quickening starts when the Mind program runs indefinitely, Quickening fails if the AiMind runs out of memory space. As a mind designer you have two choices: A) provide infinite memory; or B) provide infinite looping. If you are a deity designing human beings, you only need to provide enough memory for a lifetime. You may be designing a potentially immortal AiMind, in which case, God, that's a hard one. You work wonders in mysterious ways, but we mortal human beings must design our immortal successor species more pragmatically. Since the MainLoop itself does not have infinite memory at its disposal, we use the ReJuvenate module to loop through a finite memory space as if it were immortal, I mean, infinite. The larger the memory space in the AiMind, the less frequently the ReJuvenate module has to kick in. If computer memory becomes really cheap and plentiful, or if the PermanentInstallation of a mission-critical AI entity warrants astronomical quantities of memory no matter what the cost, an AiMind could dispense with the ReJuvenate module and not worry about running out of memory in the far and distant future long after the first stirring and quickening of the AI Mind.
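
One way to picture the ReJuvenate idea is as a finite memory that sheds its oldest engrams to free space at the recent end, sketched here in Python (an illustration only; MindForth's actual ReJuvenate shifts engrams within Forth memory arrays):

```python
# Illustrative sketch of the ReJuvenate idea, not the MindForth module.
def rejuvenate(memory, forget=1):
    """Discard the oldest `forget` engrams and shift the rest down,
    freeing slots at the recent end of the finite memory space."""
    kept = memory[forget:]
    return kept + [None] * forget

def store(memory, engram):
    """Store into the first free slot, rejuvenating when memory is full,
    so a finite memory loops on as if it were infinite."""
    if None not in memory:
        memory = rejuvenate(memory)
    memory[memory.index(None)] = engram
    return memory
```

The larger the memory, the less often `rejuvenate` has to run, which is exactly the trade-off described above.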

=== OutReach === http://agi-roadmap.org/Quickening is the open, collaborative page where you may contribute your own ideas and use the associated Talk page for discussion.

=== MemeSpace === AdminisTrivia AiEvolution BottomUp CodeComplete ImMortality LifeQuaScienceFiction MindGrid PermanentInstallation ReJuvenate TippingPoint UserManual

labels: milestone roadmap

MileStones


MileStones
#summary Stages of development already achieved and yet to be achieved by MindForth


AiEvolution AiHasBeenSolved RoadMap TimeLine

An artificial general intelligence (AGI) must pass through various stages of development on the way from the start of an AGI project to a fully realized AGI.

== Already Achieved ==

=== QuIckening ===

=== GenerationOfThought ===

=== CognitiveChainReaction ===

=== MeanderingChain of Thought ===

=== SelfReferentialThought ===

== Yet to be Achieved ==

=== EmBodiment in a RoBot ===

=== PortingOfCode ===

=== ConSciousness ===

=== PermanentInstallation ===

=== DiaSpora ===

=== SuperComputer Installation ===

=== MetEmPsychosis ===

=== ProliFeration ===

=== MasPar ===

=== HumanLevel ===

=== RecursiveSelfEnhancement ===

=== SuperIntelligence ===

=== TechnologicalSingularity ===

== OutReach ==

http://agi-roadmap.org/Milestones

== Memespace ==

CognitiveArchitecture DreamTeam HardTakeoff ImMortality IndustrialEspionage JointStewardship LandRush OldestLivingAiMind OpenSource ScienceMuseums SeTi TippingPoint UserManual VentureCapital VpAi WikiPedia

labels: future overview roadmap

20 May 2009 (updated 20 May 2009 at 19:43 UTC) »
MindForth Programming Journal - wed20may2009

1. Wed.20.MAY2009 -- AI MINDS FOR CONSCIOUS ROBOTS

So many robots need to have an AI Mind installed, and MindForth is tantamount to the VisiCalc of artificial intelligence, so we now rush to add feature after feature to the budding robot AI. Recently we made MindForth able to InStantiate a singular noun upon encountering a plural English noun in the auditory robotic input stream. If you tell the robot AI4U Mind something about birds, it now sets up the singular noun "bird" as a concept. Then we encoded an algorithm of assuming from input of the article "a" that the next noun after "a" is a singular noun. If you say that you wish to manufacture "a conscious robot", the article "a" sets a flag that skips the adjective "conscious" and assigns the singular "num(ber)" to whatever noun (e.g., "robot") comes next. (And with AI4U technology we are indeed helping you to manufacture a conscious robot.) Next we need to ensure that the emerging conscious AI Mind will use the right form of an English verb when, for example, it talks about a singular noun. Simply put, the software needs to put "s" on the end of a verb after a third-person singular noun.
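
Stated as a bare rule, the inflection logic amounts to the following Python sketch (illustrative only; the working implementation is the Forth SpeechAct code quoted further on, which excludes the subjects "I" and "YOU" just as this sketch does):

```python
# Illustrative sketch of the third-person-singular inflection rule.
def inflect(subject, verb, subject_is_singular):
    """Append "s" to a regular English verb when the subject is a
    third-person singular noun; "I" and "YOU" take the bare verb."""
    if subject_is_singular and subject.upper() not in ("I", "YOU"):
        return verb + "s"
    return verb
```

Irregular verbs and spelling changes (e.g. "goes", "tries") are beyond this sketch, as they are beyond the quoted Forth code.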

2. Wed.20.MAY.2009 -- THIRD-PERSON SINGULAR VERB INFLECTION

The "nphrnum" variable is set in the NounPhrase module and keeps track of whether a noun is singular or plural. The "vpos" variable is set in VerbPhrase and is used in the following SpeechAct code.


pho @ 32 = IF  \ 20may2009 If engram is a blank space...
  vpos @ 1 = IF    \ 20may2009 If a verb is being spoken
    nphrnum @ 1 = IF  \ 20may2009 If subject is singular
      subjpsi @ 50 = NOT IF  \ 20may2009 If subject not "I"
        subjpsi @ 56 = NOT IF  \ 20may2009 If not "YOU" talking
          83 pho !  \ 20may2009 Insert inflectional "S" pho.
          1 spacegap !  \ 20may2009 Prepare one space-gap.
          0 vpos !    \ 20may2009 Reset after use
          0 nphrnum !  \ 20may2009 Reset after use.
        THEN  \ 20may2009 End of test to avoid subject "YOU"
      THEN  \ 20may2009 End of test to avoid subject "I"
    THEN  \ 20may2009 End of test for a singular subject
  THEN  \ 20may2009 End of test for a verb being spoken
  pho @ EMIT  ( say or display "pho" )
  1 audstop !  \ A flag to stop SpeechAct after one word
THEN \ 1jan2008 One last call to Audition
35 pov !  ( internal point-of-view ASCII 35 "#" like mindgrid )
AudInput    ( 16oct2008 for reentry of a thought back into the mind )
audstop @ 1 = IF  \ 20may2009 Adding one pho=32 space bar
  spacegap @ 1 = IF  \ 20may2009 If an "S" has been added...
    32 pho !  \ 20may2009 Carry pho=32 "space" into AudInput
    AudInput  \ 20may2009 For the reentry of one space.
    0 spacegap !  \ 20may2009 Reset spacegap after use.
  THEN   \ 20may2009 End of test for spacegap of one space.
  LEAVE  \ 20may2009 Abandon the looping through auditory memory
THEN  \ 1jan2008 Having spoken one word.

The above code not only adds an "S" to a standard English verb being used in the third person singular, but also causes the proper reentry of the inflected verb form back into the AI Mind. Whereas only the stem of the verb is retrieved from auditory memory, after the addition of "S" for inflection during thought, the inflected form of the verb now enters the auditory memory.

http://www.scn.org/~mentifex/mindforth.txt
http://mentifex.virtualentity.com/m4thuser.html

MindForth Programming Journal - mon11may2009

1. Mon.11.MAY2009 -- ATTACKING THE SINGULAR-STEM PROBLEM

Today we would like to work on getting the AI to recognize both singular stems and plural forms of a standard English noun. Perhaps we will start out by trying to see if we can have the AI instantiate nouns ending in "s" while going back and assigning the "audpsi" ultimate tag to the penultimate phoneme which is the end of the singular stem.

Looking at the AudMem code, we realize that we need to get several new influences in there to cause the AudMem module to dip back one unit in time and assign the "audpsi" value to the penultimate phoneme that marks the end of the stem. We want the word to be a noun and to be ending in "s". We might get away with disregarding whether the word is a noun. We could just look for all words ending in "s" and then we could plunk down the "audpsi" ultimate tag not only on the final "s" phoneme but also on the penultimate phoneme. Serendipitously, in this way we would also manage to tag the stem of a third-person singular verb with an audpsi, as in, "He works," where a penultimate audpsi would identify the concept. So we get out our ASCII chart and we see that uppercase "S" is "83" in ASCII. We will test the end of words for an "S" and assign the "audpsi" value to both the final and the penultimate phoneme.
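
The tagging scheme can be sketched in Python (a hypothetical data layout of [phoneme, ctu, audpsi] rows, one per stored character; the real aud{ array in MindForth has more columns):

```python
# Illustrative sketch of penultimate audpsi tagging, not AudMem itself.
def store_word(aud, word, audpsi):
    """Append one [char, ctu, audpsi] row per phoneme.  ctu=1 means the
    word continues; the final row gets ctu=0.  For words ending in "S",
    the audpsi concept tag also goes on the penultimate row, so the
    bare singular stem can be recognized later."""
    for i, ch in enumerate(word.upper()):
        last = (i == len(word) - 1)
        aud.append([ch, 0 if last else 1, 0])
    aud[-1][2] = audpsi                  # ultimate tag on the final phoneme
    if word.upper().endswith("S") and len(word) > 1:
        aud[-2][2] = audpsi              # penultimate tag marks the stem
    return aud
```

As the entry notes, the same trick serendipitously tags the stem of a third-person singular verb such as "works".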

2. Tues.12.MAY.2009 -- SPOOFING THE INPUT STREAM

We will try now to achieve singular-stem recognition from plural nouns not only by putting an audpsi ultimate-tag on the penultimate phoneme, but also by setting the "ctu" continuation flag to zero ("0") in the penultimate position, so that our software will "think" that it has recognized a whole word instead of just a stem inside a noun-plural.

In the AudInput module, we have been putting our new code into the area for internal reentry. Perhaps the new code belongs in the area for external input.

Hey, perhaps all this new code should be in NewConcept, because we are trying to deal with previously unknown noun-stems.

Gee, we might try something really radical. We have created a variable


variable newpsi   ( 12may2009 for singular-nounstem assignments )

so that we can be sure to assign "ctu=0" and "audpsi" only to noun-stems of a plural word coming in during user input. It might be quite radical, but useful, to put the "newpsi" value just before all incoming "S" phonemes, not just final, end-of-word "S" phonemes. We would assume that such assignments would not cause any problems for "ctu=1" word-engrams. Then, when the word-engram was finalized, we could go back in and set the "ctu=0" value to permit future recognition of the noun-stem.

3. Wed.13.MAY.2009 -- ALMOST CODE-COMPLETE

The variable "newpsi" obtains no positive (above zero) value until NewConcept is called -- which happens when?

With a new "wordend" variable we finally achieved the basic functionality that we have been seeking for the past three days, but with a few minor glitches. We used the following AudMem code.


    \ 13may2009 In AudMem as called towards end of AudInput:
    pov @ 42 = IF  \ 12may2009 Only during external input
      pho @ 83 = IF  \ 12may2009 If phoneme is "S"
        \ CR ." S pho & newpsi = " pho @ . ."  " newpsi @ .  \ 12may2009 test
        \ ." time t = " t @ .  \ 12may2009 test
        newpsi @ t @ 1- 5 aud{ !  \ 12may2009 pre-"S" audpsi
        wordend @ 1 = IF  \ 13may2009 If word has ended
          CR ." audpsi = " audpsi @ .  \ 13may2009 a test.
          \ audpsi @ 0 = IF  \ 13may2009 Change ctu only for new words.
          0 t @ 1- 4 aud{ !  \ 13may2009 As if "no continuation".
          \ THEN  \ 13may2009 End of test for known word.
        THEN  \ 13may2009 End of test for end of word
        0 newpsi !  \ 12may2009 Reset for safety.
      THEN  \ 12may2009 End of test for "S"
    THEN  \ 12may2009 End of test for external input.

In an upcoming version of MindForth, we need to overcome the minor glitches. One glitch is that the AI is setting "ctu" to zero on both the penultimate and ultimate array-row of a plural word that has just previously been learned. We would prefer that the known plural word only have ctu=0 in the final row. Another glitch is that the new code is working only after a previously unknown verb is used. It should be relatively simple to remove that particular glitch.

4. Thurs.14.MAY.2009 -- STUMPED AND STYMIED

Of the two glitches we need to work on, the more important one, and also probably easier to solve, is the problem of the new code not working with a known verb from the EnBoot English bootstrap.

In the following transcript, the new stem-rec code does not work after we use the English bootstrap verb "know".


Transcript of AI Mind interview at 7 39 3 o'clock on 14 May 2009.
i know books
 S pho & newpsi = 83   0 time t = 215


Robot: BOOKS WHAT ARE BOOKS

When we use the previously unknown verb "use" in the following transcript, the stem-recognition code works just fine. Why? What is the difference between using an old or a new verb? Here we say "i use books" to the AI Mind.


Transcript of AI Mind interview at 7 40 7 o'clock on 14 May 2009.
i us
 S pho & newpsi = 83   0 time t = 207 e books
 S pho & newpsi = 83   77 time t = 214
 audpsi = 0


Robot: BOOKS WHAT ARE BOOKS

There must be a hidden influence in either OldConcept, or NewConcept, or both, because one or the other module is invoked for the verb, depending upon whether the verb is "old" or "new."

By means of some diagnostic code in AudMem, we have just learned that the "newpsi" variable has a value of zero after a verb from the English bootstrap.


Transcript of AI Mind interview at 21 8 34 o'clock on 14 May 2009.
i know books
 AudMem: backstoring newpsi 0

We may want to troubleshoot the "newpsi", or perhaps just replace it with a "stempsi" variable.

5. Fri.15.MAY.2009 -- PINPOINTING THE PROBLEM

Theoretically we should be able to see an audpsi Aud{ engram and be able to figure out exactly how and why that particular value got placed there. But we have been having extreme difficulty over this past week.

Bingo! In the "ELSE" (if no old concept, declare new concept) area of AudInput, we have found one ancient line of code that has been causing all our grief for the past week.


            nen @  tult @  5  aud{ !  \ Store new concept psi-tag.
        THEN          \ end of test for a positive-length word;
      THEN              \ end of test for the presence of a move-tag;
      AudDamp           \ Zero out the auditory engrams.
That top line in the snippet above has white space that made it not show up when we searched for "5 aud{ !" in the source code.

Okay, now we actually have to rename 14may09B.F as 15may09A.F and continue working with the new version designated properly for today, because now we have an actual prospect of implementing a correct algorithm for recognizing singular noun-stems within new plural nouns.

Well, we had a good scare in our maintaining of functionality today. Apparently the following block of new code in the AudInput module was making our AI lose its ability to recognize "I" properly. When we comment out the code below, the ability comes back.


    pho @ 83 = IF  \ 15may2009 If the word ends in "S"
      ctu @ 0 = IF  \ 15may2009 If word is ending
        0 t @ 1- 4 aud{ !  \ 15may2009 As if "no continuation".
      THEN  \ 15may2009 End of non-continuation test
    THEN  \ 15may2009 End of test for "S"

The problem caused us to backtrack to 14may09B.F and use it to create 15may09B.F, which we deleted after identifying the problem presented above. In the short time spent coding 15may09A.F we had already added some useful code that we did not want to re-create, so we persisted in troubleshooting that version.

6. Fri.15.MAY.2009 -- REMARKS

The current 15may09A.F code has a lot of "Junk DNA" in it, because it took us several days to locate and fix the problem. Now we need to gradually remove the many instances of test code, and devise a solution for the glitch of not always having the desired penultimate setting of "ctu" from one to zero.

7. Sat.16.MAY.2009 -- ACCEPTING A RADICAL CHANGE

Today we will re-constitute 15may09B.F as a clone of 15may09A.F and we will strip away the excessive noun-stem-related comments. Then we will name a new copy of the cleaned-up code as 16may09A.F, so that we can continue coding while still having the cleaned-up code in the 15may09B.F archive.

Here is a plan. In AudMem we could constantly test for S=83 and set ctu to zero upon finding "S", while also going back and switching changed ctu values back again to "1". To avoid going back too far, we could re-switch the changed ctu values merely upon finding a non-end-of-word.

Or we could make ctu=0 the default, constantly switching it to one retroactively, except when an "S" is encountered. Or we could change the whole AudMem system, and make it include a pho=32 space-bar at the end of each word, so that we would not have to do much retroactive adjustment.
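The second plan above -- making ctu=0 the default and switching it to one retroactively, except when an "S" is encountered -- can be sketched outside of Forth. The following is a hypothetical Python illustration, not mind.frt code; the ASCII values 32 (space) and 83 ("S") follow the journal, but the row layout and the name store_word are assumptions made for the sketch:

```python
SPACE, S = 32, 83

def store_word(phonemes):
    """Store a word one phoneme at a time, defaulting ctu to 0 and
    retroactively switching the previous row to ctu=1 whenever another
    phoneme arrives -- except that an incoming "S" leaves the previous
    row at ctu=0, so the singular stem stays recognizable."""
    rows = []  # each row: [pho, ctu]
    for pho in phonemes:
        if pho == SPACE:
            break  # word is over; last row keeps its default ctu=0
        if rows and pho != S:
            rows[-1][1] = 1  # retroactive: previous pho continues
        rows.append([pho, 0])  # default: assume the word ends here
    return rows

# "BOOKS " -> B,O,O carry ctu=1; K stays ctu=0 (stem end before "S")
print(store_word([66, 79, 79, 75, 83, 32]))
# [[66, 1], [79, 1], [79, 1], [75, 0], [83, 0]]
```

Defaulting to ctu=0 means the stem end before a final "S" needs no retroactive repair afterwards, which is the attraction of this plan.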

8. Sat.16.MAY.2009 -- MODIFYING THE AUDITORY MEMORY

Let's just jump right in and see what happens when we include an ASCII 32 space-bar at the end of each word and as part of each word. We are eager to hurry up and put some new AI up on the Web. The sooner we get the terminated-word code up on the Web, the sooner we establish the design as a kind of standard for AI prototypes.

Now we have gone in and added an extra row to each word in the EnBoot sequence. We will try to run the code, but we do not expect it to work.

Hmm.... The code did indeed run, but the thinking had gone haywire. We achieved no cognitive chain reaction; that is, we were not able to enter four sentences and get the AI to think in an "infinite" loop. But now we get to troubleshoot and debug. The chore of changing the EnBoot sequence is done; we just have to make the rest of the program adjust to the changed EnBoot.

Now we are going to change some AudInput code that reacts to a space-bar pho=32. Instead of retroactively setting the ctu-flag to zero at "tult", we are going to set the ctu-flag at the current "t" time, because each word is surely at an end now.

Next we had better tend to problems in the AudMem code, because the EnBoot module is no longer filling in the audpsi concept numbers in the auditory memory channel. Therefore none of the bootstrap words are being recognized.

The major problem right now after the EnBoot change-over is that the AI is not recognizing any words. We may have to troubleshoot the AudRecog module.

9. Sun.17.MAY.2009 -- RE-ESTABLISHING AudRecog FUNCTIONALITY

There is obviously some tiny little glitch preventing the new, EnBoot-altered MindForth from recognizing a single word. Here we type in "you and i".


Transcript of AI Mind interview at 6 52 53 o'clock on 17 May 2009.
yARc:0 ARc:0
audpsi=0 oARc:0 ARc:0 ARc:ctu=1
audpsi=0 uARc:ctu=1
audpsi=0
audpsi=0 aARc:0 ARc:0 ARc:0 ARc:0 ARc:0
audpsi=0 nARc:0 ARc:0 ARc:ctu=1 ARc:0
audpsi=0 dARc:0 ARc:ctu=1
audpsi=0
audpsi=0 iARc:0 ARc:0 ARc:0 ARc:0 ARc:0 ARc:0
audpsi=0

There must have been two instances of initial "Y" in auditory memory, for us to see two diagnostic messages. Or maybe they were just general instances of "Y". The words "YES" and "YOU" in the EnBoot sequence have initial "Y", and the words "WHY" and "THEY" have non-initial "Y".

Maybe we should start (or resume?) putting commented-out diagnostic tools inside the crucial AudRecog module, so that we may quickly troubleshoot any future problems.

When we knock out AudDamp temporarily in order to see what activations are building up on auditory engrams during the recognition of input "you", we see the following differential build-up on the EnBoot engram of YOU.


74 Y 0 # 1 1 0
75 O 8 # 0 1 0
76 U 10 # 0 1 0
77   10 # 0 0 56

That record shows a good, healthy build-up.

10. Sun.17.MAY.2009 -- REVERTING TO THE OLD EnBoot

It is proving too hard to get the auditory memory to include ASCII pho=32 spaces as the final element in each English word. Therefore we are abandoning the code of last night and today and we are reverting to the 15may09B.F cleaned-up version.

The "ctu" value is rather sacred, because it plays a central role in the recognition of a word as an "audpsi" concept. Whether we enjoy it or not, we will have to do some retroactive resetting of "ctu".

For the setting of "ctu" in the current circumstances, the most important thing is to detect that a final "S" has come in, as shown by a terminating pho=32. Therefore, without relying on "wordend", we should simply trap for "S" and for pho=32. When pho is 32, we should check whether "prepho" is ASCII S-83, so we need to have prepho available. Actually, we may need a system that keeps track of three elements: the current pho=32; the previous "S"; and the element before the "S". Or do we? For that earlier element we need only see that it is positive.
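The prepho bookkeeping just described can be illustrated outside of Forth. Assuming ASCII 32 for space, 13 for carriage return, and 83 for "S" (as in the journal), here is a hypothetical Python sketch of the trap; detect_final_s is an illustrative name, not a mind.frt word:

```python
SPACE, CR, S = 32, 13, 83

def detect_final_s(phonemes):
    """Scan a stream of ASCII phoneme codes and report the 0-based
    positions of any "S" that is immediately followed by a word break
    (space or carriage return) -- i.e. a plural-style final "S"."""
    positions = []
    prepho = 0  # previous phoneme, as in the Forth variable prepho
    for i, pho in enumerate(phonemes):
        if pho in (SPACE, CR) and prepho == S:
            positions.append(i - 1)  # the final "S" sits one slot back
            prepho = 0               # zero out prepho after use
        else:
            prepho = pho
    return positions

# "BOOKS " -> the "S" at index 4 is word-final
print(detect_final_s([66, 79, 79, 75, 83, 32]))  # [4]
```

Note that only two remembered elements (pho and prepho) turn out to be needed; the element before the "S" matters only insofar as it is positive, i.e. the word has a stem at all.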

11. Sun.17.MAY.2009 -- SOLVING THE SINGULAR NOUN-STEM PROBLEM

After seven days of arduous AI coding, we seem finally to have solved the problem of getting the AI to accept a plural English noun ending in "s" while assigning the concept number to the singular stem. We used the following AudInput test-code in the area that deals with pov=42 external input, not with internal reentry, because new words are not learned during the reentry of thoughts.


      \ 17may2009 Testing for SP-32 or CR-13:
      pho @ 32 = pho @ 13 = OR IF  \ 17may2009
        pho @ 13 = IF 10 EMIT THEN  \ 17may2009
        ." AIptExtPho=" pho @ .    \ 17may2009 test
        ."  AIptExtPrepho=" prepho @ .    \ 17may2009 test
        prepho @ 83 = IF   \ 17may2009 If previous pho was "S"
          \ 17may2009 In next line, time t may not have advanced.
          0 t @ 1 - 4 aud{ !  \ 17may2009 set ctu=0 before "S" end.
          0 prepho !  \ 17may2009 Zero out prepho after use.
        THEN  \ 17may2009 End of test for external final "S"
      THEN  \ 17may2009 End of test for external space-bar 32.

http://www.scn.org/~mentifex/mindforth.txt

MindForth Programming Journal - sun10may2009

1. Sun.10.MAY.2009 -- RESTORING KbTraversal FUNCTIONALITY

Yesterday in 9may09A.F we further de-globalized the psi-group of variables by reclaiming "oldpsi" from the SpreadAct module for use in the OldConcept module. We created "cogpsi" for use in SpreadAct. A lot of our compromised functionality returned when we had de-globalized OldConcept, but KbTraversal stopped working, so today we need to troubleshoot KbTraversal.

We went into KbTraversal and we used "nacpsi" instead of "psi" just before calling NounAct. Thus we restored the functionality of KbTraversal.

2. Sun.10.MAY.2009 -- RESTORING who-are-you FUNCTIONALITY

When we delve back into the who-are-you problem, we discover from the following .psi data that we have lost the ability of the AI to answer a who-are-you query.

23apr09A.F:
207 : 55 13 0 0 0 5 67 55 to WHO
211 : 67 16 0 55 55 8 50 67 to ARE
215 : 50 11 0 67 67 7 67 50 to I
217 : 50 11 0 50 0 7 57 50 to I
220 : 57 15 0 50 50 8 50 57 to AM
222 : 50 36 0 57 57 7 0 50 to I
time: psi act num jux pre pos seq enx

10may09A.F:
207 : 55 13 0 0 72 5 67 55 to WHO
211 : 67 15 0 55 55 8 50 67 to ARE
215 : 50 58 0 67 55 7 67 56 to YOU
219 : 56 55 0 50 0 7 67 56 to YOU
223 : 67 15 0 56 56 8 56 67 to ARE
227 : 56 36 0 67 56 7 0 56 to YOU
time: psi act num jux pre pos seq enx

It turns out that one line of code in the block below was taking away the who-are-you functionality. By erroneously using "audpsi" as the source of the transfer-to-English "enx" value, we were nullifying the POV-based decisions from the immediately preceding code. When we abandoned "audpsi" as the source of "enx" and used "oldpsi" instead, the functionality of the who-are-you feature quasi-magically was restored.

\ The Robot Mind as a seed AI for Technological Singularity
\ approaches artificial consciousness in the following code:
\ pov @ 35 = IF fex @ psi ! THEN \ during internal (#) "pov";
pov @ 35 = IF fex @ oldpsi ! THEN \ 9may2009 during internal (#) "pov";
\ pov @ 42 = IF fin @ psi ! THEN \ external (*) "pov"
pov @ 42 = IF fin @ oldpsi ! THEN \ 9may2009 external (*) "pov"
\ psi @ enx ! \ Assume Psi number = En(glish) number.
\ audpsi @ enx ! \ 9may2009 Assume audpsi number = En(glish) number.
oldpsi @ enx ! \ 10may2009 Assume oldpsi number = En(glish) number.

Below we see that the input of "you" is properly interpreted as a self-referential "I" in the AI Mind. Thus the AI is able to answer the who-are-you query with an "I AM" statement.

191 : 50 11 0 0 50 7 75 50 to I
196 : 75 35 0 0 50 8 72 75 to HELP
201 : 72 0 2 0 50 5 0 72 to KIDS
207 : 55 13 0 0 0 5 67 55 to WHO
211 : 67 13 0 55 55 8 50 67 to ARE
215 : 50 11 0 67 55 7 67 50 to I
217 : 50 11 0 50 0 7 57 50 to I
220 : 57 15 0 50 50 8 50 57 to AM
222 : 50 36 0 57 50 7 0 50 to I

We were afraid that we might have to do some deep troubleshooting of the assignment of "fin" and "fex" and "enx" tags. Luckily, a cursory inspection of the recent changes in the OldConcept code gave us an idea of what to try, and it worked.

3. Sun.10.MAY.2009 -- RESTORING AudRecog OF SINGULAR NOUN-STEMS

The following .psi report after entering "i know book" and "books teach people" shows that MindForth has regained the ability to detect a singular noun stem, an ability that was lost in the 6may09A.F version that started to de-globalize the variables. It was probably a problem in AudRecog, and the thorough de-globalizing of AudRecog made it necessary also to de-globalize OldConcept and some other mind-modules.

205 : 56 35 0 0 0 7 61 56 to YOU
210 : 61 0 0 56 56 8 76 61 to KNOW
215 : 76 10 0 61 56 5 0 76 to BOOK
220 : 76 10 0 76 0 5 66 76 to BOOK
225 : 54 0 0 76 76 5 0 54 to WHAT
228 : 66 0 2 54 54 8 76 66 to IS
233 : 76 10 0 66 54 5 0 76 to BOOK
239 : 76 10 2 76 0 5 77 76 to BOOKS
245 : 77 11 0 76 76 8 37 77 to TEACH
252 : 37 13 0 77 76 5 0 37 to PEOPLE
259 : 37 13 0 37 0 5 70 37 to PEOPLE
264 : 70 14 0 37 37 8 1 70 to HAVE
266 : 1 15 0 70 37 1 71 1 to A
271 : 71 36 1 1 37 5 0 71 to FEAR
time: psi act num jux pre pos seq enx
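The legend line under these reports ("time: psi act num jux pre pos seq enx") fully determines the column layout, with the "to WORD" suffix serving as a human-readable gloss. As an illustration only, here is a hypothetical Python parser for one report line; parse_psi_line is not part of MindForth:

```python
from collections import namedtuple

# Field names follow the legend printed under each .psi report.
PsiRow = namedtuple("PsiRow", "time psi act num jux pre pos seq enx word")

def parse_psi_line(line):
    """Parse one line of a MindForth .psi diagnostic report, e.g.
    '239 : 76 10 2 76 0 5 77 76 to BOOKS', into named fields."""
    time_part, rest = line.split(" : ", 1)
    fields = rest.split()
    nums = [int(n) for n in fields[:8]]  # psi act num jux pre pos seq enx
    word = fields[-1]                    # the gloss after "to"
    return PsiRow(int(time_part), *nums, word)

row = parse_psi_line("239 : 76 10 2 76 0 5 77 76 to BOOKS")
print(row.psi, row.num, row.seq, row.word)  # 76 2 77 BOOKS
```

Reading the example row with named fields makes the report self-explanatory: concept 76 (BOOK) appears with num=2 (plural) and a seq link to concept 77 (TEACH).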

Although the de-globalizing was accompanied by substantial grief and worry, it should be considerably easier to troubleshoot code that has been successfully de-globalized, because it is easier to pin down the operation of local variables that play out their effects within their own mind-module.

A.T. Murray
--
See also http://aimind-i.com
http://code.google.com/p/mindforth
http://agi-roadmap.org/Roadmap_Drafts
http://www.scn.org/~mentifex/mindforth.txt

Comparison of NL Comprehension Systems

MindForth as a natural-language (NL) comprehension system

In the MindForth artificial general intelligence (AGI) system, natural-language (NL) generation and comprehension are a two-way street. Just as MindForth generates a thought in English by letting spreading activation link concept to concept to concept in a linguistic superstructure sprawling over the conceptual mindgrid, likewise NL comprehension in MindForth consists in laying down linkages from concept to concept to concept, so that the idea being comprehended is recoverable or regenerable at some future time when spreading activation follows a meandering chain of thought to the comprehended idea, or proposition, or assertion. Being still a primitive AGI, MindForth can comprehend only primitive natural language. In comparison with other NL comprehension systems, MindForth most likely stands out as being based on its own project-specific theory of mind, which relies not on any ontological knowledge-base (KB) but rather on a conceptual knowledge-base as the substrate for both generation and comprehension. Other systems may generate responses to KB queries without actually generating a conceptual thought, and may therefore be incapable of comprehension for lack of conceptual underpinnings. To assess the capability of an AGI system in NL comprehension, one should look for the change in state that occurs before and after the input of the NL material to be comprehended. In MindForth, the input is integrated not only with the knowledge-base as a raw assertion, but may also be integrated in a broader sense as MindForth expands its ability to think recursively and inferentially about the raw assertions which it incorporates into its knowledge base.
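The claim that comprehension consists in laying down concept-to-concept linkages, recoverable later by spreading activation, can be miniaturized in a toy sketch. The following hypothetical Python code is in no way the Forth implementation: a dictionary stands in for the conceptual mindgrid, and the pre/seq pairs only loosely mirror the .psi tags of the reports above.

```python
def comprehend(sentence, memory):
    """Lay down concept-to-concept linkages for a simple
    subject-verb-object sentence, recording for each concept which
    concept preceded it (pre) and which followed it (seq)."""
    words = sentence.upper().split()
    for i, word in enumerate(words):
        pre = words[i - 1] if i > 0 else None
        seq = words[i + 1] if i < len(words) - 1 else None
        memory.setdefault(word, []).append((pre, seq))
    return memory

def regenerate(subject, memory):
    """Follow the seq links forward from a subject concept to
    re-express a previously comprehended idea."""
    chain, word = [subject], subject
    while word in memory and memory[word][-1][1]:
        word = memory[word][-1][1]
        chain.append(word)
    return " ".join(chain)

kb = {}
comprehend("books teach people", kb)
print(regenerate("BOOKS", kb))  # BOOKS TEACH PEOPLE
```

Regeneration simply follows the seq links forward from the subject, which is the sense in which a comprehended idea is "regenerable at some future time."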

A.T. Murray
--
http://opencog.org/wiki/Comparison_of_NL_Comprehension_Systems
