Older blog entries for mentifex (starting at number 104)

Perlmind Programming Journal (PMPJ)
Updating the Ghost Perl AI in conformance with MindForth AI.

Today we return to Perl AI coding after updating the MindForth code in July and August of 2016. In Forth we re-organized the calling of the subordinate mind-modules beneath the MainLoop module so that MainLoop no longer calls the Think module directly but instead calls the FreeWill module first, so that eventually the FreeWill or Volition module will call Emotion, Think and Motorium.
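A minimal Perl sketch of that calling order (subroutine names and bodies here are only illustrative; the actual ghost176.pl internals may differ):

  sub Emotion  { }      # stub
  sub Think    { }      # stub
  sub Motorium { }      # stub

  sub FreeWill {        # the Volition module
      &Emotion;         # eventually calls Emotion...
      &Think;           # ...then Think...
      &Motorium;        # ...then Motorium.
  }

  sub MainLoop {        # one pass of the main cognitive cycle
      &FreeWill;        # call volition first, no longer calling Think directly
  }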

We have discovered, however, that the MindForth code properly handles input that trips a bug in the Perl code, so we must first debug the Perl code. When we enter "you see dogs", MindForth properly answers "I SEE NOTHING", which is the default output for anything involving VisRecog, since we have no robot camera eye attached to the Mind program. The old Perl Mind, however, incorrectly recognizes the input "DOGS" as if it were a form of the #830 verb "DO", so we must correct the Perl code by making it as good as the Forth code. We therefore take the 335,790 bytes of ghost175.pl from 2016-08-07 and rename the file as ghost176.pl for fresh coding.

We start debugging the Perl AudRecog module by inserting a diagnostic message to reveal the "$audpsi" value at the end of AudRecog. We learn that "DOGS" is misrecognized as "DO" when the input length reaches two characters. We know that MindForth does not misrecognize "DOGS", so we must determine where the Perl AudRecog algorithm diverges from the Forth algorithm. We are fortunate to be coding the AI in both Forth and Perl, so that in Perl we may implement what already works in Forth.
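The diagnostic itself can be a single print statement at the end of the AudRecog subroutine, along these lines (the exact wording in ghost176.pl may differ):

  print "AudRecog end: audpsi = $audpsi \n";  # e.g. reveals 830 (DO) during input of DOGS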

In Perl we try commenting out some AudRecog code that checks for a $monopsi. The AI still misrecognizes "DOGS" as the verb "DO". Next we try commenting out some Perl code that declares a $psibase when the incoming word-length is only two. The AI still misrecognizes. Next we try commenting out a declaration of $subpsi. We still get misrecognition. We try commenting out another $psibase. Still misrecognition. We even try commenting out a major $audrec declaration, and we still get misrecognition. When we try commenting out a $prc declaration, AudRecog stops recognizing the verb "SEE". Then from MindForth we bring in a provisional $audrec, but the verb "SEE" is still not recognized.

Although at the MS-DOS CLI prompt we evidently cannot run MindForth and the Perlmind simultaneously, today we learn that we can run MindForth, leave the Win32Forth window open, and then go back to running the Perl AI. Thus we can compare the diagnostic messages in both Forth and Perl so as to debug the Perl AI further. We notice that the Forth AudMem module sends a diagnostic message even for the ASCII 32 blank space after "SEE", which the Perl AI does not do.

Strong AI Theory of Mind Considerations

We may need to add a tru tag to the conceptual flag-panel in the various AI Minds, such as in Forth and in Perl. Only the first word in the thought-engram will need a tru tag. We may want to have the following tags in the panel.

tru psi act hlc pos jux pre iob tkb seq num mfn dba rv
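A hedged sketch in Perl of how such a flag-panel row might be stored at a time-point $t, assuming a conceptual array named @psy and one scalar per tag:

  # Hypothetical flag-panel row; ghost Perl array and field names may differ.
  $psy[$t] = join(',', $tru, $psi, $act, $hlc, $pos, $jux, $pre,
                       $iob, $tkb, $seq, $num, $mfn, $dba, $rv);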

Active code will probably assign a numeric "true" value, so that only the most current thoughts will carry an assumption of truth and believability. Preterite be-verb assertions like "Kilroy is here" should decay down to a low tru value over time, so that they will not be taken at face value by the thinking Mind. On the other hand, non-be-verb knowledge about the ontology of the world will need to be regarded as true.

As the thinking AI associates from thought to thought, sentence-engrams with a low truth-value should not come into play. Various criteria may cause some engrams to go to a mid-range truth-value and other engrams to a minimal truth-value, so that reliable knowledge may come into play.
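One hedged way to implement the decay, assuming the numeric $tru field and @psy array sketched above, would be a periodic sweep that lowers the truth-value of aging engrams (a fuller version would exempt non-be-verb ontological knowledge):

  # Hypothetical decay sweep; not actual ghost176.pl code.
  for my $i (0 .. $t) {
      next unless defined $psy[$i];
      my @panel = split /,/, $psy[$i];
      if ($panel[0] > 0) {              # slot 0 = tru
          $panel[0]--;                  # let the truth-value decay over time
          $psy[$i] = join ',', @panel;
      }
  }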

The tru-tag will permit rather elaborate ideas to emerge back into consciousness with emphasis on special considerations such as the inclusion of a prepositional phrase in the idea, as in a sentence like, "A man with a boat needs money". The 3D AI will therefore need not only modules for thinking with prepositional phrases, but also modules for conjunctions to be used in sentences like, "I know that time is money" or "I think that boats cost money." The routines for comprehension will need to be modernized or adjusted to allow parts of a long input sentence to be comprehended upon selection of a likely subject.

To some extent, we are aiming for a conscious AI Mind that realizes that it lives inside a computer and that it has only limited interaction with the outside world. It may need the ASCII bell-function as a way of deliberately summoning the attention of a human user.
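In Perl the bell is simply the ASCII 7 alarm character, so a one-line, hedged sketch of such a summons would be:

  print "\a";   # ASCII BEL rings the terminal bell to summon the human user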

Sat.27.AUG.2016 -- Creating the MindGrid trough of inhibition

In agi00031.F we are trying to figure out why we have lost the functionality of ending human input with a 13=CR carriage return and still getting a recognition of the final word of the input. We compare the current AudMem code with the agi00026.F version, and there does not seem to be any difference. The problem therefore probably lies in the major revisions made recently to the AudInput module.

From the diagnostic messages that appear when we run agi00031.F, it looks as though the 13=CR carriage return is not getting through from the AudInput module to the AudMem module. When we briefly insert a revealing diagnostic at the start of the agi00026.F AudMem, we see from "g AudMem: pho= 71" and "o AudMem: pho= 79" and "d AudMem: pho= 68" and "AudMem: pho= 13" that the carriage return is indeed getting through there. Therefore in AudInput we need to find a way of sending the final 13=CR into AudMem. Upshot: it turns out that in AudInput we only had to restore the line "pho @ 31 > pho @ 13 = OR IF \ 2016aug27: CR, SPACE or alphabetic letter", which lets 13=CR be one of the conditions for calling the AudMem module.
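The restored Forth line simply lets 13=CR, as well as any character above ASCII 31, satisfy the condition for calling AudMem. A rough Perl analogue of the same test (variable names are only illustrative) would be:

  if ($pho > 31 || $pho == 13) {   # CR, SPACE or alphabetic letter
      &AudMem;                     # pass the phoneme on to auditory memory
  }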

Next in the InStantiate module we need to remove a test that only lets words with a positive "rv" recall-vector get instantiated, because we must set "rv" to zero for personal pronouns being re-interpreted as "you" or "I" during communication with a human user. Apparently the Perlmind just ignores the engrams with a zero "rv" and finds the correct forms with a search based on parameters.
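The change amounts to commenting out the guard on the recall-vector so that zero-rv words are still stored; a hedged sketch, with a hypothetical helper standing in for the actual storage code:

  # Sketch of the change inside sub InStantiate (not verbatim ghost176.pl code):
  # if ($rv > 0) {                # old test: instantiate only positive-rv words
        &store_engram($psi, $rv); # hypothetical helper; now runs even when $rv == 0
  # }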

Now we would like to see how close we are to fulfilling all the conditions for a proper "trough" of inhibition in the AI MindGrid. When we run the ghost175.pl Perl AI and we enter "You know God," we see negative activations in the present-most trough for both the input and the concepts of "I HELP KIDS" as the output. In the Forth AGI, we wonder why we do not see any negative activations in the present-most trough. Oh, we were not yet bothering to store the "act" activation-level in the Forth InStantiate module. We insert the missing code, and we begin to see the trough of inhibition in both the recent-most input and the present-most output.

Visualizing the MindGrid as Theater of Neuronal Activations

Recently we have developed the ability to visualize the MindGrid as a theater of neuronal activations. At the most recent, advancing front of the MindGrid, we see an inhibited trough of negative activations. We see an input sentence from a human user activating concept-fibers stretching back to the earliest edge of the MindGrid. We see an old idea becoming fresh output and then being inhibited into negative activation at its origin. We see outputs of the AGI passing through ReEntry() to re-enter the Mind as inhibited engrams while re-activating old engrams. We see the front-most trough of inhibition preventing the most recent ideas from preoccupying and monopolizing the artificial consciousness.

In ghost174.pl, we have now commented out some code in the InStantiate() mind-module that was letting only nouns or pronouns of human input be re-activated along the length of the MindGrid. The plan now is to let all parts of an incoming sentence re-activate the engrams of their component concepts.

Now, how do we make sure that the front-most engrams of the sentence of human input will be inhibited with negative activation in the trough of recent mental activity on the MindGrid? It appears that InStantiate() makes a sweep of old engrams to set a positive activation, and then at the $tult penultimate-time it sets an activation for the current, front-most input. In order to keep a trough of recent inhibition, let us try setting a negative activation at the $tult time-point.
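A hedged sketch of that change, reusing the $tult penultimate-time variable and the flag-panel layout sketched earlier (slot 2 holds the act value):

  # Sketch only: inhibit the front-most engram of the current input.
  my @panel = split /,/, $psy[$tult];   # fetch the flag-panel row at $tult
  $panel[2] = -46;                      # act slot: negative activation forms the trough
  $psy[$tult] = join ',', @panel;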

After input of "I see kids" and a response by the AI of "KIDS MAKE ROBOTS", in minddata.txt we see the sweep of positive activation of old engrams.

At t=477, "YOU" has an activation of 30.

At t=518, "YOU" has an activation of 30.

At t=317, 820=SEE has an activation of 30.

At t=575, 528=KIDS has an activation of 62, apparently because there was also a re-entry of "KIDS".

As a result of the $tult trough-inhibition:

At t=2426, 707=YOU has a negative -46 activation.

At t=2430, 820=SEE has a negative -46 activation.

At t=2435, 528=KIDS has a negative -14 activation, apparently because the AI response of "KIDS MAKE ROBOTS" made a backwards sweep that imposed a positive 32 points of activation upon the pre-existing negative -46 points, resulting in -46 + 32 = -14 points of activation -- still part of the negative trough.

Now the AGI is making its series of innate self-referential statements ("I AM A PERSON"; "I AM A ROBOT"; "I AM ANDRU"; "I HELP KIDS"), but why is it not using SpreadAct() to jump from the reentrant concept of "KIDS" to the innate idea of "KIDS MAKE ROBOTS"? Let us see if SpreadAct() is being called, and from where. We do not see SpreadAct() being called in the diagnostic messages on-screen while we run the AGI. Let us check the Perlmind source code. We see that the OldConcept() module has been calling SpreadAct() for recognized nouns since ghost162.pl, but now we delete that snippet of code because we see in our MindGrid theater that we do not want OldConcept() to make any calls to SpreadAct(). The AGI still runs.

We see that SpreadAct() is potentially being called from the ReEntry() mind-module, but the trigger is not working properly, so we change the trigger. Then we get SpreadAct() re-activating nouns, and we begin to see a periodic association from the innate self-referential statements to "KIDS MAKE ROBOTS" and from there to "ROBOTS NEED ME". Apparently the inhibitions have to be cancelled out before the old memories can re-surface in the internal chains of thought of the AGI.
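A hedged sketch of the kind of trigger we end up with in ReEntry(), assuming a $pos part-of-speech variable in which nouns are marked (the numeric codes here are only illustrative):

  # Sketch only: in sub ReEntry, let re-entrant nouns spread activation.
  if ($pos == 5) {        # assuming 5 marks a noun in the part-of-speech scheme
      $actpsi = $psi;     # hypothetical: concept whose activation will spread
      &SpreadAct;         # e.g. reentrant KIDS re-activates "KIDS MAKE ROBOTS"
  }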

It was fun but nevertheless sincere to post "AI Has Been Solved" on April Fool's Day ten years ago. Mentifex Strong AI always was and always will be an extremely serious AI Lab Project, as described in December of 1998 by the Association for Computing Machinery. Mentifex AI is so serious that it has meanwhile been ported into Russian and into German. The resulting Amazon Kindle e-book, Artificial Intelligence in German, has been reviewed with the highest possible five-star rating. Another e-book, InFerence, describes how the Mentifex AI Minds can think by automated reasoning with logical inference. The MindForth AI program has been cited as prior art in a Google patent. Now, finally, at http://ai.neocities.org/AiSteps.html a third-generation (3G) Mentifex AI Mind is being created in Perl, and Netizens from all over the world are looking into the use of Unicode and Perl to create artificial intelligence in any programming language and in any natural human language. Ladies and gentlemen, start your AI engines.


Artificial Intelligence in German (Amazon Kindle e-book)

If your humanoid robot needs an AI Mind to think in English or German, a new Amazon Kindle e-book goes into great detail about robotic thought processes.



This e-book in English about AI in German (and English and Russian) contains the entire AI source code in Forth, which means that most of the editorial portion of the e-book (18 of 20 chapters) can be read without charge in the free preview.



InFerence
for Robot Artificial Intelligence (Mind-Module)

is now an Amazon Kindle e-book with a "Click to LOOK INSIDE!" free preview, so that programmers and AI enthusiasts who may not have a credit card can get the gist of the information from the product description and the first few chapters of the preview. InFerence is available across the World Wide Web in Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Spain, the United Kingdom and the USA. So far the robot AI e-book has been reviewed with four stars out of five. The robot AI software is free to download in English, German and Russian.



64-bit Supercomputer Forth Chips for Strong AI

Imagine a four-core, 64-bit Forth AI CPU designed to run a not-quite-maspar but still somewhat parallel artificial intelligence in English (http://www.scn.org/~mentifex/mindforth.txt) or in German (http://www.scn.org/~mentifex/DeKi.txt).

Such a specialized, Strong AI Forth CPU could devote one core to visual processing and memory; a second core to auditory input and memory; a third core to robotic motor memory and output; and a fourth core to automated reasoning with InFerence (http://code.google.com/p/mindforth/wiki/InFerence) in English, German or Russian.

The 64-bit Forth CPU could be architecturally simple by dint of leaving out all the customary circuitry used for floating-point arithmetic, and Forth would serve as its own AI operating system.

JavaScript Artificial Intelligence Programming Journal

Wed.3.APR.2013 -- "nounlock" May Not Need Parameters

In the English JSAI (JavaScript artificial intelligence), the "nounlock" variable holds onto the time-point of the direct object or predicate nominative for a specific verb. Since the auditory engram being fetched is already in the proper case, there may not be any need to specify any parameters during the search.

Fri.5.APR.2013 -- Orchestrating Flags in NounPhrase

As we run the English JSAI at length without human input and with the inclusion of diagnostic "alert" messages, we discover that the JSAI is sending a positive "dirobj" flag into NounPhrase without checking first for a positive "predflag".

Sat.6.APR.2013 -- Abandoning Obsolete Number Code

Yesterday we commented out NounPhrase code which was supposed to "make sure of agreement; 18may2011" but which was doing more harm than good. The code was causing the AI to send the wrong form of the self-concept "701=I" into the SpeechAct module. Now we can comment out our diagnostic "alert" messages and see if the free AI source code is stable enough for an upload to the Web. Yes, it is.

German Artificial Intelligence Programming Journal

Thu.14.MAR.2013 -- Seeking Confirmation of Inference

In the German Wotan artificial intelligence with machine reasoning by inference, the AskUser module converts an otherwise silent inference into a yes-or-no question seeking confirmation of the inference with a yes-answer or refutation of the inference with a no-answer. Prior to confirmation or refutation, the conceptual engrams of the question are a mere proposition for consideration by the human user. When the user enters the answer, the KbRetro module must either establish associative tags from subject to verb to direct object in the case of a yes-answer, or disrupt the same tags with the insertion of the negational concept "NICHT", German for the English "NOT".

Fri.15.MAR.2013 -- Setting Parameters Properly

Although the AskUser module is asking the proper question, "HAT EVA EIN KIND" in German for "Does Eva have a child?", the concepts of the question are not being stored properly in the Psi conceptual array.

Sat.16.MAR.2013 -- Machine Learning by Inference

Now we have coordinated the operation of InFerence, AskUser and KbRetro. When we input, "eva ist eine frau" for "Eva is a woman," the German AI makes a silent inference that Eva may perhaps have a child. AskUser outputs the question, "HAT EVA EIN KIND" for "Does Eva have a child?" When we answer "nein" in German for English "no", the KbRetro module adjusts the knowledge base (KB) retroactively by negating the verb "HAT" and the German AI says, "EVA HAT NICHT EIN KIND", or "Eva does not have a child" in English.
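The retroactive adjustment can be pictured with a short piece of Perl-style pseudocode (the actual Wotan code is in Forth, and these helper names are hypothetical):

  if ($answer eq 'nein') {     # the user refutes the inference
      # insert the negational concept NICHT next to the verb HAT and
      # disrupt the subject-verb-object tags, so that the knowledge base
      # thereafter yields "EVA HAT NICHT EIN KIND".
      &negate_inference();
  } elsif ($answer eq 'ja') {  # the user confirms the inference
      &confirm_inference();    # establish tags from subject to verb to object
  }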
