mentifex is currently certified at Master level.

Name: AT Murray
Member since: 2004-03-14 16:21:14
Last Login: 2016-08-27 15:21:20


Homepage: http://ai.neocities.org/SOTA.html

Notes:

With a list of projects in Artificial General Intelligence (AGI) and news on the AGI state of the art (SOTA) at http://ai.neocities.org/SOTA.html, Mentifex advises AGI thought-leaders and AGI project heads to pick challenging mind-modules from prior-art AGI and to assign the further development of each specific AGI mind-module to the most suitable AGI team-member.


Recent blog entries by mentifex

Sat.27.AUG.2016 -- Creating the MindGrid trough of inhibition

In agi00031.F we are trying to figure out why we have lost the ability to end human input with a 13=CR carriage return and still get recognition of the final word of the input. We compare the current AudMem code with the agi00026.F version, and there does not seem to be any difference. Therefore the problem most likely lies in the major revisions made recently to the AudInput module.

From the diagnostic messages that appear when we run agi00031.F, it looks as though the 13=CR carriage return is not getting through from the AudInput module to the AudMem module. When we briefly insert a revealing diagnostic at the start of the agi00026.F AudMem module, we see from "g AudMem: pho= 71" and "o AudMem: pho= 79" and "d AudMem: pho= 68" and "AudMem: pho= 13" that the carriage return is indeed getting through. Therefore in AudInput we need to find a way of sending the final 13=CR into AudMem. Upshot: it turns out that in AudInput we only had to restore "pho @ 31 > pho @ 13 = OR IF \ 2016aug27: CR, SPACE or alphabetic letter" as the line of code that lets 13=CR be one of the conditions for calling the AudMem module.
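In Perl terms (the Perlmind counterpart of the Forth fix), the restored gate amounts to something like the following minimal sketch, where the AudMem() stub and the input loop are hypothetical stand-ins for the real modules:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Stub standing in for the real AudMem module.
    sub AudMem { my ($pho) = @_; print "AudMem: pho= $pho\n" }

    # Feed in "GOD" followed by a carriage return and a line feed.
    for my $pho (71, 79, 68, 13, 10) {    # G, O, D, CR, LF
        if ( $pho > 31 or $pho == 13 ) {  # CR, SPACE or alphabetic letter
            AudMem($pho);                 # 13=CR now reaches AudMem; 10=LF does not
        }
    }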

Next in the InStantiate module we need to remove a test that only lets words with a positive "rv" recall-vector get instantiated, because we must set "rv" to zero for personal pronouns being re-interpreted as "you" or "I" during communication with a human user. Apparently the Perlmind just ignores the engrams with a zero "rv" and finds the correct forms with a search based on parameters.
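A minimal sketch of that idea, with a hypothetical engram layout (the real Perlmind keeps its engrams in its own flag-panel arrays):

    use strict;
    use warnings;

    # A zero recall-vector "rv" marks a pronoun re-interpreted during input;
    # searches based on parameters simply skip over it.
    my @engrams = (
        { word => 'YOU', rv => 0,   person => 2 },  # re-interpreted pronoun, rv zeroed
        { word => 'YOU', rv => 417, person => 2 },  # older engram with a real recall-vector
    );

    # Find a usable form by parameters, ignoring zero-rv engrams.
    my ($hit) = grep { $_->{person} == 2 and $_->{rv} > 0 } @engrams;
    print "recall $hit->{word} via rv=$hit->{rv}\n";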

Now we would like to see how close we are to fulfilling all the conditions for a proper "trough" of inhibition in the AI MindGrid. When we run the ghost175.pl Perl AI and we enter "You know God," we see negative activations in the present-most trough for both the input and the concepts of "I HELP KIDS" as the output. In the Forth AGI, we wonder why we do not see any negative activations in the present-most trough. Oh, we were not yet bothering to store the "act" activation-level in the Forth InStantiate module. We insert the missing code, and we begin to see the trough of inhibition in both the recent-most input and the present-most output.
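The missing step corresponds roughly to this sketch, in which the storage hash and the sub name are hypothetical stand-ins for the Forth data structures:

    use strict;
    use warnings;

    my %grid;   # hypothetical stand-in for the MindGrid engram storage

    # InStantiate() must record the activation level along with the concept.
    sub instantiate {
        my ( $t, $concept, $act ) = @_;
        $grid{$t} = { concept => $concept, act => $act };
    }

    instantiate( 2426, 707, -46 );   # 707=YOU inhibited in the fresh trough
    instantiate( 2430, 820, -46 );   # 820=SEE likewise

    printf "t=%d concept=%d act=%d\n", $_, $grid{$_}{concept}, $grid{$_}{act}
        for sort { $a <=> $b } keys %grid;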

Visualizing the MindGrid as Theater of Neuronal Activations

Recently we have developed the ability to visualize the MindGrid as a Theater of Neuronal Activations. At the most recent, advancing front of the MindGrid, we see an inhibited trough of negative activations. We see an input sentence from a human user activating concept-fibers stretching back to the earliest edge of the MindGrid. We see an old idea becoming fresh output and then being inhibited into negative activation at its origin. We see outputs of the AGI passing through ReEntry() to re-enter the Mind as inhibited engrams while re-activating old engrams. We see the front-most trough of inhibition preventing the most recent ideas from preoccupying and monopolizing the artificial consciousness.

In ghost174.pl, we have now commented out some code in the InStantiate() mind-module that was letting only the nouns or pronouns of human input be re-activated along the length of the MindGrid. The plan now is to let all parts of an incoming sentence re-activate the engrams of its component concepts.
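In sketch form (with hypothetical part-of-speech codes and sub name), the change is just the removal of a filter:

    use strict;
    use warnings;

    sub re_activate { print "re-activating concept $_[0]\n" }   # stub

    # Hypothetical input sentence: [concept number, part-of-speech code].
    my @input = ( [ 707, 7 ], [ 820, 8 ], [ 528, 5 ] );   # YOU(pron) SEE(verb) KIDS(noun)

    for my $word (@input) {
        my ( $concept, $pos ) = @$word;
        # if ( $pos == 5 or $pos == 7 ) {   # old filter: nouns and pronouns only
        re_activate($concept);              # now every part of speech re-activates
        # }
    }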

Now, how do we make sure that the front-most engrams of the sentence of human input will be inhibited with negative activation in the trough of recent mental activity on the MindGrid? It appears that InStantiate() makes a sweep of old engrams to set a positive activation, and then at the $tult penultimate-time it sets an activation for the current, front-most input. In order to keep a trough of recent inhibition, let us try setting a negative activation at the $tult time-point.
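As a sketch of the intended behavior (the sub name and loop are hypothetical; the +30 and -46 figures come from the run described below):

    use strict;
    use warnings;

    my %act;    # time-point => activation level (hypothetical layout)
    sub set_act { my ( $t, $level ) = @_; $act{$t} = $level }

    my @old_engram_times = ( 477, 518 );   # earlier engrams of 707=YOU
    my $tult = 2426;                       # penultimate time-point of the fresh input

    set_act( $_, 30 ) for @old_engram_times;   # sweep: positive activation on old engrams
    set_act( $tult, -46 );                     # trough: negative activation up front

    printf "t=%d act=%d\n", $_, $act{$_} for sort { $a <=> $b } keys %act;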

After input of "I see kids" and a response by the AI of "KIDS MAKE ROBOTS", in minddata.txt we see the sweep of positive activation of old engrams:

  • At t=477, 707=YOU has an activation of 30.
  • At t=518, 707=YOU has an activation of 30.
  • At t=317, 820=SEE has an activation of 30.
  • At t=575, 528=KIDS has an activation of 62, apparently because there was also a re-entry of "KIDS".

As a result of the $tult trough-inhibition:

  • At t=2426, 707=YOU has a negative activation of -46.
  • At t=2430, 820=SEE has a negative activation of -46.
  • At t=2435, 528=KIDS has a negative activation of -14, apparently because the AI response of "KIDS MAKE ROBOTS" made a backwards sweep to impose 32 points of positive activation upon the pre-existing -46 points, resulting in -46+32 = -14 -- still part of the negative trough.
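The arithmetic of that backward sweep is simple enough to sketch:

    use strict;
    use warnings;

    my $act = -46;   # 528=KIDS sitting inhibited in the trough at t=2435
    $act   += 32;    # backward sweep from "KIDS MAKE ROBOTS" adds 32 points
    print "net activation: $act\n";   # -14: boosted, but still in the negative trough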

Now the AGI is making its series of innate self-referential statements ("I AM A PERSON"; "I AM A ROBOT"; "I AM ANDRU"; "I HELP KIDS"), but why is it not using SpreadAct() to jump from the reentrant concept of "KIDS" to the innate idea of "KIDS MAKE ROBOTS"? Let us see if SpreadAct() is being called, and from where. We do not see SpreadAct() being called in the diagnostic messages on-screen while we run the AGI. Let us check the Perlmind source code. We see that the OldConcept() module has been calling SpreadAct() for recognized nouns since ghost162.pl, but now we delete that snippet of code, because we see in our MindGrid theater that we do not want OldConcept() to make any calls to SpreadAct(). The AGI still runs.

We see that SpreadAct() is potentially being called from the ReEntry() mind-module, but the trigger is not working properly, so we change the trigger. Then we get SpreadAct() re-activating nouns, and we begin to see a periodic association from the innate self-referential statements to "KIDS MAKE ROBOTS" and from there to "ROBOTS NEED ME". Apparently the inhibitions have to be cancelled out before the old memories can re-surface in the internal chains of thought of the AGI.
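As a toy sketch of that chaining (the %ideas map and the recursion are illustrative only, not the Perlmind's actual SpreadAct() mechanism):

    use strict;
    use warnings;

    # Each known concept points at the innate ideas that contain it.
    my %ideas = (
        KIDS   => ['KIDS MAKE ROBOTS'],
        ROBOTS => ['ROBOTS NEED ME'],
    );

    sub spread_act {
        my ( $concept, $seen ) = @_;
        $seen //= {};
        return if $seen->{$concept}++;   # each concept fires only once
        for my $idea ( @{ $ideas{$concept} || [] } ) {
            print "SpreadAct: $concept -> \"$idea\"\n";
            spread_act( $_, $seen ) for split ' ', $idea;   # each word may seed the next hop
        }
    }

    spread_act('KIDS');   # KIDS -> "KIDS MAKE ROBOTS" -> ROBOTS -> "ROBOTS NEED ME"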

It was fun but nevertheless sincere to post "AI Has Been Solved" on April Fool's Day ten years ago. Mentifex Strong AI always was and always will be an extremely serious AI Lab Project, as described in December of 1998 by the Association for Computing Machinery. Mentifex AI is so serious that it has meanwhile been ported into Russian and into German. The resulting Amazon Kindle e-book, Artificial Intelligence in German, has been reviewed with the highest possible five-star rating. Another e-book, InFerence, describes how the Mentifex AI Minds can think by automated reasoning with logical inference. The MindForth AI prior-art program has been cited in a Google patent. Now, finally, at http://ai.neocities.org/AiSteps.html a third-generation (3G) Mentifex AI Mind is being created in Perl, and Netizens from all over the world are looking into the use of Unicode and Perl to create artificial intelligence in any programming language and in any natural human language. Ladies and gentlemen, start your AI engines.


Artificial Intelligence in German (Amazon Kindle e-book)

If your humanoid robot needs an AI Mind to think in English or German, a new Amazon Kindle e-book goes into great detail about robotic thought processes.



This e-book in English about AI in German (and English and Russian) contains the entire AI source code in Forth, which means that most of the editorial portion of the e-book (18 of 20 chapters) is readable without charge in the free preview.



InFerence for Robot Artificial Intelligence (Mind-Module)

InFerence is now an Amazon Kindle e-book with a "Click to LOOK INSIDE!" free preview, so that programmers and AI enthusiasts who may not have a credit card can get the gist of the information free of charge from the product description and the first few chapters of the preview. InFerence is available across the World Wide Web in Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Spain, the United Kingdom and the USA. So far the robot AI e-book has been reviewed with four stars out of five. The robot AI software is free to download in English, German and Russian.




mentifex certified others as follows:

  • mentifex certified Akira as Apprentice
  • mentifex certified berend as Apprentice
  • mentifex certified scrottie as Apprentice
  • mentifex certified haruspex as Apprentice
  • mentifex certified badvogato as Master
  • mentifex certified sye as Journeyer
  • mentifex certified async as Apprentice
  • mentifex certified bratsche as Master
  • mentifex certified wspace as Journeyer
  • mentifex certified mirwin as Master
  • mentifex certified salmoni as Master
  • mentifex certified garym as Master
  • mentifex certified proclus as Master
  • mentifex certified bi as Journeyer
  • mentifex certified bkode as Master
  • mentifex certified lispmeister as Master
  • mentifex certified schugo as Journeyer
  • mentifex certified timbl as Master
  • mentifex certified lkcl as Master
  • mentifex certified chalst as Master
  • mentifex certified chromatic as Master

Others have certified mentifex as follows:

  • wspace certified mentifex as Journeyer
  • mirwin certified mentifex as Master
  • housel certified mentifex as Apprentice
  • garym certified mentifex as Master
  • dlc certified mentifex as Journeyer
  • badvogato certified mentifex as Journeyer
  • grant certified mentifex as Apprentice
  • schugo certified mentifex as Journeyer
  • lkcl certified mentifex as Master
  • etbe certified mentifex as Apprentice
  • chalst certified mentifex as Apprentice

