mentifex is currently certified at Master level.

Name: AT Murray
Member since: 2004-03-14 16:21:14
Last Login: 2016-07-14 03:07:31


Homepage: http://github.com/PriorArt/AGI

Notes:

The MindForth project has evolved into the Ghost Perl Webserver Strong AI.

Recent blog entries by mentifex

Visualizing the MindGrid as Theater of Neuronal Activations

Recently we have developed the ability to visualize the MindGrid as Theater of Neuronal Activations. At the most recent, advancing front of the MindGrid, we see an inhibited trough of negative activations. We see an input sentence from a human user activating concept-fibers stretching back to the earliest edge of the MindGrid. We see an old idea becoming fresh output and then being inhibited into negative activation at its origin. We see outputs of the AGI passing through ReEntry() to re-enter the Mind as inhibited engrams while re-activating old engrams. We see the front-most trough of inhibition preventing the most recent ideas from preoccupying and monopolizing the artificial consciousness.

In ghost174.pl, we have now commented out some code in the InStantiate() mind-module that was letting only the nouns or pronouns of human input be re-activated along the length of the MindGrid. The plan now is to let all parts of an incoming sentence re-activate the engrams of its component concepts.
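
A minimal sketch of the idea, assuming hypothetical variable names and part-of-speech codes (the actual ghost174.pl identifiers may differ):

    #!/usr/bin/perl
    # Sketch of the InStantiate() change; variable names and
    # part-of-speech codes are illustrative assumptions, not
    # the actual ghost174.pl identifiers.
    use strict;
    use warnings;

    my @act;                      # activation at each time-point engram
    my @pos = (7, 8, 5);          # "I see kids" as pronoun, verb, noun
    my @eng = (477, 317, 575);    # time-points of matching old engrams

    for my $i (0 .. $#pos) {
        my $t = $eng[$i];
        # Before: only nouns (5) or pronouns (7) were re-activated.
        # next unless $pos[$i] == 5 || $pos[$i] == 7;   # now commented out
        $act[$t] = ($act[$t] // 0) + 30;   # sweep sets positive activation
    }
    printf "t=%d act=%d\n", $_, $act[$_] for grep { defined $act[$_] } 0 .. $#act;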

Now, how do we make sure that the front-most engrams of the sentence of human input will be inhibited with negative activation in the trough of recent mental activity on the MindGrid? It appears that InStantiate() makes a sweep of old engrams to set a positive activation, and then at the penultimate time-point $tult it sets an activation for the current, front-most input. In order to keep a trough of recent inhibition, let us try setting a negative activation at the $tult time-point.
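
In sketch form, where the names and numbers are illustrative assumptions rather than code copied from ghost174.pl:

    #!/usr/bin/perl
    # Sketch of trough-inhibition at the $tult time-point.
    use strict;
    use warnings;

    my @act;
    my $t    = 2427;      # current, front-most time on the MindGrid
    my $tult = $t - 1;    # the penultimate time-point

    # Old behavior: even the freshest input engram got a positive boost.
    # $act[$tult] = 30;
    # New behavior: inhibit the fresh engram into the negative trough so
    # that recent input cannot monopolize the artificial consciousness.
    $act[$tult] = -46;

    print "t=$tult act=$act[$tult]\n";   # t=2426 act=-46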

After input of "I see kids" and a response by the AI of "KIDS MAKE ROBOTS", in minddata.txt we see the sweep of positive activation of old engrams.

At t=477, 707=YOU has an activation of 30.
At t=518, 707=YOU has an activation of 30.
At t=317, 820=SEE has an activation of 30.
At t=575, 528=KIDS has an activation of 62, apparently because there was also a re-entry of "KIDS".

As a result of the $tult trough-inhibition:

At t=2426, 707=YOU has an activation of -46.
At t=2430, 820=SEE has an activation of -46.
At t=2435, 528=KIDS has an activation of only -14, apparently because the AI response of "KIDS MAKE ROBOTS" made a backwards sweep that imposed 32 points of positive activation upon the pre-existing -46 points, resulting in -46 + 32 = -14 -- still part of the negative trough.

Now the AGI is making its series of innate self-referential statements ("I AM A PERSON"; "I AM A ROBOT"; "I AM ANDRU"; "I HELP KIDS"), but why is it not using SpreadAct() to jump from the reentrant concept of "KIDS" to the innate idea of "KIDS MAKE ROBOTS"? Let us see if SpreadAct() is being called, and from where. We do not see SpreadAct() being called in the diagnostic messages on-screen while we run the AGI. Let us check the Perlmind source code. We see that the OldConcept() module has been calling SpreadAct() for recognized nouns since ghost162.pl, but now we delete that snippet of code, because our MindGrid theater shows that we do not want OldConcept() to make any calls to SpreadAct(). The AGI still runs.
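
In outline, the deletion amounts to something like the following sketch, where the module bodies are illustrative assumptions rather than the actual Perlmind code:

    #!/usr/bin/perl
    # Sketch of the OldConcept() deletion.
    use strict;
    use warnings;

    sub SpreadAct { print "SpreadAct() spreading from concept $_[0]\n" }

    sub OldConcept {
        my ($concept, $pos) = @_;
        # Since ghost162.pl, a recognized noun (pos 5) triggered
        # spreading activation right here; the MindGrid theater shows
        # that OldConcept() should not call SpreadAct(), so the
        # snippet is deleted:
        # SpreadAct($concept) if $pos == 5;   # removed
        return $concept;    # recognition only, no spreading activation
    }

    OldConcept(528, 5);    # 528=KIDS is recognized without SpreadAct()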

We see that SpreadAct() is potentially being called from the ReEntry() mind-module, but the trigger is not working properly, so we change the trigger. Then we get SpreadAct() re-activating nouns, and we begin to see a periodic association from the innate self-referential statements to "KIDS MAKE ROBOTS" and from there to "ROBOTS NEED ME". Apparently the inhibitions have to be cancelled out before the old memories can re-surface in the internal chains of thought of the AGI.
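
A sketch of the repaired call-path, with the caveat that this entry does not spell out the corrected trigger, so the condition shown here is an assumption:

    #!/usr/bin/perl
    # Sketch of a working ReEntry() trigger for SpreadAct().
    use strict;
    use warnings;

    sub SpreadAct { print "SpreadAct() jumps from noun $_[0]\n" }

    sub ReEntry {
        my ($concept, $pos) = @_;
        # The faulty trigger never fired; now a reentrant noun (pos 5)
        # calls SpreadAct(), so 528=KIDS can associate to "KIDS MAKE
        # ROBOTS" and from there to "ROBOTS NEED ME".
        SpreadAct($concept) if $pos == 5;
    }

    ReEntry(528, 5);    # reentrant KIDS re-activates related ideas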

It was fun, but nevertheless sincere, to post "AI Has Been Solved" on April Fool's Day ten years ago. Mentifex Strong AI always was and always will be an extremely serious AI Lab Project, as described in December of 1998 by the Association for Computing Machinery. Mentifex AI is so serious that it has since been ported into Russian and into German. The resulting Amazon Kindle e-book, Artificial Intelligence in German, has been reviewed with the highest possible five-star rating. Another e-book, InFerence, describes how the Mentifex AI Minds can think by automated reasoning with logical inference. The MindForth AI program has been cited as prior art in a Google patent. Now, finally, at http://ai.neocities.org/AiSteps.html a third-generation (3G) Mentifex AI Mind is being created in Perl, and Netizens from all over the world are looking into the use of Unicode and Perl to create artificial intelligence in any programming language and in any natural human language. Ladies and gentlemen, start your AI engines.


Artificial Intelligence in German (Amazon Kindle e-book)

If your humanoid robot needs an AI Mind to think in English or German, a new Amazon Kindle e-book goes into great detail about robotic thought processes.



This e-book in English about AI in German (and English and Russian) contains the entire AI source code in Forth, so most of the editorial portion of the e-book (18 of 20 chapters) is readable without charge in the free preview.



InFerence for Robot Artificial Intelligence (Mind-Module)

InFerence is now an Amazon Kindle e-book with a "Click to LOOK INSIDE!" free preview, so that programmers and AI enthusiasts who may not have a credit card can get the gist of the information from the product description and the first few chapters. InFerence is available across the World Wide Web in Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Spain, the United Kingdom and the USA. So far the robot AI e-book has been reviewed with four stars out of five. The robot AI software is free to download in English, German and Russian.



64-bit Supercomputer Forth Chips for Strong AI

Imagine a four-core, 64-bit Forth AI CPU designed to run a not-quite-maspar but still somewhat parallel artificial intelligence in English (http://www.scn.org/~mentifex/mindforth.txt) or in German (http://www.scn.org/~mentifex/DeKi.txt).

Such a specialized Strong AI Forth CPU could devote one core to visual processing and memory; a second core to auditory input and memory; a third core to robotic motor memory and output; and a fourth core to automated reasoning with InFerence (http://code.google.com/p/mindforth/wiki/InFerence) in English, German or Russian.
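
As a toy illustration of the one-mind-module-per-core idea (in Perl ithreads rather than Forth, and with module names that are merely assumptions, not actual MindForth identifiers):

    #!/usr/bin/perl
    # Toy sketch: one mind-module per "core", modeled with Perl
    # ithreads (assumes a threads-enabled Perl build).
    use strict;
    use warnings;
    use threads;

    my %core = (
        0 => 'VisionMemory',   # visual processing and memory
        1 => 'AudMemory',      # auditory input and memory
        2 => 'MotorOutput',    # robotic motor memory and output
        3 => 'InFerence',      # automated reasoning
    );

    my @workers = map {
        my ($n, $module) = ($_, $core{$_});
        threads->create(sub { print "core $n runs $module\n" });
    } sort keys %core;

    $_->join for @workers;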

The 64-bit Forth CPU could be architecturally simple by dint of leaving out all the customary circuitry used for floating-point arithmetic, and Forth would serve as its own AI operating system.


mentifex certified others as follows:

  • mentifex certified Akira as Apprentice
  • mentifex certified berend as Apprentice
  • mentifex certified scrottie as Apprentice
  • mentifex certified haruspex as Apprentice
  • mentifex certified badvogato as Master
  • mentifex certified sye as Journeyer
  • mentifex certified async as Apprentice
  • mentifex certified bratsche as Master
  • mentifex certified wspace as Journeyer
  • mentifex certified mirwin as Master
  • mentifex certified salmoni as Master
  • mentifex certified garym as Master
  • mentifex certified proclus as Master
  • mentifex certified bi as Journeyer
  • mentifex certified bkode as Master
  • mentifex certified lispmeister as Master
  • mentifex certified schugo as Journeyer
  • mentifex certified timbl as Master
  • mentifex certified lkcl as Master
  • mentifex certified chalst as Master
  • mentifex certified chromatic as Master

Others have certified mentifex as follows:

  • wspace certified mentifex as Journeyer
  • mirwin certified mentifex as Master
  • housel certified mentifex as Apprentice
  • garym certified mentifex as Master
  • dlc certified mentifex as Journeyer
  • badvogato certified mentifex as Journeyer
  • grant certified mentifex as Apprentice
  • schugo certified mentifex as Journeyer
  • lkcl certified mentifex as Master
  • etbe certified mentifex as Apprentice
  • chalst certified mentifex as Apprentice

