Wikipedia-based Open-Source Artificial Intelligence

Posted 11 Sep 2007 at 09:09 UTC (updated 21 Sep 2007 at 03:14 UTC) by mentifex

Abstract: Wikipedia has grown so large that it may serve first as the referential background for open-source artificial intelligence (AI) and then as food for thought when the emerging AI Minds try to know and understand the world around them.

One problem in conducting an open-source AI project is bringing new cadres of AI developers up to speed with the knowledge and expertise necessary to work on AI.

AI is so difficult that many Netizens consider it a Grand Challenge to humanity. The concepts and terminology involved are so abstruse that even AI experts have difficulty in comprehending and evaluating the properties and merits of any AI project that has evolved beyond the borders of traditional "Good Old-Fashioned Artificial Intelligence" (GOFAI). Even AI grad students have difficulty in understanding the multidisciplinary concepts involved in AI, and non-English-speakers have difficulty in penetrating the language barrier of highly technical AI documentation papers. Depending on the skills of the newcomer to an AI project, the barriers to entry are seemingly elastic -- diminishing if the arriviste is an expert, or expanding if the new recruit is a rank amateur with or without English-language skills. If only there were a massive reservoir of knowledge in all possible fields pertinent to AI, a kind of gigantic encyclopedia constantly expanding to assimilate the burgeoning frontiers of knowledge in AI and in all other fields even remotely connected to AI.

Well, Wikipedia is such a made-to-order encyclopedia -- free for the websurfing, and constantly expanding even as the realm of human knowledge expands. The only question is, how does one piggyback an open-source AI project on top of Wikipedia?

For a modular AI project, as opposed to a monolithic AI project, the answer is to document each AI mind-module with an apparatus of highly germane links into Wikipedia. For each module, such as the speech output module, there will be links to the various Wikipedia articles on speech.

AI experts evaluating a particular project may encounter such a link-cluster and bypass it after cursory inspection, but AI newcomers may visit Wikipedia to study up on the background knowledge pertinent to any given AI mind-module. Thus the symbiosis between open-source AI and Wikipedia is many things to many people -- unnecessary fluff to an AI expert; useful references for a midway-experienced AI programmer; or a complete course in AI grounding for the new recruit to open-source AI.

All AI textbooks heretofore published are obsolete.

AI textbooks published on dead-tree paper can not possibly keep up with Wikipedia as the evolving, mutating, metastasizing textbook of not only AI but of any other discipline whose adherents have the skill and motivation to index from the chosen field into Wikipedia. Each Wikipedia article is like a living organism, sometimes splitting in two, and sometimes spawning child articles with a life of their own. Once an AI project guru maps the existing set of AI-related Wikipedia articles onto the existing webpages of AI project documentation, a kind of symbiosis ensues where the state of the art in AI must keep in touch with the state of the Wikipedia knowledge base.

Recruiting for open-source AI among the genius Wikipedia editors.

Let's face it. Geniuses and the elite of computer experts are editing Wikipedia. Once they stumble upon Wikipedia, they devote thousands of hours to expanding the Wikipedia knowledge base. They train themselves to work together and hammer out a consensus agreement on any topic.
Now consider what will happen (not immediately, but over time) if many people who previously never heard of Wikipedia discover the free online encyclopedia only as a result of getting deeply immersed in an open-source AI project. Many of the generally genius-level AI devotees will instantly be drawn to the magic of Wikipedia and will become editors of Wikipedia. Then a two-way flow of information will occur. AI devotees will learn their AI background information from Wikipedia, and Wikipedia editors will seize the obvious tide in the affairs of men by choosing to become open-source AI developers.

AI Minds born of Wikipedia will devour Wikipedia.

Wikipedia may turn out to be the alpha and the omega of AI. In the beginning, Wikipedia serves to educate the masses of untrained and unskilled wannabe AI developers. In the process, Wikipedia editors who dabble in AI will create a machine intelligence capable of understanding the content of Wikipedia. Since March 8, 2007, there has been a project on SourceForge at http://sourceforge.net/projects/dbpedia for "querying Wikipedia like a database", and http://dbpedia.org explains that "DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia and to link other datasets on the Web to Wikipedia data."
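As a rough sketch of what "querying Wikipedia like a database" means in practice, the snippet below builds a GET request URL for a SPARQL query against the public DBpedia endpoint. The endpoint path, the `format` parameter, and the property/category URIs in the query are assumptions for illustration, not verified details of the DBpedia schema:

```python
from urllib.parse import urlencode

# A SPARQL query asking DBpedia for resources filed under the
# "Artificial intelligence" category (URIs are illustrative).
query = """
SELECT ?thing WHERE {
  ?thing <http://purl.org/dc/terms/subject>
         <http://dbpedia.org/resource/Category:Artificial_intelligence> .
} LIMIT 10
"""

def dbpedia_request_url(sparql: str) -> str:
    """Build a GET URL for a (hypothetical) DBpedia SPARQL endpoint."""
    params = urlencode({"query": sparql, "format": "application/json"})
    return "http://dbpedia.org/sparql?" + params

url = dbpedia_request_url(query)
```

Fetching `url` with any HTTP client would then return structured data extracted from Wikipedia, which is the whole point of the DBpedia effort.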

Eye has not seen, and ear has not heard, of the spectacular and mind-bending uses to which Wikipedia and its derivative endeavors will be put in the course of the approaching Technological Singularity.


Artificial Intelligence derived from... Natural Stupidity?, posted 11 Sep 2007 at 12:44 UTC by bi » (Journeyer)

Even without going into the big question of whether the AI's going to work in the first place, I still have one question:

If this oh-so-powerful AI project learns from Wikipedia, won't it also learn various wondrous behaviours such as vandalizing wiki pages, launching personal attacks, and claiming fake academic credentials? What's the great idea about training an artificial intelligence from a reservoir of natural stupidity?

Learning by automated posting, posted 11 Sep 2007 at 18:20 UTC by ncm » (Master)

If Wikipedia serves as the AI's knowledge base, then every time the AI learns something it should post an edit to Wikipedia. Like the rest of us, it will suffer from idiotic, random alterations to its new memories, not to mention outright deletions and explicit fabrications.

On another topic, I hope the AI doesn't forget to take its meds.

Sigh, posted 13 Sep 2007 at 14:58 UTC by fzort » (Journeyer)

Having a crank of this magnitude certified as Master and posting articles is an embarrassment for this site.

Re: Sigh, posted 13 Sep 2007 at 16:55 UTC by redi » (Master)

You might only need to persuade a few well-chosen people to remove some of their certs to remedy that. Only two people have certified mentifex as Master, pick the right places in the graph and make one or two surgical incisions ...

Re: Sigh, posted 13 Sep 2007 at 22:38 UTC by ncm » (Master)

Good luck. I've tried to get myself de-certified, because I don't, in fact, spend full time improving Free Software. (Sure, I use it, but who doesn't?) Probably the right thing would be for robogato to demote some of the original seeds. Even that might not help much. Blocking the flow from anybody who hasn't logged in in three years might help more.

Re: Sigh, posted 14 Sep 2007 at 01:14 UTC by StevenRainwater » (Master)

I'm working on some code for expiring outbound certs of inactive users. However, it won't affect mentifex at all. Both his master certs come from active users. You could either convince garym and mirwin to remove or lower their master certs, or you could look upstream for a way of lowering the cert level of garym and mirwin.

Another possibility is adding some type of article approval system. One thing Advogato has proved over and over is that even correctly rated masters are not necessarily capable of producing useful, on-topic articles. I've been thinking about a setup where any certified user can post an article but the article appears in a sort of article preview area where it's only visible to other certified users. Users can then give it some sort of simple thumbs-up/down rating and, if it accumulates a high enough positive score, it is automatically moved from the preview area to the front page where it becomes visible to the whole world.

Speaking of off topic, posted 16 Sep 2007 at 14:46 UTC by mirwin » (Master)

Perhaps an open AI project at www.wikiversity.org and some open AI references and texts at www.wikibooks.org would be a way for Mentifex to engage some other people interested in AI without finding himself subjected to periodic attacks. Nothing in this topic title led me to believe that this discussion would regard how I certify people or whether my own certification would be automatically delisted if I fail to participate on someone else's timetable.

Re: Speaking of off topic, posted 16 Sep 2007 at 17:11 UTC by bi » (Journeyer)

mirwin: Contrary to popular belief, Galileo wasn't burnt at the stake. Also, "they" didn't laugh at Einstein.

Perhaps the AIs should start with easy subsets, posted 17 Sep 2007 at 05:22 UTC by mirwin » (Master)

Perhaps the training of the prototype AIs could start with easy subsets of free online wikis and work up just like human children are expected to do.

http://www.soschildrensvillages.org.uk/charity-news/education-cd.htm

Eventually there will be large numbers of self-scored quizzes and tutorials at Wikiversity, so perhaps AI training would proceed better with selected materials web-accessible there rather than at Wikipedia.

http://en.wikiversity.org/wiki/Self_Paced_Reading_Labs

Also, since Wikiversity is an educational environment set up for learners in groups, teams, or as individuals, it may be more useful than an encyclopedia environment for AI training at all levels.

Some interesting Turing test potential within learning groups at Wikiversity might exist or be created whereas teams of Encyclopedia article editors generally value what you already know or can produce reliably. Some of the better qualified editors get a bit impatient with even tiny quantities of typical human stupidity. Bots are generally expected to be well behaved and not harass, irritate or hinder human activities at Wikipedia.

Re: Speaking of off topic, posted 17 Sep 2007 at 08:54 UTC by redi » (Master)

mirwin, have you not noticed how those "periodic attacks" coincide exactly with the posting of articles by mentifex? Funny, that.

Could it be that some consider his articles to be the rantings of someone well-known for dumping self-promotional puff-pieces all over the net? Is C:\Windows\Desktop\Mind.html worthy of an article?

As you suggest, he might want to find somewhere his ideas are better received.

erm..., posted 20 Sep 2007 at 10:42 UTC by Chicago » (Journeyer)

For a start I would like to remind people of http://www.nothingisreal.com/mentifex_faq.html

Following this I am a little concerned that we're discussing trying to break his "master" qualification. I agree that he doesn't support the community by ever replying or opening up discussion which might be valid.

I *am* interested in his idea, and certainly am more interested in discussion about an AI using Wikipedia as its base of information. Bi raises a very interesting point though - that because it contains "bad things" what is the possibility that the AI might turn out "bad", i.e. self serving, evil and annoying.

The trust system of Advogato has given him his rating. You're now discussing ways to break that system because you disagree that he should have that rating. Was not Advogato supposed to be a bastion of free speech? And as such you have to live with the negative side effects that that has.

Interestingly - Is not mentifex truly the only person who fully understands his theory, whether or not that theory is correct or accurate? To be a Master, what width of [accurate] knowledge must you have? The IET guidance for becoming a CEng suggests that you have to have a wide, broad range of knowledge and have an area of knowledge which you understand to a high degree of "depth".

Please now argue as to how to measure this shape of knowledge...

Re: erm..., posted 20 Sep 2007 at 12:33 UTC by redi » (Master)

You're now discussing ways to break that system because you disagree that he should have that rating.

Actually, I was suggesting how to use the system if you wanted to remove the rating. I don't know where breaking it comes into the picture. I'll live with it, but I was pointing out that if you don't like his rating the way to do something about it is to identify where his trust flows from.

w.r.t what Master means, please read the Advogato certification guidelines, not the IET guidance for a CEng. The relevant part is: an "important" free software project, i.e. one that many people depend on, or one that stands out in quality

another offtopic by me. Bad child bad!, posted 20 Sep 2007 at 14:53 UTC by Chicago » (Journeyer)

(I'm not actually saying mentifex should be a master - if I thought that I would have certified him as such....)

---

In your instance you were talking about putting social pressure on individuals to re-certify. The system is more than just the code that runs Advogato; it's the data that's in it already, and it could even be said that people's perceptions of what the different ranks are are part of that system.

---

I'm not saying that the Master is equal to a CEng - and certainly I think that the Advogato certificate guidelines are ... guidelines. I mean, the next line is that they have to be "an excellent programmer". This line in itself might rule out a thousand visionaries who have a lot to contribute to Open Source / Free Software; indeed, in my opinion what we are missing in this community are non-programmers. I think some of the more interesting articles and posts I've seen on Advogato have been about the production of an Arabic distribution of Linux, involving a large amount of internationalization work. I've also been given a lot of information by organizations representing disabled people about how to better write software that will be more usable by the minority groups it affects, without incurring large costs for them at a later point.

Then you have issues with people who have full-time jobs earning money writing non-free software who, in their spare time, work on free software or similar. They might not be part of a large OS/Free project, and almost certainly may struggle to be the lead developer of one, but the advice that they can give and the quality of that advice can be of Master quality - as I say, I'm just pointing out these are guidelines.

The role of a Master and that of a CEng are very similar - both are supposed to represent a very high level of technical ability alongside a wide base of knowledge, and both ideally are supposed to be providing something to the community. When you certify someone, surely you should also take into account their academic record and professional qualifications (but not hold it against anyone who doesn't have either).

What I'm suggesting here (or attempting to) is that it isn't well defined how to measure this ability, knowledge or involvement in the community. The CEng, being in my eyes a similar level of award, is much better documented and contains more guidance on how to achieve that "rank", and is therefore possibly a discussion point about how to measure this ability or how to qualify people for the work that they do.

---

at the risk of deviating more from the topic:

Interestingly, something I didn't note before is that StevenRainwater suggested he was working on ignoring certifications from inactive users, but the certification guidelines suggest that we should only certify based on the last year's work. Should certifications time out regardless of whether the certifier is active or not, to force people to recertify based upon the last year? (Admittedly this would require people to be more active in their certifications and would also mean inactive people have to start from scratch again.)

And..., posted 20 Sep 2007 at 23:29 UTC by salmoni » (Master)

So Arthur, you're saying that an effective artificial intelligence system could use wikipedia as its knowledge base? This doesn't differ much from corpus based attempts at AI (sorry to bang on, but read about LSA and similar attempts - maybe bi [who knows more than me] could point you in the right direction).
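salmoni's pointer to LSA and similar corpus-based methods rests on the distributional idea that words and documents sharing context are semantically close. LSA proper needs a singular value decomposition, but the underlying comparison can be sketched with raw term vectors and cosine similarity (toy sentences, no SVD; a sketch of the intuition, not of LSA itself):

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "wikipedia is an online encyclopedia",
]

def term_vector(text):
    """Bag-of-words term-frequency vector for one document."""
    return Counter(text.split())

def cosine(a, b):
    """Cosine similarity between two sparse term vectors."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

v = [term_vector(d) for d in docs]
# The two "X sat on the Y" sentences score far higher against each
# other than either does against the encyclopedia sentence.
```

LSA goes further by decomposing the whole term-document matrix so that words which never co-occur directly can still come out similar, but the comparison step is the same.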

Secondly, you're saying that a system of AI could use wikipedia? That AI would be achieved by harnessing the power of this knowledge? That the contributions of various people could make an equivalent of human cognition?

Lots of knowledge is in wikipedia (of varying authority, most of which seems reasonable enough if not always factually superb - we can live our lives with it). A good AI system can only reach the potential of the knowledge it is provided with, admitted; but the whole point of AI is making a system that can effectively manipulate this data and turn it into useful knowledge. Just typing in "Star Trek" and getting a list of pages related to Star Trek is not AI - it's just a basic information search. You need to produce the actual system that mimics intelligence. That's the difficult part that's been stumping some very talented researchers for decades: acquiring a basis of knowledge in the first place is not hard.

Besides, you haven't addressed how this system is supposed to learn independently of its source. Will it ever be able to form its own opinions about (say) the nature of Mr Spock's ears?

Also there is the problem of grounding. It's much like the conversation in "Good Will Hunting" (sorry for quoting the entire line: it's one of my favourites):

"So if I asked you about art, you'd probably give me the skinny on every art book ever written. Michelangelo, you know a lot about him. Life's work, political aspirations, him and the pope, sexual orientations, the whole works, right? But I'll bet you can't tell me what it smells like in the Sistine Chapel. You've never actually stood there and looked up at that beautiful ceiling; seen that. If I ask you about women, you'd probably give me a syllabus about your personal favorites. You may have even been laid a few times. But you can't tell me what it feels like to wake up next to a woman and feel truly happy. You're a tough kid. And I'd ask you about war, you'd probably throw Shakespeare at me, right, "once more unto the breach dear friends." But you've never been near one. You've never held your best friend's head in your lap, watch him gasp his last breath looking to you for help. I'd ask you about love, you'd probably quote me a sonnet. But you've never looked at a woman and been totally vulnerable. Known someone that could level you with her eyes, feeling like God put an angel on earth just for you. Who could rescue you from the depths of hell. And you wouldn't know what it's like to be her angel, to have that love for her, be there forever, through anything, through cancer. And you wouldn't know about sleeping sitting up in the hospital room for two months, holding her hand, because the doctors could see in your eyes, that the terms "visiting hours" don't apply to you. You don't know about real loss, 'cause it only occurs when you've loved something more than you love yourself. And I doubt you've ever dared to love anybody that much. And look at you... I don't see an intelligent, confident man... I see a cocky, scared shitless kid. But you're a genius Will. No one denies that. No one could possibly understand the depths of you. But you presume to know everything about me because you saw a painting of mine, and you ripped my fucking life apart. 
You're an orphan right?"

This is the problem of grounding: Will has none. He has abstract knowledge that can be quoted and recycled, but in itself is meaningless: it's composed of facts that do not relate to anything. There is nothing in terms of meaning in anything he says. It's just a sophisticated form of repetition.

I know this sounds very humanist(ic), but it's an important part of AI and psychology: how does a system of abstract representation "ground" itself sufficiently to make meaning in terms useful to a human or at least unto itself? Without that, there is no intelligence.

Salmoni wrote..., posted 21 Sep 2007 at 04:13 UTC by mentifex » (Master)

So Arthur, you're saying that an effective artificial intelligence system could use wikipedia as its knowledge base?

and Salmoni also wrote

Secondly, you're saying that a system of AI could use wikipedia? That AI would be achieved by harnessing the power of this knowledge? That the contributions of various people could make an equivalent of human cognition?

You are presenting above only the aftermath of my original claim, not its basic point of departure, which I will elaborate on right now for the sake of clarity. First, a little background. From 1-9 September 2007 I was stuck somewhere with Internet access and a lot of time on my hands. I used the time to speed up a Wikipedia-based update of my SourceForge mind-module webpages, about forty of them, such as the Security mind-module for artificial intelligence.

Gradually I realized my main idea here, not the concluding, secondary idea of teaching the AI Mind by feeding Wikipedia to it, but rather the primary idea -- that Wikipedia can teach human beings the vast and varied background info that they need for the purpose of constructing AI Minds. I spent years and decades gathering up that background information on my own, and now I see it all coming out in even greater abundance on Wikipedia. Now, the Mentifex SourceForge AI pages get hits from all over the world, especially from India and other parts of Asia. Well, as of Sun.16.SEP.2007, when I simultaneously updated thirty-nine mind-module webpages, all visitors from anywhere in the world will see topmost a Mentifex ASCII mind-diagram followed immediately by a Wikipedia link-cluster of articles that pertain to each particular AI mind-module. An Advogatite might object that such a compilation of Wikipedia links is easy to assemble, and I agree, but I also insist that it takes some knowledge of what to look for and some simple patience in spending the time to collect the AI links. Oh, by the way, what is going on with the following?

http://www.wikimindmap.org/viewmap.php?wiki=en.wikipedia.org&topic=Strong+AI+vs.+Weak+AI

A fellow named David Williams on the Artificial General Intelligence (AGI) mailing list wrote yesterday that "AGI and related wikipedia articles need editing and/or sourcing" and that using the above link will "map the cloud of articles related, most of them needing expert help." Advogato members are welcome to try it out.

Anyway, with the forty Wikipedia link-clusters, now so prominently displayed, Mentifex AI is now immensely more accessible not only to AI experts but also to AI newbies if they have access to Wikipedia. And I have only made my first run through the Wikipedia AI links. I have collected many more links that need to go into the second iteration.

If I were to stop posting tomorrow on the last day of the summer of 2007, or get run over by a truck, in either case so that Mentifex here no longer pushed his own AI project at every opportunity, imagine what might happen over the next ten years or so. High school kids, college students, whoever looked at Mentifex AI, might gradually soak up all that background Wikipedia information and might "pick up the ball" and carry Mentifex AI on further to a more successful conclusion. You see, there is already a certain degree of success, because the JavaScript AI Mind already exhibits primitive, rudimentary thinking, and because AIMind-I.com by Frank J. Russo (FJR) is an entirely separate AI spawned by the original Mind.Forth AI. FJR and I communicate only loosely, mostly in public forums. FJR recently made his AI Mind able to receive and "think about" e-mail messages. FJR has been mainly copying the source-code of Mind.Forth, but now he could very well use the Wikipedia links to understand the underlying theory and to develop an AI more advanced than the Mentifex Mind.Forth AI.

Salmoni concludes above by saying...

I know this sounds very humanist(ic), but it's an important part of AI and psychology: how does a system of abstract representation "ground" itself sufficiently to make meaning in terms useful to a human or at least unto itself? Without that, there is no intelligence.
I agree. And Mentifex AI provides for grounding in both theory and in the Sensorium Mind-Module. The "abstract representations" in the AI Mind need only to be elaborated upon with sensory memory data, such as the redness of an apple and the redness of blood, to let the concepts of "apple" and "blood" both be grounded in the sensory knowledge of redness. Now I saw "Good Will Hunting" several times and I really enjoyed the movie, but I do not aspire immediately to the poetic heights that you have so forcefully quoted above. My work on AI proceeds little by little, webpage by webpage, Wikipedia-link by Wikipedia-link. 'Nuf said, especially to the "Chicago" person above who wrote that Mentifex
doesn't support the community by ever replying or opening up discussion which might be valid.

Ahh proven wrong, posted 21 Sep 2007 at 10:11 UTC by Chicago » (Journeyer)

Mentifex replies! I apologise and retract my comment.

---

salmoni - You give a comparison of an AI against Will before his revelation in Good Will Hunting, suggesting that an AI which can regurgitate facts is not true intelligence; but Will does have free will - he has the ability to look at a painting made by the doctor, and he decides to use the analysis and background information that he has gleaned to insult the doctor and "rip his life apart".

If an AI exists in the virtual sense, does it need to understand certain concepts about the world? If wikipedia provides solid information about virtual things, and more abstract information about others, is that not enough - consider the wikipedia pages on Magnetic stripe cards: http://en.wikipedia.org/wiki/Magnetic_stripe_card#Financial_cards . Should an AI find a card in one of its "sensory inputs" it could use this as a basis for decoding the information that it receives.

Here's a question. Would an AI using Wikipedia as the base of its information be allowed to edit Wikipedia?

I ought to mention here that I oppose AI and the idea of it, mainly because I'm quite afraid of what the results might be if it is successful (by that I mean true AI, not just petty mathematical models or clever pattern-matching systems; I mean true, free-will programs or machines).

Wikipedia?, posted 21 Sep 2007 at 19:56 UTC by fzort » (Journeyer)

Why not start with, say, The Cat in the Hat instead? With a vocabulary of ~200 words and very simple grammar, it should be a walk in the park for AI4U. Please let us know when it's ready to answer questions about the Cat.

Cite Some Reference..., posted 21 Sep 2007 at 21:11 UTC by nymia » (Master)

It would be nice to see some references. For example, under LangComp...

http://www.cogsci.ecs.soton.ac.uk/cgi/psyc/ptopic?topic=language-comprehension

Another would be more published papers on stuff like Markov Chains that point out the significance of how words can have relevance to other words, for example.

Identifying these building blocks can be complex, but you never know--Google may have figured it out with their algorithms. Google deals with these things on an hourly basis, though.
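The word-to-word relevance nymia mentions is exactly what a first-order Markov chain over word bigrams captures: each word is mapped to the words observed to follow it. A minimal sketch on a toy corpus (illustrative only, nothing like what Google actually runs):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words seen to follow it --
    a first-order Markov chain over word bigrams."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain from `start`, picking followers at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
```

Because followers are stored with repetition, frequent bigrams are sampled more often, which is the "relevance" signal in its crudest form.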

Proof!, posted 25 Sep 2007 at 03:47 UTC by ncm » (Master)

I submit that nymia is in fact Arthur's proposed AI. Can anybody prove otherwise?

Good joke, Arthur, but I'm onto you.

bring it on, posted 27 Sep 2007 at 20:42 UTC by lkcl » (Master)

ha ha make that three Master Certs to mentifex - bring it on: start a de-cert war if you like. i'll put it up on my trophy list, which is getting a bit long ha ha.

to the topic at hand.

yes - i've worked with people who have done "knowledge classification" using advanced bayesian inference to perform automated "category" detection. a stage further on this is to make it recursive: once you have derived enough "categories", you need to perform bayesian inference on the categories in order to make sense of those.

wikipedia is indeed a large enough database of information.

however, from what i gather from a friend of mine who has studied quantum mechanics for forty years, classical computing - ones and zeros - is never going to be enough to derive "true" intelligence.

he postulates that you would require a "fuzzy logic" style computing engine - one in which quantum tunneling effects are deliberately not ruled out.

ironically, the design of such a system would require throwing away entirely all of the rules developed to date in stabilising of silicon circuits - quantum tunneling, ringing, etc. because those are exactly the kinds of effects that you need to introduce.

and regarding google: google is into organising the world's "information". that's becoming increasingly old, as there is far too much information. what we really need is the world's _knowledge_ to be organised.

Not original, posted 27 Sep 2007 at 22:11 UTC by fzort » (Journeyer)

Roger Penrose hypothesized a while ago that human consciousness is the result of quantum effects in microtubules in the brain (see his book "The Emperor's New Mind"). There's no evidence that supports this. The idea was not particularly well received.

doesn't matter, posted 28 Sep 2007 at 06:51 UTC by lkcl » (Master)

consciousness is sufficiently far advanced and so horribly recursive that it even does my tiny mind in, and its creation in silicon has eluded all of us.

a theory of consciousness is first required, in order for intelligence to be brought about (i don't agree with the word "artificial" - there's nothing "artificial" about any kind of intelligence).

so of _course_ there's no "evidence".

and a quantum state that stores information goes well beyond the "neurons" - into the realms of energy fields that extend into the entire nervous system, the surrounding environment, and also into the physical world via movement, pheromones, vibrations - the whole lot.

so of _course_ there's no "evidence".

for many people - in fact i'd go so far as to say that for pretty much everybody on the planet right now, it's simply too much.

interesting ethical question for you to consider. if a consciousness _could_ be created (using silicon, somehow), it would clearly need to feel happiness and clearly need to feel pain. would it be _ethical_ for humans to treat another consciousness as badly as they treated slaves, as badly as they treat animals?

would it be ok for humans to put such a consciousness in constant pain?

this is an important question. any scientist and mathematician who has researched the area of consciousness sufficiently to be able to model it is going to be sufficiently spiritually advanced that they will be considering this kind of question. "will my fellow humans, who enslave their peers and treat other conscious beings with such immense cruelty, also treat an advanced silicon consciousness as an 'object' to be tortured?"

in humanity's search for "artificial" intelligence, you already have the answer - the key is in the word "artificial" and also in the way that humans exploit absolutely everything that they can (corporations, "intellectual property").

imagine what would happen, in the current pathologically-insane corporate-run world, to a conscious silicon intelligence - it would be treated, i guarantee it, as "property".

as a slave.

to be experimented on, like the nazis did on jews in the 1930s and 1940s (the medical profession's knowledge about pain comes mostly from those experiments, because the nazis documented their research, thoroughly and efficiently, as scientists).

in other words, what i'm saying is, in a rather round-about way, that free software developers have rather a bit more responsibility on their hands than might first appear. our stand against the "ownership" of intelligence (viz "intellectual property"), to show people and corporations that there is a better way, has to clear the way first.

only then, once there is no risk that a silicon-based consciousness will be "owned" and be tortured and enslaved by callousness, will scientists and mathematicians be able to work in good conscience, to create a consciousness worthy of the word.

Re: doesn't matter, posted 28 Sep 2007 at 08:01 UTC by bi » (Journeyer)

for many people - in fact i'd go so far as to say that for pretty much everybody on the planet right now, it's simply too much.

Oh yeah... for most of us ignorant sheep, it's simply too much to certify someone as Master based on pie-in-the-sky rantage from vapourwareland.

Chicago's comment (which he has retracted, but well) talked about "visionaries". Well, the way things are going, it does seem awfully easy to be a visionary these days.

and a quantum state that stores information goes well beyond the "neurons" - into the realms of energy fields that extend into the entire nervous system, the surrounding environment, and also into the physical world via movement, pheromones, vibrations - the whole lot.

so of _course_ there's no "evidence".

In other words, there's no way to prove Penrose's theory false. Which only means that Penrose's theory has no relation whatsoever to reality.

Forth, posted 28 Sep 2007 at 08:18 UTC by lkcl » (Master)

arthur,

forth is, i intuitively believe, the engine behind DNA. it's certainly the engine behind nanotech. i believe that a little research will show that a forth-like engine behind DNA will prove to be similar to a turing machine.

the interesting thing about a forth stack is that it is a very very close representation of a quantum wave function.

it also should come as no surprise that an LR syntax parser can be written in forth in about 8 lines of code - that's fast enough and elegant enough that it does not actually need to be more advanced (LALR lookahead).
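Luke's eight-line Forth parser isn't reproduced here, but the flavour of the stack-driven evaluation that Forth builds everything on can be sketched in a few lines of Python. This is an illustrative toy in the same spirit, not code from Mind.Forth, AIMind-I, or any project mentioned in this thread:

```python
# A minimal stack-based evaluator of postfix (RPN) expressions, in the
# spirit of a Forth inner interpreter: one stack, one dispatch loop.
# (Illustrative sketch only, not Luke's 8-line Forth LR parser.)

def eval_rpn(tokens):
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()       # operands come off the stack in reverse
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack.pop()

# "2 3 + 4 *" evaluates as (2 + 3) * 4 = 20
```

The whole "parser" is the dispatch loop; the stack carries all intermediate state, which is why such interpreters stay so small.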

massively recursive and self-referring levels of parsing, along with phase-change representation, detection and inference, along with dempster-shafer-style knowledge inference, along with feedback loops that create and represent quantum singularities, are what make up consciousness as opposed to "artificial" intelligence.

we're looking at a level of elegance, far beyond our own arrogance, that should come as absolutely no surprise. evolution has after all had a hell of a long time to refine representations of intelligence down to the absolute minimalist degree.

Re: Forth, posted 28 Sep 2007 at 13:33 UTC by mentifex » (Master)

Luke,

Thanks for your recent certification of what user bi calls my pie-in-the-sky rantage from vapourwareland -- that description got a good laugh out of me. I went to Bi's user page to get its URL, and I was pleasantly surprised to see a copy, in Greek no less, of the "Kuriake Proseuche," or "Lord's Prayer," in the midst of Bi's diary entry for 6 September 2007. Stopping to read the Lord's Prayer in Greek (my BA degree is in Greek and Latin), it got me thinking that here I am among all these computer wizzes (whizzes?) on Advogato, and I barely understand what, no actually, I don't understand -- it's Greek to me -- what Bi is talking about ("bogotified"? does that mean "made bogus"?) in his 6sep2007 diary entry. Upshot: I can read the Greek in his entry, but I can't read the technical jargon. So for Mentifex here it's a reversal of the customary geek experience.

As for the certification, I don't dare reciprocate right now, or Advogaticians might say that I or you and I were gaming the system. But I would like to say that I am glad to be able to post an Advogato article now and then, but I don't care about the Master certification, because the level just below it is sufficient for being able to contribute articles. About which I would like to say that, in the future, if Mentifex AI turns out to be on the right track, then I will probably get blamed for not pushing even harder to circulate AI memes.

Now, about your Forth language musings above, Luke. You left me in the dust just as much as Bi did, with your celestial take-off into musings far above my level of geek/philosophical competence. But I will use the occasion of this courtesy-reply to venture a few low-level thoughts on Forth and to report what happened to me earlier today.

What happened was that for the first time I sent an e-mail to the mentifex-class artificial mind at http://AIMind-I.com (Frank's AI Mind) and it seemed to make a valid response at its online Web-presence. Not realizing that there was a special format to be followed, I simply declared a subject of "cats" and I sent "cats eat fish" as a message body. Then quickly I went and looked at the AI Mind website and I saw that the most recent thought of the AI Mind was, "CATS CHASE BIRDS." So the AI already knew something about cats.

Poking around on FJR's AI Mind site, I discovered http://aimind-i.com/4th-email.htm where Frank shares his knowledge of how to let a Forth program, AI or not, receive and process e-mails. Now, Luke, I don't have the Forth expertise or Windoze expertise (I'm an Amiga 1000 guy) to figure out how to give an AI Mind the ability to input e-mails, as Frank J. Russo has done. Nevertheless I expect in October of 2007 to do some new Forth programming in my Mind.Forth AI at http://mind.sourceforge.net/mind4th.html simply because I "feel the itch" again. That is, I feel the itch to code the AI further in both JavaScript and Win32Forth. I would like to achieve a cognitive chain-reaction, that is, debug the AI a lot further so that it thinks on and on without derailing into pie-in-the-sky rantage from vapourwareland or other aberrations. For instance, I would like to try using excitational inhibition to let the AI traverse its knowledge base (KB) and utter all possible thoughts on a subject. If it thinks, "cats eat fish," then the word "fish" gets deactivated and the AI is able to proceed to "cats eat mice" because the word "mice" has not yet been deactivated -- that sort of thing. In other words, I plan to bumble on in Forth and JavaScript, and if anything unusual happens during my sojourn in vapourland, you may see some pie-in-the-sky rantage about it right here on Advogato.
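The deactivation scheme Arthur describes can be sketched in a few lines. To be clear about assumptions: the knowledge-base layout and function names below are invented for illustration and are not how Mind.Forth actually stores its engrams; only the "inhibit the just-used object so the next traversal picks a fresh one" idea comes from the post above.

```python
# Toy spreading-activation-with-inhibition sketch: a knowledge base of
# subject-verb-object facts; after a fact is uttered, its object is
# inhibited, so repeated traversal exhausts all thoughts on a subject.

def exhaust_thoughts(kb, subject, verb):
    inhibited = set()
    thoughts = []
    while True:
        candidates = [obj for (s, v, obj) in kb
                      if s == subject and v == verb and obj not in inhibited]
        if not candidates:
            break                # knowledge base exhausted for this subject
        obj = candidates[0]
        thoughts.append(f"{subject} {verb} {obj}")
        inhibited.add(obj)       # deactivate "fish" so "mice" can win next
    return thoughts

kb = [("cats", "eat", "fish"),
      ("cats", "eat", "mice"),
      ("dogs", "eat", "bones")]
```

Here `exhaust_thoughts(kb, "cats", "eat")` would utter "cats eat fish" and then "cats eat mice", exactly the progression Arthur wants, before running dry.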

- Arthur

Re: Forth, posted 28 Sep 2007 at 18:10 UTC by bi » (Journeyer)

what user bi calls my pie-in-the-sky rantage from vapourwareland -- that description got a good laugh out of me. [...] Upshot: I can read the Greek in his entry, but I can't read the technical jargon.

You obviously don't know what you're talking about, or what the term "vapourware" means. And I doubt that you care to actually find out.

circulate AI memes

How many times do we have to say "no" before you'll acknowledge "no" as an answer? How many times? And just to reiterate: no, we don't want any of your "AI memes". Stop. Now.

Intentional humor?, posted 29 Sep 2007 at 07:32 UTC by ncm » (Master)

In one sense, I have to agree with Arthur here: I can't tell who Luke is making fun of, going "Forth is the engine behind DNA" and all. Maybe because this is the "forgot my meds" channel on Advogato, it doesn't matter.

But, Luke, if you're serious, it's past time you understood that anything you are inclined to believe intuitively is just wrong, except where you have actually constructed the chain of evidence to demonstrate it. Intuition is a great source of inspiration, but even more so of tempting falsehoods. "What gets you in the end isn't what you don't know, it's what you think you know that isn't so." Rooting out the latter is hard enough without adding more with abandon.

ha ha, posted 30 Sep 2007 at 08:31 UTC by lkcl » (Master)

it's all good clean fun, guys and girls.

i did a little research into knowledge processing, information processing, bayesian (and its generalised theorem dempster-shafer) inference, and grammar parsing, a few months back. i investigated leo, freemind and a few other systems which aim to help automate information processing to derive knowledge. oh. and goldparser.

goldparser is possibly one of the most valuable information processing and reverse-engineering (knowledge-derivation) tools ever to come out as free software.

i was very surprised at the level of recursion involved, in information parsing. _really_ surprised.

i must apologise - i really must. this particular area - automated knowledge derivation - is one in which research is simply... completely lacking. plenty of research is done into "AI", plenty into "expert systems" - but nothing like the kind of multi-level recursive grammar-parsing or automated grammar-parsing that's _really_ required to infer knowledge.

the key to intelligence is knowledge inference. that means that you need a system which is capable of automatically working out its own grammar (dempster-shafer inference), then, using that grammar, automatically working out its own syntax. then, using that syntax, automatically working out its own .... i don't know what comes next!
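For readers who haven't met dempster-shafer inference: its computational core is Dempster's rule of combination, which fuses two "mass functions" over sets of hypotheses and renormalises away the mass assigned to conflicting (empty) intersections. A minimal Python sketch follows; the hypothesis names are made up purely for illustration:

```python
# Dempster's rule of combination. A mass function maps frozensets of
# hypotheses to belief mass (summing to 1); combine() fuses two of them.

def combine(m1, m2):
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb      # mass lost to the empty set
    norm = 1.0 - conflict                # renormalise surviving mass
    return {k: v / norm for k, v in combined.items()}

# Two sources of evidence, both leaning towards "rain":
m1 = {frozenset({"rain"}): 0.6, frozenset({"rain", "sun"}): 0.4}
m2 = {frozenset({"rain"}): 0.7, frozenset({"rain", "sun"}): 0.3}
```

Combining `m1` and `m2` concentrates mass on "rain" (0.88) while leaving a little residual uncertainty on the full frame (0.12), which is the sense in which the rule "infers" a sharper belief from two vaguer ones.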

you see what i'm getting at? it's all so incredibly recursive and self-referring that it's ... mind-blowing.

anyway - my apologies to all because this is the first time i've written down some of my thoughts over the past eight months on these topics and so it's mostly glimpses rather than actually cohesive and... useful stuff ha ha :) if there's anyone out there interested in doing silicon consciousness who has a budget of about $100m to spare i'm happy to make things happen :)

coherent intuition, posted 30 Sep 2007 at 08:46 UTC by lkcl » (Master)

ncm, darling, i know you mean well, and i do understand what you're on about. feel free to classify what i've written as brainstorming - but don't dismiss it out-of-hand. in fact, please don't dismiss anything _anyone_ says out-of-hand.

just in case.

i don't know if i'm on the right track any more than you do - nobody does, until exactly as you say, there's proof (and that requires time, and it requires resources - so somebody give me some, and i will get results...) this area's been researched for decades, now, with no really significant breakthroughs. so something different - a different perspective - is worth mentioning, worth exploring, yes?

so - it's time for you to stop dismissing, stop resisting, stop criticising. especially on public forums. other people, skilled in the field of consciousness research, can perfectly well assess what i or anyone else has written, without needing to listen to your doubts, or anyone else's, or even my own (yes, my own doubts) as to the usefulness of what i've written.

this is _basic_ stuff, ncm, and after seven years of running advogato i really shouldn't have to point it out any more. i'm really sorry to everyone reading this, and especially to you, ncm, for having to remind you and take up everyone's time.

please - y'know - just... ask yourself the question: "what am i going to achieve by criticising luke (or mentifex. or anyone else.). _again_." we're not going to stop - this is the internet!! go with it, for goodness sake, or just be content that the communication isn't intended for you, and let it go.

sorry folks.

Re: coherent intuition, posted 30 Sep 2007 at 09:46 UTC by bi » (Journeyer)

i don't know if i'm on the right track any more than you do

...but somehow you get this idea that we should just shut up, while you should continue talking. Um, excuse me?

ask yourself the question: "what am i going to achieve by criticising luke (or mentifex. or anyone else.). _again_."

Actually, I'm hoping that mentifex would just direct his energies to something more fruitful than continually making wild claims on Advogato. Like, you know, creating a polytonic Greek typesetting program for his Amiga 1000. That, I say, will be an über-cool project.

The rest of your last post was just a blown-up version of the totally generic "as we know, there are known knowns, known unknowns, and unknown unknowns" line. Which, as with everything else, tells us exactly zilch on what's so novel and earth-shattering about mentifex's `ideas'.

what's so novel and earth-shattering, posted 30 Sep 2007 at 14:57 UTC by mentifex » (Master)

Because I grew up in an era before there were personal computers (other than the Brainiac which I received for Christmas when I was twelve), my trajectory into becoming a so-called AI crank evolved at such a glacial speed that the AI quest planted itself firmly and inextricably within my psyche. As a college sophomore at age nineteen, I formally began my project by starting a personal AI notebook of thoughts and conjectures, while also running neuronal and alife experiments using several dozen electromechanical relays.

After a dozen years of theorizing about input devices, memory channels and robot outputs with volition or free will, one day at age thirty-two I was visualizing my design for memory channels flowing in parallel and I was trying to figure out the correspondence between image engrams in the visual memory channel and word engrams in the auditory memory channel. On the simplest level I could see a one-to-one, two-way correspondence, but I could not figure out how multiple images would evoke pluralized English WORDS, or how a sequence of action-images would select a particular VERB in auditory memory. It suddenly dawned on me that there was something extra, a kind of dark-matter-of-the-universe, lying in the interstices between or among the concrete sensory memory channels, a kind of abstract memory channel or maybe what people call a "semantic memory" channel.

Then for several months I was perplexed and despairing of my ability to do any further work on AI, because I could impute the existence of the abstract memory but I had absolutely no idea of its nature.

Eventually the mental logjam began to thaw and break apart. In my personal AI journal at http://mind.sourceforge.net/theory3.html I began to speculate on the nature of verbs and on their representation in the human brain. Thus the Mentifex AI design began rushing out of me over a six-month period. As my 33rd birthday approached, I tried to publish my new theory of mind in a series of three scientific journals which rejected my paper as being "speculative" or "inappropriate" or not "hard science," so my AI project went into stagnation for a decade.

When my twentieth anniversary since college graduation occurred, I decided that I had to start pushing my ideas in earnest or they would never fructify. Using Amiga desktop publishing, I paid for a Mentifex AI ad in a neural networks journal, and the project has been up and running ever since. Meanwhile, the lone individual Tim Berners-Lee created the World Wide Web (what a kook TimBL must have seemed like at first) and Mentifex AI went Web-borne.

To sum it all up: because I believe that I found the Holy Grail of Neuroscience -- how the mind emerges from associative neurons -- I stand firm, just as Martin Luther once defiantly wrote: "Hier stehe ich; ich kann nicht anders. Gott huelfe mir."

Re: what's so novel and earth-shattering, posted 1 Oct 2007 at 17:11 UTC by bi » (Journeyer)

OK, let me get this straight. You went to college at age 19, you came up with some ideas, and your paper submissions were rejected, and you compare yourself to Tim Berners-Lee and Martin Luther, ergo your ideas are obviously novel and earth-shattering?

What?

You see, the only way to show that your ideas are actually novel is to compare them with past ideas for solving the same problem. I mean, how do you get off claiming that your idea is different, if you don't even have any idea of what it is different from? And no, coming out with an idea on your own doesn't automatically make the idea original -- independent discovery of the same idea is something that happens quite a lot in science.

And comparing yourself to luminaries is nowhere near an original idea. Once more, Galileo wasn't burnt at the stake.

Re: what's so novel and earth-shattering, posted 1 Oct 2007 at 18:57 UTC by mentifex » (Master)

User Bi wrote:

OK, let me get this straight. You went to college at age 19,
Went to college at 18, started AI project at 19.

you came up with some ideas, and your paper submissions were rejected, and you compare yourself to Tim Berners-Lee and Martin Luther, ergo your ideas are obviously novel and earth-shattering?
No. Tim Berners-Lee is an example of a lone individual achieving something that dramatically affects ten billion other individuals. It's not that I compare myself to TimBL, but that I take inspiration from him -- likewise from the defiantly lone-individual stand of Martin Luther, saying "Here I stand; I cannot do otherwise. God help me."

You see, the only way to show that your ideas are actually novel, is to compare them with past ideas for solving the same problem. I mean, how do you get off claiming that your idea is different, if you don't even have any idea of what it is different from?
It is different from the vacuum that existed there before.

And no, coming out with an idea on your own doesn't automatically make the idea original -- independent discovery of the same idea is something that happens quite a lot in science.

Now to report some breaking news...

http://tech.groups.yahoo.com/group/aima-talk/message/784

It would be nice if future editions of the AIMA textbook were to include some treatment of the various independent AI projects that are out there (on the fringe?) nowadays.

http://mind.sourceforge.net/Mind.html in JavaScript is an AI Mind tutorial program that demonstrates the important technique of "spreading activation" at work.

http://AIMind-I.com by Mr. Frank J. Russo is a program in Forth spawned by the SourceForge Mind project but running independently and now able to receive e-mails.

http://mentifex.virtualentity.com is a site devoted to Wikipedia-based open-source artificial intelligence -- the idea that Wikipedia is where students of AI may not only learn the multidisciplinary subjects needed for creating true AI, but may turn around and write contributions to the very articles being studied.

http://www.mail-archive.com/agi@v2.listbox.com/msg07444.html is the same message posted to the Artificial General Intelligence (AGI) mailing list. Both the aima-talk post and the AGI list-post may be deleted by the powers that be, but for one brief shining moment today a blow has been struck by very-truly-yours Mentifex for Wikipedia-based open-source artificial intelligence.

Re: what's so novel and earth-shattering, posted 1 Oct 2007 at 19:40 UTC by bi » (Journeyer)

It is different from the vacuum that existed there before.

The "vacuum" is only in your head. Tons of academic papers on machine translation, speech recognition, natural language generation, and other related fields, do not a "vacuum" make -- and you know it. You don't get to claim your work fills a "vacuum" just because you can't be bothered to read up on all the previous work.

Evolution Potential of Cluebot, posted 26 Oct 2007 at 09:21 UTC by mirwin » (Master)

Chicago: Bots that are not AI are already allowed to edit Wikipedia. Effective developers include emergency cutoff switches for administrators and respond regularly to users reporting errors or confusing behavior by the bot.

Cluebot is interesting to me because he routinely reverts routine vandalism within minutes of its occurrence. As can be seen by comments on Cluebot's userpages, human vandalism fighters have been highly impressed and in their comments tend to react to Cluebot as an entity.

As can be verified from Cluebot's source code, it uses some pretty basic techniques to change the economics of vandalism. Now, instead of some teeny bopper deriving huge satisfaction from wasting another person's time cleaning up their mess or joke within Wikipedia, Cluebot consumes a tiny bit of bandwidth and processing time doing the same, and when detecting a persistent pattern of vandalism it potentially reports the IP address and/or account to human administrators for further action.
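Those "pretty basic techniques" can be pictured as a score-based heuristic over the text an edit adds. The following Python sketch is purely illustrative and is not taken from ClueBot's actual source; the rules and threshold are invented to show the shape of the idea:

```python
# Toy vandalism-scoring heuristic: cheap pattern checks over the text
# added by an edit, summed into a score; a bot would revert above some
# threshold. Illustrative only, not ClueBot's real rule set.

import re

def vandalism_score(added_text):
    score = 0
    if re.search(r"(.)\1{5,}", added_text):            # "aaaaaaa" repetition
        score += 2
    if added_text.isupper() and len(added_text) > 20:  # sustained ALL-CAPS
        score += 2
    for word in ("poop", "lol", "suck"):               # crude word blacklist
        if word in added_text.lower():
            score += 1
    return score

def should_revert(added_text, threshold=2):
    return vandalism_score(added_text) >= threshold
```

The economics point above falls out directly: each check costs microseconds of CPU, while defeating all of them takes more effort than most drive-by vandals will spend.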

Some of what I find very interesting in Cluebot as a precursor of AI possibly evolving on Wikipedia (wikimedia based virtual environments) is that we all know teenagers can read code as well as or better than the rest of us. They often have incredible amounts of time and energy to expend with monomaniacal focus. When Wikipedia's bot developers start including some of the latest (and earliest and in between) AI techniques to achieve better returns on investment of resources dealing with vandalism, whether injected by human effort directly or by vandal bots... then it seems like a race to the potential singularity conditions implied by mentifex will be on.

There are already many mediawiki based sites online, single computer installations as well as behemoths such as Wikipedia with server farms. Scientists are working with the mediawiki code to develop a peer to peer capability. Online communities of humans and bots will continue to evolve to create and maintain online information resources. How long can it be until some form of recursion and/or mutual modification between bots that routinely interact with each other, with various tools and databases ranging from simple indexes to online tailored versions of books and libraries, and with humans starts to look like something recognizable in humans?

Perhaps the AI that has been so difficult to create by design can be achieved on the internet by allowing pieces to interact with each other and with humans while the designers experiment and keep pieces that look useful. Consider that by modifying the lists of data used by humans and other bots alike Cluebot is already modifying human behavior and possibly other bots behavior. Programming is creative high end behavior but it is still behavior. Is it such a stretch to imagine that an environment can and will be achieved where large groups of bots or "agents" that are increasingly allowed and encouraged to modify each others databases, tool behavior, and programming will begin to display life like characteristics such as intelligence?

