On Surviving Singularity

Posted 13 May 2000 at 22:48 UTC by ping

By now, probably everyone here has seen Bill Joy's warning to the technology community that we may soon be in over our heads. Although it has drawn a lot of public attention to issues that deserve serious thought, some of us worry that Joy's proposal to relinquish technology research would take us down a much more dangerous path.

Please read this article by Virginia Postrel, written in response, for a good articulation of the other side of the issue.

How do you think we should deal with the potential dangers that lie ahead? Let's start talking, and getting people thinking about it.

We will be tackling lots of these issues at Foresight's "Confronting Singularity" event next weekend. Your thoughts on the topics listed there are welcomed!


Some quick points, posted 13 May 2000 at 23:52 UTC by sneakums » (Journeyer)

  1. There is no "everyone" on the Internet. I don't as a rule read Wired.
  2. There is no "nanotechnology". Apart from the usual crap we get on Slashdot, the furthest nanotechnology has taken us is moving small numbers of atoms small distances in order to spell the names of companies that should know better.
  3. The Unabomber is a cold-hearted killer, not a prophet of a new tomorrow.
  4. Robots are not inherently "superior" to humans; the fact that mammals were able to oust marsupials from South America is not relevant. The basic assumption of robotic superiority is flawed.

I could read the rest of the article, but I am drunk and tired, and if the robots take over tomorrow, I won't really care. Kill me, please. Let me sleep at last.

Human technology, posted 14 May 2000 at 03:16 UTC by lilo » (Master)

I guess some of us are not as inclined toward 1960's machines-take-over scenarios as others. Diffidently I will point out that the technologies of creating intelligent machines are human technologies, and the creatures we create are apt to be an awful lot like ourselves.

Would I be willing to "live silicon" to live to a ripe old age, as opposed to dying at age 70 or 80 or 90? Sure. It's a perfectly natural progression of technology, brought to you by the natural universe, which occasionally brings us intelligent creatures.

Nanotechnology is real, posted 14 May 2000 at 07:27 UTC by ping » (Master)

Please take this seriously. May i suggest some required reading before one attempts to declare that "there is no nanotechnology"?

At the very least, read the first chapter of Engines of Creation for an overview and responses to some common objections.

Spelling the names of companies with atoms was new in 1986. It should be clear from a quick glance at the abstracts from the last MNT conference that a great deal of progress has been made since then. Two different groups have built very basic molecular motors recently, for example (more details here).

Anyway, this article was not posted specifically to open a debate about nanotechnology, though it would be interesting to explore the policy issues there. It should be clear that there are several technology areas (of which nanotech is one, but a big one) that could threaten or transform the existence of humanity as we know it.

This isn't really a community of nanotech researchers; it's a community of technologists involved in a variety of things that might end up changing the world. But we want to change the world safely. So this seemed an appropriate place to open the higher-level discussion of how to deal with potentially dangerous technology research: try to prevent it, promote lots of investigation, or do something in between?

Re: Nanotechnology is real, posted 14 May 2000 at 13:25 UTC by sneakums » (Journeyer)

I apologise for my facetious and unthinking response. I'll go and read Engines now. Looks interesting.

The article said nothing, posted 14 May 2000 at 22:08 UTC by dan » (Master)

I could summarize the quoted article as "Bill Joy is wrong because it would be impractically hard to turn back the tide now".

Which may well be true, but is not really all that insightful.

Actually..., posted 14 May 2000 at 23:55 UTC by mwimer » (Journeyer)

A better summary of Virginia's article would be:

...I read this article by Bill Joy, a supposed scientist, and was sort of incensed by the drivel that he spewed from his supposed scientific mouth...

I think Virginia's take on Joy's article comes pretty close to home for most of us "Computer Scientists." I'll believe in this mythical creature when I meet him/her face to face. Just like Virginia, I concede that Joy is a smart guy, but his argument has no basis in true scientific reasoning.

It's always sorta sad to see top people in our field make us all look like complete morons before the scientific community. Sure, these might be reiterations of true scientists' concerns, but you don't see those scientists stopping their research. They just have the foresight to see that all technology can be used for good or bad, and are preparing people for the possibilities the future presents.

Re: On Surviving Singularity., posted 15 May 2000 at 09:05 UTC by k » (Journeyer)

I agree with points in both Bill's and Virginia's articles. I think Bill has got the right idea but is attacking it from the wrong angle, and I think Virginia didn't figure out what Bill is actually on about. It is not a question of whether we should continue or stop research on various scientific topics; it's whether we will be responsible with it.

Personally, I don't think the human race is ready to take full responsibility for the kind of research that is taking place today. Take the internet (ha!) as a case in point - years ago, even newbie-type sysadmins and programmers realised the dangers lurking in the internet structure. But the users of this internet were a (generally) responsible bunch, and if you weren't responsible, you just lost your access. Today, the same fundamental internet is still there, but the access is much more open. Do you ever hear of responsible users making the internet a better place? Perhaps. But, do you hear when an anonymous 'hacker' (for various incorrect uses of the word hacker) uses a shrink-wrapped script kiddie program to take down a set of very large websites? I saw it at the airport flying home. I saw it on the plane. I saw it when I got home 8 hours later. I didn't stop seeing it for weeks.

Take Bill's article and map it onto this type of scenario, which has already happened and is well documented. Imagine if the script kiddie were a slightly maladjusted person, and the internet were ourselves and our planet. All it takes is one slightly clueful person to take something like genetic engineering or nanotechnology, and similar havoc could be wreaked on ourselves. Or consider a company with a grasp on a technology, which develops an idea for commercial gain but without full understanding of what could happen. You might not think it feasible, but we have plenty of examples of just that happening today, and the only reason systems haven't spiralled down into destruction is that a small group of people run around trying to keep things together. Bill wasn't suggesting science stop for the sake of stopping; he's suggesting science stop until we become responsible enough to use it. How we get to being responsible is a totally different discussion.

squeamish ossifrage, posted 15 May 2000 at 15:27 UTC by graydon » (Master)

Reason is a stock laissez-faire economic propaganda outlet. They're not interested in the issue beyond making sure that government stays out of the way of industry.

But that's a side matter; what really bugs me is that Joy, after all the years in software development, really thinks anyone has the brains to develop nanotech or biotech that's nastier than mother nature's brew. What exactly is going to give these little buggers any more of a chance against the noisy, competitive, hostile natural world than your average bacterium? Sure, we've had plagues which wipe out a lot of people, and wiping out a lot of people is definitely a bad thing; but self-reproducing, hostile, all-consuming engines of destruction are nothing new. Such things are practically the rule of living on earth.

Where will they get their power? How will they defeat all our active immune systems? How will they avoid becoming food for one another, or for other creatures? How will they be able to move and spread? Who will co-ordinate them? How will they get supplies which are not locally available? How will they adapt to new strategies in their enemies? How will they evade detection? How will they make strategic decisions?

The problem with becoming an all-powerful badass destructo-machine is that you need to solve these problems. They're real problems which make or break a global genocide campaign. If you're easily disabled, captured, immobilized, deactivated, inoculated against, or counter-attacked, you lose. All these issues have been fought out over many millions of years, and we are currently the reigning champions. Joy's suggesting that a bedroom haxx0r can develop something to out-do every round of battle our genes (and brains) ever won against anything else? Seems pretty unlikely.

Give us enough time and opportunity and we will wipe ourselves out, posted 15 May 2000 at 17:57 UTC by Rasputin » (Journeyer)

I certainly sympathize with Joy's assessment of the extreme level of danger we face now and in the not-so-distant future. To put it into context: when I went to high school (about 20 years ago), nobody would have believed that a student (or group) would bring guns to school and open fire on the other students, killing indiscriminately. "That will just never happen..." is the response you would have received for suggesting such a scenario. Then our ability to manufacture guns cheaply and in huge quantities, coupled with a rapid decline in social skills (but I only watch 8 hours of TV a day, why should I go outside and play with kids I don't like), meant this almost had to happen. Ten years (or so) ago, when most of us started to hear about the internet for the first time, we would have laughed at the possibility of 16-year-old kids doing extensive damage, potentially causing $millions ($billions?) of losses to businesses that need the internet to survive. Now that computers and internet access are easily-obtained commodities, how can we be surprised when it happens? Advances in technology have generally included advances in our ability to hurt ourselves and others, without an associated increase in self-restraint.

I certainly also see some things in Postrel's response to agree with. Asking scientists and technologists to stop advancing the state of the art would be like asking them to stop breathing. You could certainly make the request, but don't be real surprised when you're ignored. As well, it's hard to justify forgoing the potential benefits of these advances because of the potential pitfalls, especially when most of the harm done to ourselves and the planet could have been avoided with a little self-restraint and personal responsibility.

I see a lot of work being done to advance technology (and, since I participate, I guess I see it as a good thing), but I don't see any meaningful work being done to advance us as social and responsible beings. I hear a lot of talk about social and personal responsibility, but a lot of people get really quiet really quickly when they're asked to inconvenience themselves to help others. I spent several months in Rwanda during the civil war there, and the only technology they needed was machetes to kill a million or so people. That most of the world averted their eyes (as in Sierra Leone now) only highlights the problem. Technology is only half the problem; the other half is what we choose to do with it. Unfortunately, our decision-making ability (computer assistance notwithstanding) has apparently not evolved since shortly after we learned to stand upright and swing a club at the lions to get some food.

intelligence, posted 15 May 2000 at 19:28 UTC by stefan » (Master)

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them.

What I hear in this assumption is the reductionist viewpoint that all it takes to have intelligent beings is a brain-like device. Even if there were devices comparable to the human brain, I think half of the requirements would still be missing:
The reason we are what we are is not the coincidental fact that we are equipped with a complex, high-capacity brain; it is the social environment and culture we live in, which of course goes hand in hand with the development of the brain.

As long as technocrats are focussing exclusively on the 'hardware', I don't see us coming close to the above assumption.

On the danger of biotech, posted 16 May 2000 at 09:46 UTC by eivind » (Master)

Graydon writes: what really bugs me is that Joy, after all the years in software development, really thinks anyone has the brains to develop nanotech or biotech that's nastier than mother nature's brew.

I happen to agree with Joy here, at least for biotech. You do not develop biological weapons from scratch. You base them on existing diseases.

According to biotech people I've talked with, it is rather easy to combine viruses in such a fashion that one virus acts as a carrier, and when it breeds (using a host cell), another virus is also released. For instance, you could combine an influenza virus with HIV, so that anybody who caught the flu would afterwards be HIV-infected.

A lab capable of doing this stuff (you need to send off for replication, and pay per sequence in addition to this) should run at less than $60,000 (again, according to biotech people; I do not have expertise in the area). The aggressor would also have to get hold of influenza or cold viruses and an HIV strain.

Eivind.

Problems with biotech, posted 16 May 2000 at 21:36 UTC by jab » (Journeyer)

eivind:

According to biotech people I've talked with, it is rather easy to combine viruses in such a fashion that one virus acts as a carrier, and when it breeds (using a host cell), another virus is also released.

You basically want to do the same thing for gene therapy, as well. Except, of course, you put good genes in place of the deadly virus. However, gene therapy still doesn't work that well. I don't know all of the details, but I imagine one problem is that the viral capsid containing the new (R|D)NA will be less stable than the native virus, as the structure evolved with the sequence (obviously) and the capsid likely makes use of the (R|D)NA as "scaffolding".

This could be overcome, of course, by selection of the two viruses, and careful integration of the two sequences. But, it's still not easy. If it was, we would have cures (or very effective treatments) for most genetic diseases.

In general, Graydon is right. Things we make are generally not nearly as effective or efficient as what has been evolving on its own. Biological systems have an annoying tendency to balance out. When we mutate a protein to make it bind its ligands more tightly, rather than getting a more "powerful" protein, we generally see a loss of stability, and thus no net effect on activity. It's not a fundamental property, though; you could increase both affinity and stability, but it's much harder, and it requires the application of selective pressure (which can be tough and expensive to do when you're trying to make a super-lethal virus, i.e. lots of incredibly dangerous mouse and/or monkey carcasses).

Summary: It's extremely unlikely that some terrorist is going to be able to construct a world-wide plague in their basement. They could cause some damage, but probably not a whole lot more than if they had just spent their money on bombs.

Tangent: What we should be worried about is what agricultural biotechs are doing (like Monsanto and whoever is making the Bt-expressing crops). The problems they're creating, though, aren't from a lack of biological understanding; they're the result of a greedy, destructive, archaic, and exploitative economic system.

Stagnation more deadly than development, posted 16 May 2000 at 23:33 UTC by Uraeus » (Master)

Bill Joy is correct that these future technologies might be the end of mankind, but the current situation is a certain killer of mankind.

Just increasing the number of cars per person in China to a level equal to that of the western world would bring enough pollution to poison us all. The failure of international environmental conferences and treaties shows us that politicians don't have the ability to solve this.
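As a rough sketch of the scale involved, here is a back-of-envelope calculation in Python; every figure is an assumed year-2000 ballpark for illustration, not sourced data:

    # Back-of-envelope check of the scale claim above. Every figure
    # here is an assumed year-2000 ballpark, not sourced data.
    china_population = 1.3e9        # assumed
    western_cars_per_person = 0.5   # assumed, roughly US/EU ownership levels
    world_car_fleet = 7e8           # assumed size of the entire existing fleet

    # Cars added if Chinese ownership reached assumed western levels.
    additional_cars = china_population * western_cars_per_person
    print("additional cars: %.1e" % additional_cars)                # ~6.5e+08
    print("fleet growth: %.0f%%" % (100.0 * additional_cars / world_car_fleet))  # ~93%

On those assumptions alone, the world's car fleet would nearly double, which gives a sense of the scale that conference-by-conference diplomacy would have to cope with.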

The large-scale destruction of nature and biodiversity brought on by overpopulation will, if it does not kill us, leave us with a rotten shell of a planet. The failure to halt this growth, for religious, cultural and political reasons, shows us that this problem will not be solved by political leaders either.

The increasing danger of a plague killing off most of the world's population, brought on by the heavy use of medication (like antibiotics) and the high population density that comes with centralisation and overpopulation, has not been addressed at all by world leaders.

New technologies might actually save the world, because they will enable us to solve many problems cheaply and without political agreements needing to be made. So even if these new technologies might be our doom, not developing them will definitely be our doom.

Humanity is ruled by fear and religious superstition, and that will remain the truth until natural or perhaps self-made evolution takes us beyond our current level. Until that point we need technology available which will enable the few to mend the destruction indirectly caused by the existence of the many.

The chance that these technologies end up in the hands of people deeply afflicted with the psychological weaknesses of the many, and as such become tools of immense destruction, cannot be allowed to stop the development of said tools, because without them humanity and the surviving members of earth's other lifeforms are surely doomed.

The singularity, posted 19 May 2000 at 12:42 UTC by Excalibor » (Apprentice)

This is an interesting topic and reminds me of Vernor Vinge's Singularity. There is a lot of info about it on the Internet; see Singularity.

I find the argument attractive; it's unanthropocentric enough for me, though many people still address it in an anthropocentric way. One cannot deny that many wonders are to be seen if we survive long enough to get there without killing ourselves. But I don't see the point of forcing human-like minds onto machines. After all, there are a lot of animals out there which have survived much, much longer than we have... and many of them are very intelligent (whales, dolphins, ant nests as superorganisms, etc.) and adaptive, which is the intelligent response to evolutionary pressure.

Actually, the bottom-up approach of simple robots with simple behaviours that produce complex emergent behaviour is a good indicator that intelligent machines will probably have to evolve from stupid little minds. If that's so, even with Lamarckian evolution, which is far faster than Darwinian evolution, they'll have to go through lots of problems and a changing environment. That means the final result (whatever "final" may mean here) will probably be very different from our minds.
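To make the emergence point concrete, here is a minimal sketch in Python of the classic example, Conway's Game of Life (the glider seed below is an arbitrary illustrative choice): every cell obeys one trivial local rule, yet structured behaviour emerges that the rule nowhere mentions.

    # Conway's Game of Life: complex emergent behaviour from one simple
    # local rule, applied uniformly to every cell.
    from collections import Counter

    def step(live_cells):
        """Advance the set of live (x, y) cells by one generation."""
        # Count live neighbours for every cell adjacent to a live cell.
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 live neighbours; survival on 2 or 3.
        return {
            cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)
        }

    # Seed with a "glider": the rule, applied blindly, makes this
    # five-cell pattern crawl diagonally across the grid forever.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(4):
        print(generation, sorted(cells))
        cells = step(cells)

Nothing in the rule says "move diagonally", yet the glider does; in the same spirit, nothing in a simple robot's behaviour needs to say "be intelligent" for interesting collective behaviour to appear.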

On the other hand, we are probably living through the start of a new mode of thought (hmm... a Lévi-Strauss term, so it's probably badly translated from the Spanish modo de pensamiento). After the original savage mode (let's call it type A) and the domesticated mode current since the Neolithic (type B), which refer to the conception of space in our minds, the release of hypertextual technologies and easy-to-access worldwide communications may be creating a new mode (type C?), where our perception of space is as unimaginable to us as the current one was to our Paleolithic ancestors. The Internet, in particular, releases our minds from physical space and gives us many more dimensions and points of view...

Type A showed a perception of man within nature, as a whole part of it, and vital space was organized accordingly, with very interesting, recurring structures called mámoas or túmulos: relatively big (depending on the period) tombs like small hills, with dolmens or other funerary lithic structures at the core, originally covered with a shell of stones. These served to mark the easier paths through a given landscape, and this has been verified in many places in the NW corner of Spain and in Uruguay (cerritos).

On the other hand, the modern type B is a domestication of space to our service... it started with agriculture in the Neolithic and is the one we have today...

Type C will probably see a very different space, customized to the individual in a manner never possible before... who knows?

I'm enjoying this thread a great deal, and I hope I made some sense.

thanks,

More articles, posted 19 May 2000 at 18:57 UTC by ping » (Master)

Here are some more interesting references to articles about dealing with the Singularity.

Yet more material, posted 19 May 2000 at 19:11 UTC by ping » (Master)

Here are a couple of other examples of the thinking that has been going on about this topic.
