Bill Joy thinks the world will end
February 17, 2001 6:02 PM Subscribe
Bill Joy thinks the world will end unless we stop doing certain kinds of research right now. I think Bill Joy is full of crap, but he has valid points. (More inside)
Self-replicating, intelligent robots will emerge in the laboratory, then merge with people, then, conceivably, subjugate and even replace the human species.
I've seen this too many times in sci-fi movies and shows. Sort of like the '50s, when you had giant mutant ants taking over the world, all thanks to nuclear technology.
Technology is not evil, and technology will not kill people; people, and their use of technology, will kill people. We have lived in the shadow of The Bomb for decades now, all the while benefiting from nuclear energy. It will be exactly the same for the next n decades - only the technology will change.
posted by mkn at 6:31 PM on February 17, 2001
The right approach is to go ahead and develop the technology and get its benefits, and when someone tries to misuse it, deal with that then.
The problem with that is that the technologies in question are so powerful that misusing them, either deliberately or not, could end up having disastrous consequences.
posted by davidgentle at 8:06 PM on February 17, 2001
David, what do you think should be done, then?
posted by Steven Den Beste at 8:49 PM on February 17, 2001
Bill Joy started the decline of humanity when he created vi.
posted by holgate at 8:50 PM on February 17, 2001
The only types of things I'm at all worried about are certain types of nanobots and (as the papers called them) super viruses used as weapons. You may notice a theme here: self-replication. I don't like the idea of technology that can survive on its own without humans, I think.
posted by holloway at 9:52 PM on February 17, 2001
I give humans 200-300 years more existence, but our "extinction" won't happen the way Joy thinks it will. Once we can really manipulate the genetic code of humans for specific purposes - and this will happen, it CAN'T be stopped - we will "evolve" ourselves into a form - or more likely forms - that will be unrecognizable as human. The "humans" of 200 years from now will be more different from the people of today than the people of today are from apes and chimpanzees.
I imagine genetics moving to permanently alter the species into several distinct subsets that probably won't even be able to interbreed with each other.
This won't have to be coerced. Future parents will choose genetic "enhancements" with the best of intentions, their children will do the same (with newer and "better" traits to choose from), as will their children, and their children...
Gattaca at the beginning, Brave New World in the middle, who knows what at the end.
I don't necessarily like this idea, but I don't see any way around it either. It could be slowed, sure. There may be trends and fads that surround the idea of "pure" genetic authenticity. But I think the only real thing that could derail this type of scenario would be a complete meltdown of our technological capabilities - war, disease, some sort of worldwide apocalyptic scene.
Genetics isn't just in its infancy as a science, it's still embryonic. The level of change will outpace every other technological revolution so far, including the electrical revolution.
Things are going to get WEIRD!
posted by edlark at 10:52 PM on February 17, 2001
Heh. Holgate, that's awesome. Exactly my thought when I read this story! ;-)
But seriously, I think not only is Joy right on, but he's failed to consider (at least publicly) a deeper downside here. There's this sort of "ratcheting effect" of the transformative mind going on, and it's going to get far, far worse over time. If you look at the mind as an instrument of transformation which, as it gets more powerful, is increasingly capable of reflecting itself onto reality... then the problem becomes humanity progressing toward either death or the ultimate in self-imprisonment.
In other words, as the ability of our minds to manipulate the universe increases (probably exponentially), IF mankind is to survive, that power must be equally opposed by some force that will prevent some sort of fatal transformation. Therefore, it will become necessary to have greater and greater control of information first and individuals second. As Joy points out, fewer and fewer people will be trusted with scientific knowledge.
But here's the catch... if the mind-blade gets *too* sharp, then NOBODY can be trusted with certain kinds of knowledge, or perhaps if a breakthrough lies in some obvious direction then nobody can be trusted PERIOD! So if mankind is going to survive, not only will future generations have to contain the power of *existing* knowledge, but they will also have to know everything about what anyone might be *learning* or *thinking* as well. This situation is much darker than the situation Joy is talking about... because it looks to be ENTIRELY UNAVOIDABLE. The smarter we get, the more we're going to have to increase policing and surveillance to ensure our own survival.
So with the forces set up the way they are, Mankind will either die in Bill Joy's apocalypse or enter the Brave New World of total surveillance and thought control. Neither fate looks particularly good and both underscore the fact that we are all prisoners of our own minds.
The only way "out" for future generations would be a truly fundamental transformation in the spiritual nature of our minds (i.e., everyone becomes "enlightened"). There's certainly no precedent for any trend like that, but nothing's *impossible*...
posted by muppetboy at 11:20 PM on February 17, 2001
The problem with that is that the technologies in question are so powerful that misusing them, either deliberately or not, could end up having disastrous consequences.
Not at all like the benign technologies of the past (e.g., nuclear weapons).
posted by gleemax at 11:37 PM on February 17, 2001
"The sky is falling, the sky is falling."
If it is indeed the case that technology-driven catastrophe is the inescapable consequence of human scientific endeavor (and I don't believe it is), then the surest remedy is for us to understand the technology as well as we can, so that we might work to counteract it. Steven's point is a good one--science will progress. It's up to people of character and intellect to stay on top of it.
posted by shylock at 1:15 AM on February 18, 2001
I'm not sure I really grasp why the transformation of humanity into something unrecognizable to modern man is supposed to be a bad thing.
I don't believe our tools will kill us. I think the distinction between us and our tools is going to disappear.
I don't think our humanity is bound up in our cells, in our meat and marrow; I think that if our descendants have buckytubes for bones and can spread their nanoskin into solar sails and drift to the stars -- they'll still dream, and love and hate, and make art.
Humanity looks to me right now like a terrified caterpillar, having nightmares of butterflies.
posted by webmutant at 3:40 AM on February 18, 2001
I'd already written a response to this on my page when I saw the MeFi thread; oh well. It's here if you want it.
The short version: I don't think what Joy wants is even possible. Technology moves forward, and eventually we WILL know these things that scare him so much. The choice is not IF the research will happen, but whether it will happen in the public eye, reviewed by other scientists and regulated by laws.
posted by Nothing at 3:50 AM on February 18, 2001
I thought Jaron Lanier had the best response to Joy's concern. He critiques what he calls 'cybernetic totalism' and I think he's right in many ways.
If we're cybernetic totalists, then Joy will worry us. If not, we'll see things differently.
At first I was worried, too. Then I learned to relax and love the bomb.
But seriously, I'm still in favor of caution, lest our ability outstrip our ethical capacity - i.e., we're far more advanced in technology than we are as a society that can have ethical discussions and arrive at some kind of acceptable ethical consensus about these things.
posted by Sean Meade at 5:45 AM on February 18, 2001
What we need is total surveillance by an all-powerful and benevolent leader who is not corruptible. I volunteer.
posted by quirked at 7:23 AM on February 18, 2001
Forget it, quirked, I'm not ready to retire yet...
The thing that I find interesting in this debate from a philosophical standpoint is the assumption that the current structure, attributes, and skills of Homo sapiens are simply the be-all and end-all of the evolution of our species. It's easy to imagine Bill the Neanderthal: "No, Urk! No! Fire bad! Fire bad! Many manchilds burned!" Just because we think we know how evolution works now is not a reason to believe we should be directing evolution now.
posted by m.polo at 10:14 AM on February 18, 2001
Steven: I don't start from the assumption that our extinction would be a bad thing. We should take responsibility for our actions. If we are going to do stuff that brings about the end of all things, then we should at least be aware that it might happen and be able to deal with it. Also: just because something is the end of us doesn't mean it isn't the beginning of something else.
Gleemax: Nuclear weapons are very powerful in a given situation, but it's fairly easy to stop one from going off. The problems that Joy is pointing out are self-replicating. The whole point of nanobots is their ability to replicate; without it they are useless, but with it they are dangerous.
Okay, questions: How much technology do we need? What are these technologies going to do for us that we can't already do? What is the point of a continuing march toward "greater" technology if it does nothing for us?
posted by davidgentle at 2:57 PM on February 18, 2001
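[A minimal back-of-the-envelope sketch of the self-replication point above. The figures below (replicator mass, total biomass, doubling time) are rough assumptions chosen only for illustration, not values from the article or the thread, but they show why replication rather than raw power is the worry: unchecked doubling reaches planetary scale in on the order of a hundred generations.]

```python
# Illustrative sketch only; the constants below are assumptions, not measurements.
ASSEMBLER_MASS_KG = 1e-15     # assumed mass of a single nanoscale replicator
EARTH_BIOMASS_KG = 5e14       # rough order of magnitude for Earth's total biomass
DOUBLING_TIME_HOURS = 1.0     # assumed time for the population to double

mass = ASSEMBLER_MASS_KG
generations = 0
while mass < EARTH_BIOMASS_KG:
    mass *= 2                 # each generation, every replicator copies itself once
    generations += 1

print(f"{generations} doublings (~{generations * DOUBLING_TIME_HOURS:.0f} hours "
      f"under these assumptions) to exceed Earth's biomass")
```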
I think Bill Joy is full of crap, but he has valid points.
This would seem to mean that he is not entirely full of crap. Let's say, approximately, half-full of crap. Which, when viewed from a different angle, would be half-empty of crap. And when we weigh this against the nearly infinite Badness of Consequences, it seems like maybe we should at least *consider* the possibility that he is right.
posted by rodii at 8:37 PM on February 18, 2001
What are these technologies going to do for us that we can't already do? What is the point of a continuing march toward "greater" technology if it does nothing for us?
It'll be cool.
posted by kindall at 8:40 PM on February 18, 2001
David, it's never possible to predict what a really new technology will do for us. When the first laser was built around 1960, could they have predicted CDs or laser scalpels in surgery? When the transistor was created, did they have any idea what would happen with LSI? Could anyone in 1953 have predicted "Unreal Tournament"? Could Marconi have predicted the cell phone or radar?
The one thing we can be utterly certain of when a brand new field (like computers or genetic engineering) really hits its stride is that it will never cease to amaze us. Any concept that it won't create really new and very useful things should be dispensed with right now. They'll happen; it's just that we can't predict what they are, and indeed may not recognize them immediately even when they happen.
That's particularly the case when you're dealing with things as obviously versatile as computers or genetics.
posted by Steven Den Beste at 9:46 PM on February 18, 2001
Steven: I'm not saying that we shouldn't pursue new technology. But we should be damn sure that what we are doing is going to be worthwhile if it's hugely risky.
Personally I think that, assuming it's not controlled by self-interested polycorps, nanotech will be very interesting and useful. But to compare it to computers is missing the point. No one could predict a point when computers would actually be a bad thing; it takes a fair old amount of schlocky sci-fi to think of a circumstance where computers are going to destroy us all. But with nanotech a tiny mistake could just wipe everything out, and it's not difficult to imagine how that could happen. Maybe with extreme safeguards it's worth doing, but the attitude of some people - that we should go ahead and do it because we're human and it's our job to do this stuff - strikes me as rather irresponsible.
Kindall: Well, yeah... for the eighteen-tentacled necktie creatures that we spawn.
posted by davidgentle at 8:14 PM on February 19, 2001
There simply isn't any way to predict whether something will be useful until after the fact, David.
posted by Steven Den Beste at 8:34 PM on February 19, 2001
Okay then. Let's take a notional technology that involves dissecting live children. It may, at some point, have amazing benefits for us all. Or it may kill us all. Should we do it?
posted by davidgentle at 4:16 PM on February 20, 2001
It probably doesn't matter what people think should be done because "people" don't have the power to organize around ANYTHING at 6+ billion...
posted by muppetboy at 11:34 PM on February 20, 2001
Lets take a notional technology that involves disecting live children. ... Should we do it?
Flip answer: It depends. Which children, specifically? If it's the Olsen twins or Macaulay Culkin, I say go right ahead.
Serious answer: So far as I know none of the technologies under discussion have an obvious, immediate moral problem that would prevent them from being developed. Your analogy is faulty because your hypothetical technology is front-loaded with evil (well, "harm" is probably more accurate; I just thought the phrase "front-loaded with evil" looked pretty cool) in a way that, say, nanotechnology is not. It doesn't really map to anything in this discussion and doesn't strike me as very useful to understanding the issues involved.
posted by kindall at 12:10 AM on February 21, 2001
Here's the essential question I'm asking:
Should we pursue technology at whatever cost?
Well...should we?
posted by davidgentle at 7:45 PM on February 21, 2001
Here's the essential question I'm asking:
Should we pursue technology at whatever cost?
Well...should we?
----
And here's my essential point... the question makes no sense in the end, because there's no meaningful "we" in the world to make such a decision. Technology will be pursued along the chaotic path of the sum of the world's intelligences until the forces that drive all these decisions themselves change. To suggest that we can decide whether or not technology will be pursued is very much like asking what kind of weather we should decide to have tomorrow.
posted by muppetboy at 1:06 PM on February 22, 2001
Muppetboy: I take your point. But this whole debate is predicated on the assumption (Joy's assumption, that is) that we can. I agree that it's probably a false assumption, though.
posted by davidgentle at 8:33 PM on February 22, 2001
I'd say that the only obvious way to achieve success is through a fundamental transformation in the forces that are driving the process. At some point, mankind has to stop looking to transform the *external* world and start transforming his *internal* world. To make a crude physics metaphor... policy decisions don't have sufficient leverage over the problem to have any effect on the long-term course of events. So the question, if mankind really wants to succeed, is... do we have the capacity to reframe ourselves? To evolve memetically in a way that will avert the obvious coming disaster? The only other reasonable option for survival is for man to partly escape himself by escaping spaceship Earth...
posted by muppetboy at 11:29 PM on February 22, 2001
Are there any obvious upcoming technologies which should be preemptively suppressed? Joy thinks we should stop all genetic engineering, for instance, but I see the benefits of improved food crops and cures for diseases like cystic fibrosis, and I have to wonder if Joy would feel the same way if it was his kid who was sick, or if he was starving. It's easy to condemn someone you don't know to a horrible death.
And what about constitutional issues, for us Americans and Canadians (who have powerful constitutions)? Are we even legally capable of suppressing privately-financed research?
posted by Steven Den Beste at 6:08 PM on February 17, 2001