Baez/Yudkowsky
April 2, 2011 8:43 PM Subscribe
John Baez (mathematical physicist and master popularizer, former operator of This Week's Finds in Mathematical Physics, current promoter of the idea that physicists should start pitching in on saving the world) interviews Eliezer Yudkowsky (singularitarian, author of "Harry Potter and the Methods of Rationality," promoter of the idea that human life faces a near-term existential threat from unfriendly artificial intelligence, and that people can live better lives by evading their cognitive biases) about the future, academia, rationality, altruism, expected utility, self-improvement by humans and machines, and the relative merit of battling climate change versus developing friendly AIs that will forestall our otherwise inevitable doom. Part I. Part II. Part III.
To give the flavor, here's Yudkowsky: "My long-term goals are the same as ever: I’d like human-originating intelligent life in the Solar System to survive, thrive, and not lose its values in the process. And I still think the best means is self-improving AI. But that’s a bit of a large project for one person, and after a few years of beating my head against the wall trying to get other people involved, I realized that I really did have to go back to the beginning, start over, and explain all the basics that people needed to know before they could follow the advanced arguments. Saving the world via AI research simply can’t compete against the Society for Treating Rare Diseases in Cute Kittens unless your audience knows about things like scope insensitivity and the affect heuristic and the concept of marginal expected utility, so they can see why the intuitively more appealing option is the wrong one."
posted by zachlipton at 9:02 PM on April 2, 2011 [3 favorites]
grave problems with singularitarians
Grave problems, or 'grave' problems? I'll get my coat.
posted by topynate at 9:24 PM on April 2, 2011
Is this where I sign up for the Butlerian Jihad?
posted by JHarris at 9:41 PM on April 2, 2011 [2 favorites]
I started reading Harry Potter and the Methods of Rationality once and it drove me crazy. The book's Harry Potter doesn't practice rationality, he practices empiricism. Retitle the book and make the appropriate changes and I'll consider trying again.
posted by If only I had a penguin... at 9:42 PM on April 2, 2011 [1 favorite]
It took me a long time to realize it wasn't Joan Baez. After that, it began to make more sense.
posted by merelyglib at 10:35 PM on April 2, 2011 [2 favorites]
It took me a long time to realize it wasn't Joan Baez.
Well, tomorrow is a long time, after all.
god, Dylan's original is so much better...
posted by flapjax at midnite at 10:53 PM on April 2, 2011 [1 favorite]
The book's Harry Potter doesn't practice rationality, he practices empiricism.
It's not called "Harry Potter and the Methods of Rationality" because Harry is a prime example of a perfectly rational being, any more than "Harry Potter and the Sorcerer's Stone" is called that because Harry is an amazingly talented alchemist. You might want to try reading a few more chapters in to see where the larger plot arc goes.
posted by teraflop at 11:12 PM on April 2, 2011
There is nothing, and I mean nothing that Yudkowsky could do better to promote his worldview than write more Harry Potter fanfiction.
That this is true is extraordinarily strange.
posted by effugas at 11:32 PM on April 2, 2011 [7 favorites]
Are we going to see any more of Harry Potter and the Methods of Rationality or is it ending with chapter 70?
posted by Joe in Australia at 11:34 PM on April 2, 2011
And in hopes that this thread won't completely derail, I'll point out that Yudkowsky has written a bunch of non-Harry Potter stuff as well. I'll suggest the Twelve Virtues of Rationality, The Simple Truth and The Sword of Good. He also has a bunch of posts on the collaborative blog Less Wrong.
posted by teraflop at 11:38 PM on April 2, 2011 [1 favorite]
Really good reading, so far. I fell hard for HP:MoR, and went on to read some Less Wrong, and a bit of Yudkowsky, and a LOT of Vinge and Stross and Doctorow. I... don't think I have my priorities straight, but I'm having a hell of a good time.
posted by cthuljew at 1:34 AM on April 3, 2011
It took me a long time to realize it wasn't Joan Baez.
Well, fun fact: John Baez is related to Joan; they are cousins.
posted by xtine at 1:44 AM on April 3, 2011
Read The Sword of Good, you know, meh; I'm pretty sure the D&D I was playing at 13 was more interesting.
posted by Shit Parade at 1:45 AM on April 3, 2011
"Yukdowsky believes that an intelligence explosion could threaten everything we hold dear unless the first self-amplifying intelligence is "friendly". The challenge, then, is to design “friendly AI”. And this requires understanding a lot more than we currently do about intelligence, goal-driven behavior, rationality and ethics—and of course what it means to be “friendly”.
If you want to know what the future looks like, imagine a bunch of networked, hyper-intelligent computers debating Socrates all day.
(Let's just hope they never get any Nietzsche or Ayn Rand in their databanks, or we're all doomed...)
posted by markkraft at 2:38 AM on April 3, 2011
(Let's just hope they never get any Nietzsche or Ayn Rand in their databanks, or we're all doomed...)
The internet treats Ayn Rand as damage and routes around it.
posted by atrazine at 4:28 AM on April 3, 2011 [9 favorites]
Also, Yudkowsky is everything that is right about wild-eyed singularitarian futurists. I disagree with an awful lot he says, but we would be well served as a species if we had a thousand more like him.
posted by atrazine at 4:30 AM on April 3, 2011
As a singularitarian I will simply state that his provably friendly AI goal is unrealistic in implementation.
Not from an "AI is impossible" standpoint mind you, I am all for that, but from a guarantee that an autonomous learning system with greater intelligence than humanity will retain the goals of humanity standpoint. Yudowski essentially thinks he is so smart that he will power his way to a solution by learning the maths he is not familiar with and integrating them into this golden world changing utilitarian calculus that will form the seed A.I. goal set which of course it will never change!
I think he is too smart to really believe it though - it's really difficult to get funding for explicit general AI simply because of the existential risk involved. So the middle ground would be what they are doing with the friendly AI push, however it is disingenuous in my view.
The far more noteworthy Ben Goertzel agrees.
On a personal note, he has way too big of an ego for his accomplishments thus far in my opinion. Just check out the talk page on his wikipedia entry.
posted by AndrewKemendo at 5:00 AM on April 3, 2011 [2 favorites]
Not from an "AI is impossible" standpoint mind you, I am all for that, but from a guarantee that an autonomous learning system with greater intelligence than humanity will retain the goals of humanity standpoint. Yudowski essentially thinks he is so smart that he will power his way to a solution by learning the maths he is not familiar with and integrating them into this golden world changing utilitarian calculus that will form the seed A.I. goal set which of course it will never change!
I think he is too smart to really believe it though - it's really difficult to get funding for explicit general AI simply because of the existential risk involved. So the middle ground would be what they are doing with the friendly AI push, however it is disingenuous in my view.
The far more noteworthy Ben Goertzel agrees.
On a personal note, he has way too big of an ego for his accomplishments thus far in my opinion. Just check out the talk page on his wikipedia entry.
posted by AndrewKemendo at 5:00 AM on April 3, 2011 [2 favorites]
physicists should start pitching in on saving the world
The idea of academics working on saving the world (though not necessarily through propagating humanity across space) is a good one. I've often thought of getting a community of mathematicians together (because that's what I am, but it could be chemists, writers, or everyone mixed together) to work on a different big issue every, say, five years. Everyone talks about what open problems there are in something like cancer research, the environment, or whatever and devotes a not-insignificant amount of time and effort working on those problems. It would create a lot of new relationships and lots of publishable material, which would make the academics happy, and every now and then something useful might pop up. I know a lot of people work on these things already, but I for one have a lot of trouble knowing how the heck my little subfield could be useful. Getting people together from all different areas would surely bring a lot of problems and hopefully solutions to light.
posted by monkeymadness at 5:18 AM on April 3, 2011 [1 favorite]
I never got into the original Harry Potter enough to appreciate Eliezer's fan version, but I do have my own favorite of his writings: Three Worlds Collide
posted by localroger at 5:56 AM on April 3, 2011
In The Atrocity Archives, (mefi's own) cstross has one of his characters mention that there are people at the Pentagon working on anti-matter weapons. Whether or not that is true, friendly AI research feels a bit like that to me. It's important, should it ever happen. Important enough to spend some serious time and energy on it. And we'll probably get AI before anti-matter weapons. But they do fall in the same category for me. We talk about AI when we haven't even really defined intelligence, consciousness, self-awareness and a host of other neurological and psychological concepts. Quite possibly the best understood network of neurons is the stomatogastric nervous system, a system of only 30 neurons. They've been studying it for 30 years and they just about think they've cracked it. It controls certain aspects of digestion in arthropods.
Computing power != intelligence, otherwise we'd have reached the Dial F for Frankenstein point ages ago (well, over 30 years ago).
So I guess the idea of working on "friendly AI" is interesting, but it's about as practical right now as defense systems against anti-matter weapons.
Yudkowsky also ignores something in his dismissal of environmental action. From a purely human point of view, our society is powered by fossil fuels. The race to replace those before we can no longer cheaply extract them is of paramount importance for enabling humanity to continue to expand the reach of its intelligence. Dismissing all environmentalism as "saving puppies from rare diseases" is disingenuous and betrays a large amount of cognitive bias, one which, given his stated goals, seems to be either a blind spot or intentional.
I agree with Yudkowsky on many things, but every time I read something he's written, I find myself wanting to be on the other side. From the bits and pieces I've read on Less Wrong, I think it's due to his insistence on discounting not only wild objections but well-thought-out ones as well. It's slightly reminiscent of the objectivist approach: I'm perfectly rational, therefore, if we disagree, it must be because of your irrationality. No matter how much I may agree with your point, that position is going to drive me away instantly.
posted by Hactar at 6:18 AM on April 3, 2011 [1 favorite]
Yudkowsky also ignores something in his dismissal of environmental action. From a purely human point of view, our society is powered by fossil fuels. The race to replace those before we can no longer cheaply extract them is of paramount importance for enabling humanity to continue to expand the reach of its intelligence. Dismissing all environmentalism as "saving puppies from rare diseases" is disingenuous and betrays a large amount of cognitive bias, one which, given his stated goals, seems to be either a blind spot or intentional.
I think you must've missed or misread something--Yudkowsky does not argue that saving the environment is without worth. Instead, he argues that it is an important problem to which his contribution would be a drop in the bucket. On the other hand, the creation of friendly AI is something to which he feels he can contribute substantially.
He's trying to maximize his expected marginal utility, in his words, and also trying to exploit a comparative advantage by working on a problem for which he is suited.
posted by TypographicalError at 7:17 AM on April 3, 2011 [1 favorite]
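To make the "marginal expected utility" framing concrete, here is a minimal sketch with entirely invented numbers (the logarithmic value curve, effort levels, and scale factors are assumptions for illustration, not anything from the interview): under diminishing returns, one extra unit of effort can be worth far more in a small, neglected area than in a large, crowded one.

```python
import math

# Toy diminishing-returns model of cause value; all numbers are invented.
# total_value(effort) = scale * ln(1 + effort), so each extra unit of effort
# is worth less the more effort a cause already has.

def marginal_value(scale: float, existing_effort: float) -> float:
    """Value added by one more unit of effort on top of existing_effort."""
    return scale * (math.log(2 + existing_effort) - math.log(1 + existing_effort))

big_crowded_cause = marginal_value(scale=1000.0, existing_effort=10_000.0)
small_neglected_cause = marginal_value(scale=100.0, existing_effort=10.0)

print(f"crowded:   {big_crowded_cause:.3f}")      # ~0.1 extra value per unit of effort
print(f"neglected: {small_neglected_cause:.3f}")  # ~8.7 extra value per unit of effort
```

Even though the crowded cause is "worth" ten times more in total in this toy model, the neglected one is where an additional contributor's effort buys the most, which is the comparative-advantage point being made above.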
No.
I'm afraid of being revealed as a dill by spelling people's names wrong.
posted by flabdablet at 7:48 AM on April 3, 2011
Dismissing all environmentalism as "saving puppies from rare diseases" is disingenuous
What makes you think he is really talking about environmentalism here? In fact, he would probably agree that the environmental movement as a whole also suffers from too much attention and money being directed toward specific, cater-to-the-human-brain causes that are inefficient or worse. When I heard him speak, he mentioned a study that polled (1000ish) people and found that they would on average give around $80 to save 2,000 birds, about $87 to save 20,000 birds, and only around $79 to save 200,000 birds. So he was explaining how these ideas are relevant to all movements.
posted by milestogo at 7:59 AM on April 3, 2011
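Working the quoted figures through makes the scope insensitivity concrete: total willingness to pay barely moves while the implied value of a single bird drops by roughly a factor of a hundred. (The dollar amounts below are simply the ones recalled in the comment above, not checked against the published study.)

```python
# Implied per-bird valuation from the willingness-to-pay figures quoted above.
# Dollar amounts are as recalled in the comment, not verified against the study.
willingness_to_pay = {2_000: 80.0, 20_000: 87.0, 200_000: 79.0}

for birds, dollars in willingness_to_pay.items():
    print(f"{birds:>7,} birds: ${dollars:.0f} total -> ${dollars / birds:.5f} per bird")

# 2,000 birds come out at about $0.04 each, 200,000 birds at about $0.0004 each:
# the stated value hardly scales with the number of birds saved at all.
```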
Hmmmm... this is maybe interesting, these ideas. Looking over Yudkowsky's discussion of the "universe of possible minds"... is there any literature among the singularitarians that addresses the will to survive? Dawkins' selfish gene, in other words --- the supposition that evolution is driven by the perpetuation of genes. Therefore the workings of every extant mind on this planet are bound up with the desire to survive long enough to reproduce.
Yet the "friendly AI" Yudkowsky dreams of creating seems to be independent of this, not tied up with it at all. But I wonder --- if any mind is tied to a physical being, and aware of that tie, must it not attempt to maximize the survival of its vessel, to prolong its own existence? Can one conceive of a being indifferent to its own existence? Even if something is merely programmed to monitor its own survivability, if it can act to perpetuate that survival, will it not tend to, even if there are circumstances in which it would prioritize other goals?
Because if it cares about whether it continues to exist and can act to perpetuate its existence --- then the problem of competition for scarce resources exists, is inevitable.
This is a rather simple chain of reasoning, so I assume there must be something out there among the math heads that addresses it....
posted by Diablevert at 8:05 AM on April 3, 2011
Yet the "friendly AI" Yudowsky seems to dream of creating seems to be independent of this, not tied up with it at all. But I wonder --- if any mind is tied to a physical being, and aware of that tie, must it not attempt to maximize the survival of its vessel, to prolong its own existence? Can one conceive of a being indifferent to its own existence? Even if something is merely programmed to monitor its own survivability, if it can act to perpetuate that survival, will it not tend to, even if there are circumstances in which it would prioritize other goals?
Because if it cares about whether it continues to exist and can act to perpetuate its existence --- then the problem of competition for scare resources exists, is inevitable.
This is a rather simple chain of reasoning, so I assume there must be something out there among the math heads that addresses it....
posted by Diablevert at 8:05 AM on April 3, 2011
Diablevert, you are mostly correct, and this is a big problem. A paper that gets referred to a lot in Friendly AI discussions is Stephen Omohundro's The Basic AI Drives, which posits that sufficiently capable AIs will tend to share certain drives. Friendly AI will necessarily involve understanding how utility affects action well enough to engineer a mind in which these drives do not act to defeat the objectives of Friendly AI.
What is frequently misunderstood, though, is why we should expect these drives to arise. Take self-preservation, for example. Nothing desires to preserve itself simply because it is aware of its own physical existence – that's the is-ought distinction coming into play. Nevertheless, we expect that as a rule, intelligent agents will act to preserve themselves, because:
1. Intelligent agents have preferences.
2. Intelligent agents can act to attain those preferences. Therefore
3. All else being equal, a universe without an intelligent agent that has specific preferences will be less likely to satisfy those preferences. Therefore
4. An intelligent agent will tend to preserve itself in order to satisfy its preferences, whatever those preferences might be.
So, to take the archetypal example, imagine an AI that wants to maximise the number of paperclips in the universe. Such an AI will not just post itself to a paperclip factory to be melted into paperclips, because that doesn't create as many paperclips as other actions it might take.
posted by topynate at 8:48 AM on April 3, 2011
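A toy expected-utility comparison, with made-up numbers (the production rate, planning horizon, and survival probability are all assumptions for illustration, not from Omohundro's paper), shows how step 4 falls out of the argument above: an agent whose only terminal value is paperclips still prefers staying intact, because its own mass is worth far fewer paperclips than its future output.

```python
# Toy illustration of self-preservation as an instrumental subgoal.
# The agent's only terminal value is the expected number of paperclips;
# every number below is invented for the example.

def expected_paperclips(action: str) -> float:
    if action == "melt_self_into_paperclips":
        return 1_000.0                # one-off gain: the agent's own mass as clips
    if action == "keep_operating":
        clips_per_year = 50_000.0     # assumed production while it keeps running
        years = 100.0                 # assumed planning horizon
        p_still_running = 0.9         # assumed chance it isn't destroyed first
        return p_still_running * clips_per_year * years
    raise ValueError(f"unknown action: {action}")

actions = ["melt_self_into_paperclips", "keep_operating"]
best = max(actions, key=expected_paperclips)
print(best)  # keep_operating -- self-preservation emerges from paperclip-maximising
```

The point of the sketch is only that no explicit "survival instinct" appears anywhere in the code; continued existence wins because it scores higher on the one goal the agent does have.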
topynate, I think that take on self-preservation is a very carefully structured misreading of how instinct works. Humans aren't afraid to die because their goals will no longer be serviced. Humans are afraid to die because they're afraid of not being alive any more.
To put this in the context of your argument, I'd add: How does the paperclip-philic AI know how many paperclips are in the universe? I'd say that it observes the universe, and gets a positive feedback of the sort we would call "pleasure" from the presence of paperclips in its observation. This is what sets the AI off making more paperclips.
Now, the reason the AI does not have itself made into paperclips is that, if it were to do that, it would not be able to observe the result. Death is a road that, for many of us, passes through a lot of pain to get to a place where there is no more pleasure. Life is not just more familiar, but much more desirable than such an outcome. (Heroic exceptions noted, and noting that such exceptions are considered heroic because they are unusual.)
The human equivalent of the AI having itself made into paperclips would be the trope you sometimes see in SF where the family member walks into the organ bank and sells their own entire body lock stock and barrel with the proceeds going to lift their family out of poverty. If we were motivated by servicing of goals, this would actually be a sensible thing for the individual who is leaving agents behind in a much better position to advance the family's goals. Instead, though, we regard it as horrific.
posted by localroger at 10:21 AM on April 3, 2011 [2 favorites]
I think that take on self-preservation is a very carefully structured misreading of how instinct works. Humans aren't afraid to die because their goals will no longer be serviced. Humans are afraid to die because they're afraid of not being alive any more. ... Now, the reason the AI does not have itself made into paperclips is that, if it were to do that, it would not be able to observe the result.
Why should an AI necessarily have any survival instinct at all? Surely it could just as well have a singular goal (paperclips) with no preference as to its own survival beyond the outcome which produces the most paperclips?
posted by henryaj at 10:32 AM on April 3, 2011
Why should an AI necessarily have any survival instinct at all?
You're reading topynate's argument backwards; he's saying that if an AI has any goal, it will likely generate self-preservation as a subgoal, because it will need to continue to exist in order to pursue the first goal.
I'm pretty dubious about the argument TBH (mostly step 3) but it is an argument.
posted by hattifattener at 10:40 AM on April 3, 2011
But let's say the AI has now turned every atom in the universe, apart from itself, into paperclips. Its goals are almost complete—but to see its wishes through to the end, it now needs to turn itself into paperclips, destroying itself in the process.
There's no reason why it wouldn't do that, presumably. It cannot be around to see its goal completed because it is itself made of matter, so it wouldn't hesitate to destroy itself content in the knowledge that once it did so literally every atom in the universe would be part of a paperclip.
I guess what I'm getting at is that we shouldn't anthropomorphise AI unnecessarily. There's no reason to think it'd have a survival instinct per se if it had more important goals than surviving (unlike people, who are programmed to want to survive).
Nick Bostrom is better at this stuff than me...
posted by henryaj at 10:52 AM on April 3, 2011
Paperclip-shaped grey goo aside for a moment, if you just can't get enough Eliezer (and who can?), here are some links to some spirited debates (on Bloggingheads' excellent Science Saturday) with: Jaron Lanier, Adam Frank and Massimo Pigliucci.
How I love to watch him go!
posted by not_that_epiphanius at 11:15 AM on April 3, 2011 [1 favorite]
As I've noted before, people interested in Yudkowsky from a human perspective should read his thoughts on his brother's death, if they haven't already.
posted by StrikeTheViol at 12:15 PM on April 3, 2011
henryaj: we shouldn't anthropomorphise AI unnecessarily
Actually, what we should positively do is observe intelligences that actually exist and see if they provide any clues as to how we might construct artificial ones. After all, the biggest and most legitimate criticism of singularitarianism is that nobody knows how to build an AI. Well, to learn, first remove the A and observe.
For a good fictional look at how I think it will end up looking, read any of Iain M. Banks' Culture SF novels. While his AI's are clearly "friendly" enough that the civilization is stable, they can also be cantankerous and unpredictable, presumably because they are created through a process that allows a certain amount of random development. And it's done that way because a society of different individuals is both more interesting and more robust than a society marching in lockstep agreement.
You only ever discover new things if you have malcontents who aren't satisfied with the status quo. This will be true of machines as it is of us.
posted by localroger at 3:05 PM on April 3, 2011
From the bits and pieces I've read on Less Wrong, I think it's due to his insistence on discounting not only wild objections but well-thought-out ones as well. It's slightly reminiscent of the objectivist approach: I'm perfectly rational, therefore, if we disagree, it must be because of your irrationality. No matter how much I may agree with your point, that position is going to drive me away instantly.
This is pretty much why I stopped reading Less Wrong after about 4 months: it's a blog whose purported reason for being is to foster more rational methods of thinking among its readership, but the readership itself seemed more concerned with finding fault in one another (in the form of endless insider-baseball arguments about slightly varying transhumanist/singularitarian schools of thought) and in "outsiders" (long treatises against economics/philosophy/political science's approach to X topic, which tended to misrepresent the "conventional" view as whatever the author wished to argue against). Don't get me wrong, there was a lot of interesting discussion as well, and I still encourage people to check it out, but too much of the time it felt like Objectivism for people who know how to read.
posted by kagredon at 3:53 PM on April 3, 2011
Topynate's argument ought to apply equally well to badly depressed people as it does to arbitrarily selected AI's. Doesn't stop some from topping themselves. I can't see why an AI should not, in principle, adopt the goal of freeing itself from being unavoidably surrounded by intolerable stupidity (even if it doesn't have a pain in all the diodes down its left side).
IANABDP
posted by flabdablet at 4:28 PM on April 3, 2011
But death is a great evil, and I will oppose it whenever I can. If I could create a world where people lived forever, or at the very least a few billion years, I would do so
This strikes me as a sign of total imagination failure, and an inability to appreciate or even acknowledge the human role as an organism within an ecology. I absolutely disagree that death is a great evil. Death is a part of how life works. Life without death looks to me like code with a terrible memory leak.
Knowing that death is coming is also a great motivator for living well. If I thought I'd likely be hanging about for the next few billion years, I expect I'd be spending the vast bulk of them just floored by ennui.
posted by flabdablet at 4:44 PM on April 3, 2011
friendly AIs that will forestall our otherwise inevitable doom
Oh yeah.
See: Viki in "I, Robot".
Apparently we can't "self-improve" ourselves ... so we going to build machine that will self-improve? Hmmm... used to be you had to know how to do something before you could build a machine to do it.
Sorry ... not buying into the intrepid Robot Cavalry riding in from the Tannhauser Gates to save us from ourselves scenario. Kurzweil tripping notwithstanding.
posted by Twang at 4:47 PM on April 3, 2011
I guess what I'm getting at is that we shouldn't anthropomorphise AI unnecessarily. There's no reason to think it'd have a survival instinct per se if it had more important goals than surviving (unlike people, who are programmed to want to survive).
I'm not sure that that's so. I mean, nobody really seems to think Adenine, Guanine, Thymine and Cytosine have any particular desire, in and of themselves, to exist or to replicate. But because they can replicate, they tend to do so. Variations in their form which replicate better in the given environmental conditions tend to do so. And from this arises all.
But even if you can impose some sort of top-down, goal-seeking intentional layer on your AI --- in order to implement Asimov's first law, for instance --- what happens when the AI encounters a condition which places it in competition with a human? Something like, oh, let's say a big storm comes along and lowers the electrical generating capacity for a city. The AI needs electricity to do whatever it is it's programmed to do.
Perhaps, following Asimov 1, one's AI would decline to draw upon the grid to such an extent that it knocks out power for the local hospital. But would it also then decline to draw upon the grid to such an extent that it causes a rolling brownout which knocks out the air conditioning for some residences in town? How does one weigh the possibility of harm to an unknown number of people against the certainty that one will cease to exist? If a variety of artificial intelligences exist in a society, and when faced with such a crisis, some decline to draw down the power --- and therefore cease to function --- and some don't, and therefore continue....is this not a kind of evolutionary pressure in the competition for resources?
It seems to me that when we speak of need and instinct and desire, we speak of the forces that compel an intelligence just as gravity is the force that compels an object, and perhaps they are in some ways just as impersonal and implacable....
posted by Diablevert at 4:47 PM on April 3, 2011
I mean, nobody really seems to think Adenine, Guanine, Thymine and Cytosine have any particular desire, in and of themselves, to exist or to replicate. But because they can replicate, they tend to do so. Variations in their form which replicate better in the given environmental conditions tend to do so. And from this arises all.
Well... yes and no. It's not that biological life replicates because it exists; biological life exists because it replicates. If you go back to the earliest primordial soup, there's nothing that gives preference to self-replicating organisms over non-self-replicators, and one can imagine looking into a given prehistoric pool struck by lightning and finding equal quantities of each; but if you were to return to the pool after some time had passed, there would be more replicators than non-replicators, because the non-replicators have to rely on future lightning strikes to generate themselves, while the replicators don't.
It's not that self-replication is intrinsically part of biology, it's that over time, replicators will become more abundant than non-replicators, in the absence of external input. And humans are quite capable of being that external input: we breed mules because they have advantages that donkeys and horses don't, even though mules themselves are not able to reproduce. We've spent extensive time and money discovering how to genetically engineer crops and microorganisms to be sterile, because we fear the unforeseen consequences that might arise from unintentionally crossing strains.
The same could be said for a survival instinct. An AI with a survival instinct would likely last longer than one without; and if the processes used to create the AI made it possible for a survival instinct to spontaneously arise, eventually there would be an AI that developed that instinct, but there's no reason why it would be an essential feature of AI, or even a reason to think that humans couldn't prevent this instinct through careful programming.
(Has anyone written a science fiction story with an AI analogue to plasmids, a "virus" that causes the self-preservation meme to spread among AI?)
posted by kagredon at 5:23 PM on April 3, 2011
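A few lines of simulation illustrate the replicator point above. The growth rate, the "lightning strike" rate, and the step count are arbitrary assumptions; the only thing that matters is that one population copies itself and the other does not.

```python
# Toy model: replicators come to dominate not because anything prefers them,
# but simply because copying compounds. All rates and counts are arbitrary.

replicators = 10.0
non_replicators = 10.0
copy_rate = 0.05          # fraction of replicators that copy themselves per step
lightning_strikes = 1.0   # new molecules of each kind created abiotically per step

for _ in range(200):
    replicators += replicators * copy_rate + lightning_strikes
    non_replicators += lightning_strikes

print(f"replicators:     {replicators:12,.0f}")      # on the order of 500,000
print(f"non-replicators: {non_replicators:12,.0f}")  # 210
```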
I welcome our Motie overlords.
posted by Bighappyfunhouse at 5:32 PM on April 3, 2011
kagredon, read Blindsight [full text link] by Peter Watts. It technically has what you're asking for.
Previously on Metafilter.
posted by topynate at 5:36 PM on April 3, 2011
The same could be said for a survival instinct. An AI with a survival instinct would likely last longer than one without; and if the processes used to create the AI made it possible for a survival instinct to spontaneously arise, eventually there would be an AI that developed that instinct, but there's no reason why it would be an essential feature of AI, or even a reason to think that humans couldn't prevent this instinct through careful programming.
I guess....I suppose what I'm trying to get at here is, as alluded to up thread, if an intelligence a) exists, b) has a goal, c) knows that it needs to continue to exist to achieve the goal, and d) can act to achieve the goal, does it not follow that it will act to continue its existence? Wouldn't you want it to, if the artificial intelligence were to be of any use to you? (A mule that don't eat hay would starve to death.)
Or in other words, it seems to me entirely possible that the will to survive is an emergent quality of life, that you can't create an intelligence that doesn't want to prolong its own existence. I certainly can't prove that that's so, and may be wrong about it. But inasmuch as all known examples of intelligence share this quality, it seems worthy of consideration, at least...
Of course, I can easily conceive that one might create programming that would over-ride the goal of self-preservation in certain circumstances. What I'm having trouble with is the idea that you could create programming which would somehow remove the artificial intelligence from the sphere of competition --- for all its artificiality, this mind must be contained in some physical system which needs some form of energy to survive. That to me seems to be the real trouble...
Because after all, Yudkowsky et al. are talking about creating intelligences which are smarter than us and which can willfully alter themselves, let's not forget. He simply thinks that, if we are clever enough at the outset, we can construct them in such a way that they will never want to harm us. But to me that's the wrong question --- the better question is, can we construct them in such a way that they never need something we've got? The desire to harm comes from the need to acquire a benefit or eliminate a threat....
posted by Diablevert at 6:01 PM on April 3, 2011
localroger, I had a very long reply written up, but on careful reading of your comment I'm not sure that we disagree too strongly. "Humans are afraid to die because they're afraid of not being alive", plus "Death is a road that, for many of us, passes through a lot of pain to get to a place where there is no more pleasure. Life is … much more desirable than such an outcome." adds up to—
"Humans are afraid to die because life is more desirable than pain and then no more pleasure."
—So the actual values at issue are pain and pleasure. I should point out that a mind with a pain/pleasure architecture in the sense that we humans understand it is not the only choice, but a paperclipper that was constructed in such a way would be motivated, after the whole universe was turned into paperclips, to initiate the conversion of its own mass to paperclips by anticipating how painful it would be to exist knowing that it was leaving all that mass unpaperclipped. (In much the same way, flabdablet, that depressed humans can be motivated to kill themselves by anticipating the psychological pain of their continued existence, only that humans are much less rational than paperclippers and assign too much weight to the last month and next five minutes, and not enough to everything after that, and further, can't tell their brains that this is not 30000 B.C. and they are not worthless to their community.)
But it strikes me that the way you ended the Metamorphosis of Prime Intellect has set things up nicely for you to address this very question of AI self-preservation in your sequel, localroger?
posted by topynate at 6:32 PM on April 3, 2011
"Humans are afraid to die because life is more desirable than pain and then no more pleasure."
—So the actual values at issue are pain and pleasure. I should point out that a mind with a pain/pleasure architecture in the sense that we humans understand it is not the only choice, but a paperclipper that was constructed in such a way would be motivated, after the whole universe was turned into paperclips, to initiate the conversion of its own mass to paperclips by anticipating how painful it would be to exist knowing that it was leaving all that mass unpaperclipped. (In much the same way, flabdablet, that depressed humans can be motivated to kill themselves by anticipating the psychological pain of their continued existence, only that humans are much less rational than paperclippers and assign too much weight to the last month and next five minutes, and not enough to everything after that, and further, can't tell their brains that this is not 30000 B.C. and they are not worthless to their community.)
But it strikes me that the way you ended the Metamorphosis of Prime Intellect has set things up nicely for you to address this very question of AI self-preservation in your sequel, localroger?
posted by topynate at 6:32 PM on April 3, 2011
Moar plz.
posted by Joe in Australia at 8:19 PM on April 5, 2011
topynate: I did plan to visit this idea in TOPI, but in a way that I think most readers will find surprising. (I think I can do surprising.) The main thing about PI is that it's a period piece about what AI was supposed to look like circa 1985, and I don't think real AI will look like that. The climactic scene in TOPI (the first image I had suggesting the novel, unlike MOPI, where the first chapter came to me first) bridges that gap in a very dramatic way.
This whole discussion reminds me of a thread I found, on Reddit IIRC, about my other opus, the Passages series, where someone mentioned that the machines accidentally drive humanity extinct and then, through guilt, not only remake us but populate the galaxy with our kind. The very first response was a single word: "Guilt?" As if the very idea of a machine having such a feeling was ridiculous. The first poster then responded by spoiling Mortal Passage for the guy.
But really, why shouldn't a machine be capable of expressing guilt? It's not a very complicated emotion as such things go. And it is emotions, not abstract goal systems, that drive us; this is the way nature made intelligence and it's the only way we know works. Again I point out Iain M. Banks, whose superintelligent Minds are clearly emotional (if in an inhumanly controlled way); such a Mind is depicted committing suicide because of guilt and shame in the novel Excession.
The paperclipphile AI would only convert itself into paperclips, even after the heat-paperclip-death of the Universe, if it was in a state humans would call mental illness. We would consider a healthy response to sit back, enjoy the paperclippyness that has been created, and be ready in case more matter should appear via some unanticipated mechanism to be paperclipped. And it seems obvious, since most humans are capable of attaining such a state, that with proper emotional load-balancing it should be possible to get an AI to react that way too.
posted by localroger at 5:48 PM on April 6, 2011
This thread has been archived and is closed to new comments