A Beautiful Post
February 28, 2003 2:49 PM
The Ultimate Game. Game theory was applied extensively by US foreign policy-makers during the Cold War, and many would credit those "moves" with the triumph of the West. But can it work now? Are rogue states and terrorists "rational actors?" Are we seeing a classic two-player game playing out with the US and Iraq? What does it even mean to "win" in the post-Soviet era? If these theories interest you, try these online simulations.
A direct application of game theory and the cold war: Balance of Power.
posted by Ogre Lawless at 3:17 PM on February 28, 2003
I normally hate posts like this, but [this is good]. Thanks.
posted by y6y6y6 at 3:48 PM on February 28, 2003
If you're interested in this sort of thing, there's some discussion of game theory and its detractors, particularly relating to the validity of the idea of 'rational actors' and the use of GT in the Cold War, in the book Against The Gods: The Remarkable Story of Risk. FWIW, I vaguely remember the author citing a paper that credited the US' 'moves' during the CW with the broad proliferation of world-ending nuclear weaponry.
I was just reading about this last night. One of the most interesting bits was about game theory's sort of fundamental question ('how would you choose best for yourself given rational play by your competitors') and how rational you can realistically expect your competitors to be. The book has summaries of all these experiments that show that humans, both expert and naive in a given area, are hugely non-rational. Now that I type that, it sounds pretty obvious, but this is one that I remember:
These two guys (named Kahneman and Tversky) posed this series of questions to their test group(s):
Imagine a new disease breaks out in your little island community of 600 folks or something, and you have two treatment choices:
A: 200 of your people will be saved.
B: 33% probability that everyone will be saved, 67% that everyone will die.
In this test, 72% of people asked chose treatment A.
Next question: You manage to get off that rock before all the craziness breaks out and find yourself a new 600 person island. Wouldn't you know it, a new disease is breaking out there. You are bad luck. In any case, you again have two treatment options:
C: 400 of the 600 people will die.
D: 33% probability that no one will die, 67% that everyone will.
78% of surveyed subjects chose D in this case. What gives? If human rationality behaved the way it does in game theory, and 200 certain deaths was the preferable, rational choice in the first scenario, it should also be preferred to the same alternative in the second. The neat, consistent ordering of preferences that GT in politics and economics relies on doesn't seem to exist. That's just one of the bits in the book. It didn't say much more about the details, as I recall, and I apologize if I've made some fundamental error, but it's a pretty cool book. There are a lot of other examples that aren't based on wording issues that more directly display what those two guys called 'failure of invariance' of preferences (i.e. (A is-preferable-to B) and (B is-preferable-to C) doesn't imply A is-preferable-to C).
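In code (my own sketch, not from the book), the point is that all four options have identical stakes; exact fractions are used so the gamble works out to precisely 200 expected survivors:

```python
from fractions import Fraction

POPULATION = 600

def expected_survivors(certain=None, p_all_live=None):
    """Expected survivors for a sure-thing or an all-or-nothing treatment."""
    if certain is not None:
        return certain
    # gamble: everyone lives with probability p, otherwise everyone dies
    return p_all_live * POPULATION

a = expected_survivors(certain=200)                 # "200 will be saved"
b = expected_survivors(p_all_live=Fraction(1, 3))   # 1/3 all live, 2/3 all die
c = POPULATION - 400                                # "400 will die" = 200 saved
d = expected_survivors(p_all_live=Fraction(1, 3))   # the same gamble, reframed

assert a == b == c == d == 200  # identical expected outcomes; only the framing differs
```

Yet majorities pick A in the first framing and D in the second.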
posted by jeb at 3:51 PM on February 28, 2003
Game theory was also applied to NASCAR in this First Monday article (via /.), you know, for those of us who wouldn't understand otherwise.
And here's the Slate article about the First Monday paper, for those of us with shorter attention spans.
posted by pitchblende at 4:10 PM on February 28, 2003
Just so you all know, the people most critical of game theory and its limitations are the game theorists themselves. Empirical experiments like the one you describe have long diverged from the very sharp predictions made by theory, and most of the advances in game theory have been results of trying to reconcile the two.
Economists are loath to give up rationality, or to grant that people do not optimize, because they don't think people will keep making errors systematically once those errors are known. But they also realize that people cooperate and trust far more often than the theory says they should. This has led to refinements of the rationality concept, such as bounded rationality and sequential rationality. To the effect that, if other players are not quite rational, it is rational for you to be not quite rational, and that if the history of play doesn't adhere to rationality, maybe you shouldn't be rational in the future.
So don't think that the problems of rationality are ignored or unknown in the profession. Quite the contrary.
posted by dcodea at 4:13 PM on February 28, 2003
One of the founding fathers of game theory, John von Neumann (who some say was the model for Dr. Strangelove), was very concerned about the sticky issue of humans acting logically.
He was driven to create this discipline out of frustration at the success of Gödel's Theorem, which problematized the ideal of a consistent and complete set theory, a project towards which von Neumann had been working with Hilbert. Gödel's Theorem, like Heisenberg's Principle, represented another telling defeat for strict determinism. Even so, when humans were involved — as in game theory applied to economic strategies — von Neumann treated all the elements of a system as if they could be described through formal logic. His mathematical method seemed driven by deeper impulses than the sheer power of logic. For him, the struggle between chance and determinism seemed like an elemental duel between good and evil.
Sound familiar (evil empire, axis of evil, etc.)? One of John von Neumann's more startling--and influential--ideas was "preventative war." Basically, he wanted to bomb the Soviets first--back in the '50s. (Oh yeah, he also helped invent the nuclear bomb.)
posted by _sirmissalot_ at 4:18 PM on February 28, 2003
Cool! As a half-assed practitioner of game theory myself, let me note that you omitted what is to me the jazziest part of game theory.
Signaling and informational models talk about the role that information plays -- what can I infer from your behavior, and how can that help me? The classic example is the beer/quiche game -- a bully is sitting at a bar looking for someone to beat up. A newcomer walks in, who might be a Wimp or might be a Tough -- the bully can't see that, though. All he can see is what the newcomer orders, which can be (you guessed it) beer and quiche. Toughs prefer beer and Wimps prefer quiche, though neither of them wants to get in a fight with the bully.
Signaling models are about what the bully knows after he sees the newcomer order. The usual answers are that if he sees you order quiche, he knows that you're a Wimp and beats you up -- but if he sees you order beer, he can't be sure of what type you are. Toughs order beer because they like it, and some Wimps order beer to try not to get beaten up -- this is called signal jamming.
This pops up in all sorts of places; any time someone is trying to give you an impression of themselves. What does it mean if your blind date shows up and brings you a rose? Maybe he's nice, maybe he wants a little somethin'-somethin'. What does it mean if he brings you a dead puppy? That you should run away, now. The same logic (positive information is junk, negative information is useful) turns up in campaigning, which is one reason why we see negative ads -- they actually carry more information than an ad with me telling you my wonderful plans and policies.
Then there are the reverse -- screening models -- and the related world of strategic information transmission models, which ask about what you know when the car repairman says that your car needs an $800 repair. But I'll stop waxing rhapsodic now and just tell you that it's the COOLEST THING EVER!!! except for the radio-controlled blimp.
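Here's a quick sketch of the bully's reasoning (my own illustrative numbers, not from any textbook version of the game): a Bayesian update after observing "beer," assuming every Tough orders beer and some fraction of Wimps jam the signal:

```python
def p_tough_given_beer(prior_tough, p_wimp_orders_beer):
    """Posterior probability the newcomer is Tough, given a beer order.
    Assumes every Tough orders beer (they like it)."""
    p_beer = prior_tough * 1.0 + (1 - prior_tough) * p_wimp_orders_beer
    return prior_tough / p_beer

# Half the bar's newcomers are Tough; 40% of Wimps order beer to dodge a beating.
post = p_tough_given_beer(prior_tough=0.5, p_wimp_orders_beer=0.4)
print(round(post, 3))  # 0.714 -- beer raises, but doesn't settle, the bully's belief

# Quiche, by contrast, fully reveals: only Wimps ever order it here.
```

The more Wimps jam the signal, the less the beer order tells the bully.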
posted by ROU_Xenophobe at 4:29 PM on February 28, 2003
Ha! Check this out. It's an account of an experiment done back in Nash's day. Hi-Larious. Funnier if you know game theory, I think.
posted by dcodea at 4:31 PM on February 28, 2003
As far as rationality goes, rational-choice theories in general work well when you can appeal to some sort of evolutionary pressure that will drive out actors who consistently behave irrationally.
So in run-of-the-mill economics, you might be able to expect rationality from firms, because firms that don't behave rationally either change their tune or get competed out of the market (usually/eventually). Likewise, we might model the interactions of political elites or candidates with rat-choice theory, because candidates who campaign in irrational ways or who consistently behave irrationally tend to lose elections to people who do a better job of it.
But you wouldn't really want to model an individual's vote choice as a truly rational decision -- if you screw up and vote for the wrong person, there is no consequence whatsoever to you unless your vote happens to make or break a tie (the odds of which are effectively zero).
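(Rough back-of-the-envelope check, my own sketch: the chance that the other voters split exactly evenly, each voting for a candidate independently with probability 1/2, computed in log space since the binomial terms underflow floats.)

```python
from math import lgamma, log, exp

def p_exact_tie(n_other_voters):
    """P(the n other voters split exactly 50/50), each voting A with prob 1/2."""
    n = n_other_voters
    if n % 2:
        return 0.0
    # log C(n, n/2) + n*log(1/2), in log space to avoid float underflow
    log_p = lgamma(n + 1) - 2 * lgamma(n // 2 + 1) + n * log(0.5)
    return exp(log_p)

print(p_exact_tie(1_000_000))  # roughly 8e-4, and that's the best case (p = 1/2)
```

With any realistic lean away from 50/50, the tie probability collapses toward zero.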
In the Kahneman/Tversky example, you'll note, there is no consequence whatsoever; it's just an opinion question. And there's an abundant body of opinion research telling us that how a question is worded (or framed) has a profound effect of how people respond to the question. Now, if you actually presented hospitals with these choices, consistently over years and years, I'd wager that you'd eventually see hospitals behaving consistently, since the ones that chose badly would get sued out of existence.
posted by ROU_Xenophobe at 4:37 PM on February 28, 2003
dcodea- I just meant that that stuff was new to me. This book is written for a very broad audience and it was just summing up little bits of papers by game theorists and economists and so on. In my brief encounters with that material in college (in computer science and economics), I just sort of took the economic idea of rational behavior as one of those 'this isn't quite *real*, but it's a reasonable assumption to work from' type things. The book though made me realize that it's much farther from workable than I'd thought.
Can you recommend any more books about 'bounded rationality' and so on that are meant for people that... well, don't know anything about economics or game theory? I thought it was really neat. Do these other ideas of rationality apply even when playing actual games? I was thinking about that the other day, because I know that certain people I used to play poker with make consistent breaks with rational behavior, and knowing that, it is most rational for me to play in a way that would be sub-optimal given rational competition. I'm sure these newer definitions of rationality are of great interest to economists, but what about people who write game programs, or auction bots? Is there formal theory about this stuff?
posted by jeb at 4:41 PM on February 28, 2003
As far as rationality goes, rational-choice theories in general work well when you can appeal to some sort of evolutionary pressure that will drive out actors who consistently behave irrationally.
There were a bunch of examples about lifelong professional investors, doctors choosing among possible treatments, people selecting radiation vs. surgery as a cancer treatment, and so on where all of those pressures were in place (possibly not the patients) and the actors still consistently behaved in a way contrary to the principles of classical rationality. I'm sure 'irrational' isn't the right word...but not von Neumann-type rational.
posted by jeb at 4:46 PM on February 28, 2003
John von Neumann (who some say was the model for Dr. Strangelove)
Though more likely Teller, Kissinger, Kahn or Szilard.
posted by vacapinta at 4:46 PM on February 28, 2003
It's also important to note that for game theory, being rational doesn't mean you're smart or sane. All it means is that you're doing things that you think will get you what you think you want.
Jeff Dahmer was rational -- he wanted success in life, so he did what he thought would bring it: he killed and ate people and made little altar-ey things out of their parts. He was wrong, because he had seriously whacked beliefs about how the world works, but what he did was rational given those beliefs.
Jeb: your old poker buddies might well have been playing rationally, but with different payoff structures than you thought they had -- they might have wanted to make sure you lost money, or they might have been more concerned with relative gains than absolute and structured their preferences accordingly. Alternatively, they might have behaved seemingly-irrationally as part of "mixed strategy" -- randomizing their behavior to make it less predictable, so they're harder to exploit than you are. Or they may have just been irrational, the twisted freaks.
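The mixed-strategy point can be made concrete with the textbook matching-pennies game (my example, not the poker game itself): against any predictable strategy there's a pure best response, but the 50/50 randomizer can't be exploited at all.

```python
def best_response_payoff(opponent_p_heads):
    """Matcher's expected payoff per round against an opponent who plays
    Heads with probability p (win +1 on a match, lose 1 otherwise)."""
    p = opponent_p_heads
    payoff_if_heads = p * 1 + (1 - p) * -1   # matcher plays Heads
    payoff_if_tails = (1 - p) * 1 + p * -1   # matcher plays Tails
    return max(payoff_if_heads, payoff_if_tails)

# A fully predictable player is exploited for 1 per round;
# the 50/50 randomizer concedes nothing in expectation.
assert best_response_payoff(1.0) == 1.0
assert best_response_payoff(0.5) == 0.0
```

So seemingly random play at the poker table can be exactly the rational move.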
posted by ROU_Xenophobe at 4:49 PM on February 28, 2003
Jeb: sure, people are still partly irrational all the way down. But modeling them as rational actors is cheap and easy (you can get software to solve an awful lot of games for you), and gets you farther than you were before, and with a model that you can hold in your head. It's a close-enough-for-government-work thing, and sometimes rat-choice isn't even that good (like with vote choice, frex)
posted by ROU_Xenophobe at 4:52 PM on February 28, 2003
Economists have always viewed rationality as just-this-side-of-absurd when it came to expecting it from individuals. Rationality, to economists, means a) people know what choices they can make, b) people know what their goal is, and c) people know how their choices affect getting their goal. This is a lot to ask of someone. A rational chess player knows what the goal is and what the moves are, but can he tell how any given move at any point affects checkmating his opponent? If he were rational, he would. And the theories based on this sort of rationality make very sharp predictions. For example, chess has a unique perfect equilibrium strategy. How do I know? The game is finite, and such a strategy exists for any finite game. Fat lot of good that'll do you if you play chess.
So economists know that there are no "really" rational actors out there. And for a long time rationality was a concept used only in aggregate; i.e. if people are rational, the economy will act like this. Until the early eighties, economists had no good theory for 2-person interactions, and the predictions of the existing theory (the theory of Nash and whatnot) were very much at odds with observation and intuition.
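The finite-game point can be sketched mechanically (a toy of my own devising, obviously nothing like chess-scale): backward induction solves any finite game tree under perfect play, which is exactly why the equilibrium strategy exists and exactly why it's useless advice for a real chess player.

```python
def solve(node, maximizing=True):
    """Value of a finite zero-sum game tree under perfect play
    (backward induction, alternating max and min at each level)."""
    if isinstance(node, (int, float)):   # leaf: terminal payoff to the maximizer
        return node
    values = [solve(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 toy game: the maximizer picks a branch, then the minimizer a leaf.
tree = [[3, 12], [2, 8], [14, 1]]
print(solve(tree))  # 3 -- the unique value this tiny game is "solved" to
```

For chess the same recursion is well-defined but astronomically beyond computing.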
posted by dcodea at 4:55 PM on February 28, 2003
I never thought of it that different-payoff-structure way. I figured, 'it's poker, the point is to get the most money' but maybe they looked at it differently. It seems to make sense. I think one kid I knew valued the appearance of being a ballsy player more than winning the money, so given that, he was certainly rational.
posted by jeb at 4:57 PM on February 28, 2003
Though more likely Teller, Kissinger, Kahn or Szilard.
Well, I can Google too--but it was probably a composite of some combination of all of them.
posted by _sirmissalot_ at 5:09 PM on February 28, 2003
O.K. -- you asked for it, guys: a little Kubrickian game theory re: the rationality of actors when total annihilation of life on earth is at stake (long comment, sorry)
Hello? Hello, Dimitri? Listen, I can't hear too well, do you suppose you could turn the music down just a little? Oh, that's much better. Yes. Fine, I can hear you now, Dimitri. Clear and plain and coming through fine. I'm coming through fine too, eh? Good, then. Well then as you say we're both coming through fine. Good. Well it's good that you're fine and I'm fine. I agree with you. It's great to be fine.
(laughs)
Now then Dimitri. You know how we've always talked about the possibility of something going wrong with the bomb. The bomb, Dimitri. The hydrogen bomb. Well now what happened is, one of our base commanders, he had a sort of, well he went a little funny in the head. You know. Just a little... funny. And uh, he went and did a silly thing.
(listens)
Well, I'll tell you what he did, he ordered his planes... to attack your country.
(listens)
Well let me finish, Dimitri. Let me finish, Dimitri.
(listens)
Well, listen, how do you think I feel about it? Can you imagine how I feel about it, Dimitri? Why do you think I'm calling you? Just to say hello?
(listens)
Of course I like to speak to you. Of course I like to say hello. Not now, but any time, Dimitri. I'm just calling up to tell you something terrible has happened.
(listens)
It's a friendly call. Of course it's a friendly call. Listen, if it wasn't friendly, ... you probably wouldn't have even got it. They will not reach their targets for at least another hour.
(listens)
I am... I am positive, Dimitri. Listen, I've been all over this with your ambassador. It is not a trick.
(listens)
Well I'll tell you. We'd like to give your air staff a complete run down on the targets, the flight plans, and the defensive systems of the planes.
(listens)
Yes! I mean, if we're unable to recall the planes, then I'd say that, uh, well, we're just going to have to help you destroy them, Dimitri.
(listens)
I know they're our boys.
(listens)
Alright, well, listen... who should we call?
(listens)
Who should we call, Dimitri?
(listens)
The people...? Sorry, you faded away there.
(listens)
The People's Central Air Defense Headquarters. Where is that, Dimitri?
(listens)
In Omsk. Right. Yes.
(listens)
Oh, you'll call them first, will you?
(listens)
Uh huh. Listen, do you happen to have the phone number on you, Dimitri?
(listens)
What? I see, just ask for Omsk Information. I'm sorry too, Dimitri. I'm very sorry.
(listens)
Alright! You're sorrier than I am! But I am sorry as well. I am as sorry as you are, Dimitri. Don't say that you are more sorry than I am, because I am capable of being just as sorry as you are. So we're both sorry, alright?
~
posted by matteo at 9:39 PM on February 28, 2003
summaries of all these experiments that show that humans, both expert and naive in a given area, are hugely non-rational
this is also a (questioned) assumption in economics - see "the market experience" by lane, for example.
posted by andrew cooke at 6:57 AM on March 1, 2003
posted by Stan Chin at 2:54 PM on February 28, 2003