
What I learned from Newcomb's paradox

On Rationally Speaking (one of the best blogs around with regard to the comments*), Julia Galef wrote a post about Newcomb's paradox:
You’re presented with two boxes, one open and one closed. In the open one, you can see a $1000 bill. But what’s in the closed one? Well, either nothing, or $1 million. And here are your choices: you may either take both boxes, or just the closed box.

(...)

[T]hese boxes were prepared by a computer program which, employing advanced predictive algorithms, is able to analyze all the nuances of your character and past behavior and predict your choice with near-perfect accuracy. And if the computer predicted that you would choose to take just the closed box, then it has put $1 million in it; if the computer predicted you would take both boxes, then it has put nothing in the closed box.
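
To make the payoff structure concrete, here is a minimal expected-value sketch; the 99% accuracy figure is my own illustrative stand-in for "near-perfect", not a number from Julia's post:

    # Expected payoffs in Newcomb's problem when the predictor is right
    # with probability p (p = 0.99 is an illustrative assumption).
    def expected_payoffs(p):
        # One-box: with probability p the predictor foresaw it, so the
        # closed box holds $1,000,000; otherwise it is empty.
        one_box = p * 1_000_000 + (1 - p) * 0
        # Two-box: with probability p the predictor foresaw it, so the
        # closed box is empty and you get only the open $1,000; otherwise
        # you get both the $1,000 and the $1,000,000.
        two_box = p * 1_000 + (1 - p) * 1_001_000
        return one_box, two_box

    print(expected_payoffs(0.99))  # approximately (990000, 11000)

On raw expected value, one-boxing wins for any accuracy above roughly 50.05%; the two-boxer's retort is that the boxes are already filled by the time you choose, which is precisely what makes it a paradox.
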
This is a very interesting problem for several reasons. One, it touches upon the notion of free will vs. determinism. Here's a comment by Q:
This paradox is really stupid... "Assume you have no choice. Then what is your choice?"

If the prediction of the computer is perfect, there is no such thing as a strategy or a choice anyway, there is no question, no answer and no paradox.
But then further down, ClockBackward comments that this ability to predict the future does not annihilate free will:
By the way... even if the predictor can predict what I am going to do with near 100% accuracy, that doesn't imply that I don't have any "choice" or "free will" regarding what I do. One way to think about this is to suppose that the predictor has the ability to time travel. Its prediction method could be as follows: Put money into both boxes and then (before the game starts) time travel into the future to see what decision I make during the game, and then time travel back and (again, before the game starts) remove the money from the closed box if I choose both boxes in the future. The predictor in this case is just figuring out my choice, not taking my choice away. Of course, time travel may not be possible, and the time travel idea may have other theoretical difficulties, but hopefully this illustrates the point that near perfect predictability does not eliminate the possibility of "choice". In a similar vein, just because a friend of mine can predict with near 100% accuracy that I will choose chocolate over vanilla, that doesn't imply that I'm not making a genuine choice.
So, apparently, different people view the influence of determinism on free will very differently. Personally, I side with Q on this question: if the computer can predict perfectly (not just near-perfectly), then there is no choice, though we may feel that we are making one. In that case, hopefully you were predetermined to be a one-boxer. I'm also not convinced that arguments involving travel backwards in time are ever sound, because my view is that sending information back in time is a logical impossibility. But again, the interesting lesson for me here is that people think very differently about free will.

Another interesting thing, which I have encountered several times before, is the idea that a "rational" choice need not take into account what we (rationally) know about the human psyche. Here's Julia's example of a real-world problem analogous to Newcomb's paradox:
Now here's the real-life analogy I promised you (adapted from Gary Drescher's thought-provoking Good and Real): imagine you're stranded on a desert island, dying of hunger and thirst. A man in a rowboat happens to paddle by, and offers to transport you back to shore, if you promise to give him $1000 once you get there. But take heed: this man is extremely psychologically astute, and if you lie to him, he'll almost certainly be able to read it in your face.

So you see where I'm going with this: you'll be far better off if you can promise him the money, and sincerely mean it, because that way you get to live. But if you're rational, you can't make that promise sincerely — because you know that once he takes you to shore, your most rational move at that stage will be to say, “Sorry, sucka!” and head off with both your life and your money. If only you could somehow pre-commit now to being irrational later!
What does this ignore? Further down in the comments, James says:
I would dispute that fleeing once you reach land is the rational choice. If the result of fleeing is that you will die, how is loosing your life a better payoff than loosing $1,000 dollars?
And Julia's answer:
No, you don't die if you flee -- once you're at the point where you're trying to decide whether to flee, you've already been saved. Once you're on dry land, your choices are between fleeing (and saving your money) or paying the guy.
But, but... How do we know what happens as a consequence of fleeing? Julia just assumes that after fleeing, that's the end of the story (I realize that the dying refers to not being saved by the fisherman in the first place). However, real life is not like that. It may be that after fleeing, the fisherman will never be able to affect your future life in any way, in which case you could say you'd be in the clear, a thousand dollars richer (or less poor, which psychologists have found is even better). However, this can never be known for sure (unless you're Newcomb's computer, I suppose). But who knows? Maybe he will get upset and hunt you down. Maybe he will tell everyone he knows about your misdeed, and you will get a bad reputation.

The reason many humans would pay him the thousand dollars is that we're "honest people", and the reason we're mostly honest is that we know that not being honest can have dire consequences. In the past, when people lived in much smaller groups, this was certainly true: if we cheated someone who had saved us, then everyone would soon know, and our reputation (and thus fitness) would suffer.

It's fun to construct problems like Newcomb's paradox that highlight these issues, but what really astonishes me is that psychologists and economists run experiments with real people and conclude that people make what seem like irrational choices even when they know for certain that no one else will ever learn whether they cheated. In game-theory settings where the participant can earn money by either cooperating with or cheating another participant (often not even a real human but a computer program, though this is not divulged to the participant; when I took part twice, I was both times almost certain that there was no real human at the other end of the line), the researchers are at least sometimes surprised that people make the seemingly irrational choice of sharing money with the opponent, even when they "know" that they will only play once and never meet the opponent again. The participant is told that this is the case, and may accept it rationally. However, the choices we make are not based only on rational thought, but on our instinctive feelings, too. And you simply cannot make those instincts believe that you will never meet your opponent again, because nothing in our past experience has ever shown us that this is certain, or even very likely. The second interesting lesson for me is that many clever people completely fail to grasp this point: "rational" is rarely that straightforward.
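
As an aside, the "seemingly irrational" verdict in those experiments typically rests on a payoff structure something like a one-shot prisoner's dilemma. The numbers below are my own illustrative assumption, not from any particular study; they are only meant to show why cheating looks like the textbook-rational move when the game really is played once:

    # A one-shot cooperate/cheat game with illustrative payoffs (in dollars).
    # Key: (my move, opponent's move) -> (my payoff, opponent's payoff)
    payoffs = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "cheat"):     (0, 5),
        ("cheat",     "cooperate"): (5, 0),
        ("cheat",     "cheat"):     (1, 1),
    }

    # Whatever the opponent does, cheating pays more in this single round:
    # 5 > 3 if they cooperate, and 1 > 0 if they cheat. That is why one-shot
    # cooperation counts as "irrational" on paper, and why our
    # reputation-trained instincts cooperate anyway.

In a repeated game with gossip and reputation, of course, the calculation flips, which is exactly the point of the paragraph above.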

In summary, the reason we would pay the fisherman the money we promised him is that we are honest, and that we'll feel guilty if we don't. And we are honest and feel guilty because that ensures that we treat each other fairly, which ultimately works out better for ourselves, because we live in groups of people whom we will meet again, some sunny day.

* As opposed to, say, Pharyngula, in which the comments are mostly dreck. It's been years since I paid any attention to them whatsoever.

10 comments:

  1. I was put off by the "not paying the guy is the rational choice" line as well. I'm glad you articulated that point.

  2. Yeah, cheating the guy seems very shortsighted, doesn't it.

  3. Alilum fur alles (August 15, 2010, 2:08 PM)

    Your point about Pharyngula is spot on as well.

  4. This is very similar to the argument I made in the polygamy thread in discussing "pragmatism" with Dr. Arend. Only a very short-sighted and narrow view of rationality would say you should run.

    Wouldn't most of us be wracked with guilt if we cheated a person who saved our life? Granted, the guilt itself may be irrational, but welcome to being a member of H. sapiens. Taking it as a given that one is, most likely, a non-sociopathic human -- with all of the irrational concerns that go along with that -- the most rational choice given those conditions might very well be to pay the man, to avoid being consumed by guilt!

  5. Even for a person who wouldn't feel guilty, it would still be a good choice to pay up as promised, for the very reason that we normally feel guilty when we don't. Gossip, rumor, ostracism, revenge, etc.

  6. BTW, I'm afraid I fall in the camp where I don't fully understand/buy into the thought experiment. More details about the nature of the prediction are necessary.

    My argument would be that from a purely rational perspective, if the machine is even reasonably accurate, you should always choose just the one box. Of course, there's marginal utility to take into account (a 100% chance of $1000 may in practice be better than a 10% chance of $1 million, if you can only do the trial once and you really need a thousand bucks!) but even still, if it was, say, 99% accurate, then take the one box. You've got a 99% chance of getting a million bucks then, right? Seems trivial to me.

    But I get the impression that the thought experiment is supposing something more than this... that, like, if you pick the one box "insincerely" (whatever that means), it doesn't count. I don't really understand that. If there were no computer doing the predicting, you would always take both boxes unless you're an idiot. Right? So if the computer is predicting whether you would have taken only one box if you didn't know the computer was fudging things, then always take both boxes, unless you know you are really dumb.

    I guess I just don't get it...

  7. I am more puzzled by the idea that there is a paradox in the first place, as far as the machine is concerned.

    Imagine there is a machine that predicts the future correctly: you make your choice between taking the box with a million dollars or going home with $1000. There is no paradox, and free will wasn't violated in the first place.

    It is simply a mechanism that follows from your action; it's just that in the story an event was placed in the past which could equally have happened in the present.

    About the boat story: the man in the boat is asking you something. Let us imagine he knows you perfectly well; he asks the question anyway, even if he knows you will defect (otherwise the story would not have happened). Now imagine you actually say "yes, I'll pay you", even though you know you will run. What will happen? To keep you out of the boat, the man in it simply makes another decision, again not violating any causality or creating a paradox.

    If the man in the boat knew in advance, then there was no choice for you in the first place; done. It is like God putting you on the stand and asking: will you sin? Since there is no free will, and since God knows in advance, there was no choice; you end up in heaven or hell because it was predetermined...

    Cheers Arend

  8. I'm working on a little research project to test responses to this problem. If you have an opinion about the problem, please take my survey.

    http://www.surveymonkey.com/s/YQ3LSMM

    It's only 4 questions!

  9. Nice survey.

    What I don't understand is why anyone would not take the opaque box, since that is the only way to get the $1,000,000.

    You could argue that me trusting you is the same as believing on faith... but that is not even close to the point. $1,000,000 against a loss of maybe $1000 is an actual chance, with given probabilities. Nothing more, nothing less. Whereas believing in God is throwing any rhyme or reason or probability overboard. It is like arguing that you will always cheat, so not entering the game will give me $10,000,000? There is no point in believing that, or anything like it, either.

    Cheers Arend

  10. Heh, whenever someone claims to understand my decision-making process to a high degree (substantially greater accuracy than 50%), and I need to make a binary choice to confuse this entity, I outsource my decision-making process to at least bring things back to an even keel.

    I.e. assign 'take 1 box' and 'take 2 boxes' to the two faces of a coin and then flip it.

    No matter how good this computer is at predicting my decision-making algorithms, it hasn't been designed to figure out the result of a coin toss. If its accuracy was 90%, it'd drop drastically to 50%, which maximizes my chances of getting a million.

    If I couldn't be bothered to toss a coin, I'd just take both boxes, since apparently the closed box will always be empty under a perfect prediction protocol.

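For what it's worth, here is a minimal simulation of the coin-flip strategy from the last comment. It assumes that against a randomized choice the predictor can do no better than a 50/50 guess; how the predictor handles randomizers isn't specified in the thought experiment, so that fallback is my assumption:

    import random

    # Newcomb's game when the player decides by flipping a fair coin.
    # Assumption: the predictor is right only 50% of the time against the coin.
    def simulate_coin_flip(trials=100_000, accuracy_vs_coin=0.5):
        total = 0
        for _ in range(trials):
            choice = random.choice(["one_box", "two_box"])
            if random.random() < accuracy_vs_coin:
                prediction = choice  # predictor guessed the coin correctly
            else:
                prediction = "two_box" if choice == "one_box" else "one_box"
            # The closed box holds $1,000,000 only if one-boxing was predicted.
            closed = 1_000_000 if prediction == "one_box" else 0
            total += closed if choice == "one_box" else closed + 1_000
        return total / trials

    print(simulate_coin_flip())  # averages out around $500,500

That average beats committed two-boxing against an accurate predictor, but it still gives only a 50% shot at the million, well short of the roughly 99% a sincere one-boxer gets from a 99%-accurate predictor.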
