You’re presented with two boxes, one open and one closed. In the open one, you can see a $1000 bill. But what’s in the closed one? Well, either nothing, or $1 million. And here are your choices: you may either take both boxes, or just the closed box.

This is a very interesting problem for several reasons. One, it touches upon the notion of free will vs. determinism. Here's a comment by Q:
[T]hese boxes were prepared by a computer program which, employing advanced predictive algorithms, is able to analyze all the nuances of your character and past behavior and predict your choice with near-perfect accuracy. And if the computer predicted that you would choose to take just the closed box, then it has put $1 million in it; if the computer predicted you would take both boxes, then it has put nothing in the closed box.
This paradox is really stupid... "Assume you have no choice. Then what is your choice?" If the prediction of the computer is perfect, there is no such thing as a strategy or a choice anyway, there is no question, no answer and no paradox.

But then further down, ClockBackward comments that this ability to predict the future does not annihilate free will:
By the way... even if the predictor can predict what I am going to do with near 100% accuracy, that doesn't imply that I don't have any "choice" or "free will" regarding what I do. One way to think about this is to suppose that the predictor has the ability to time travel. Its prediction method could be as follows: Put money into both boxes and then (before the game starts) time travel into the future to see what decision I make during the game, and then time travel back and (again, before the game starts) remove the money from the closed box if I choose both boxes in the future. The predictor in this case is just figuring out my choice, not taking my choice away. Of course, time travel may not be possible, and the time travel idea may have other theoretical difficulties, but hopefully this illustrates the point that near perfect predictability does not eliminate the possibility of "choice". In a similar vein, just because a friend of mine can predict with near 100% accuracy that I will choose chocolate over vanilla, that doesn't imply that I'm not making a genuine choice.

So, apparently, different people view the influence of determinism on free will very differently. Personally, I side with Q on this question. If the computer can predict perfectly (not near-perfectly), then there is no choice, though we may feel that we are making one. In that case, hopefully you were predetermined to be a one-boxer. I'm not sure arguments about traveling backwards in time are ever sound, because sending information back in time strikes me as a logical impossibility. But again, the interesting lesson for me here is that people think very differently about free will.
Another interesting thing that I have encountered several times before is the idea that a "rational" choice can somehow exclude what we (rationally) know about the human psyche. Here's Julia's example of a real-world problem analogous to Newcomb's paradox:
Now here's the real-life analogy I promised you (adapted from Gary Drescher's thought-provoking Good and Real): imagine you're stranded on a desert island, dying of hunger and thirst. A man in a rowboat happens to paddle by, and offers to transport you back to shore, if you promise to give him $1000 once you get there. But take heed: this man is extremely psychologically astute, and if you lie to him, he'll almost certainly be able to read it in your face. So you see where I'm going with this: you'll be far better off if you can promise him the money, and sincerely mean it, because that way you get to live. But if you're rational, you can't make that promise sincerely — because you know that once he takes you to shore, your most rational move at that stage will be to say, “Sorry, sucka!” and head off with both your life and your money. If only you could somehow pre-commit now to being irrational later!

What does this ignore? Further down in the comments, James says

I would dispute that fleeing once you reach land is the rational choice. If the result of fleeing is that you will die, how is losing your life a better payoff than losing $1,000?

And Julia's answer:
No, you don't die if you flee -- once you're at the point where you're trying to decide whether to flee, you've already been saved. Once you're on dry land, your choices are between fleeing (and saving your money) or paying the guy.

But, but... how do we know what happens as a consequence of fleeing? Julia just assumes that after fleeing, that's the end of the story (I realize that they are talking about dying when not saved by the fisherman). However, real life is not like that. It may be that after fleeing, the fisherman will not be able to affect your future life in any way, in which case you'd be in the clear, a thousand dollars richer (or less poor, which is even better, psychologists have found). However, this can never be known for sure (unless you're Newcomb's computer, I suppose). But who knows? Maybe he will get upset and hunt you down. Maybe he will tell everyone he knows about your misdeed, and you will get a bad reputation.
The reason why many humans would pay him the thousand dollars is that we're "honest people", and the reason we're mostly honest is that we know that not being honest can have dire consequences. In the past, when people lived in much smaller groups, this was certainly true; if we cheated someone who had saved us, then everyone would soon know, and our reputation (and thus fitness) would suffer.

It's fun to construct problems like Newcomb's paradox that highlight these issues, but what really astonishes me is that psychologists and economists run experiments with real people, in which it is concluded that people make what seem like irrational choices despite the fact that they know for certain that no one else will ever know if they cheat. In game-theory settings where the participant can earn money by either cooperating with or cheating another participant (often not even a real human but a computer program, though this is not divulged; when I participated twice, I was both times almost certain that there was no real human at the other end of the line), the researchers are at least sometimes surprised that people make the seemingly irrational choice of sharing money with the opponent, even when they "know" that they will only play once and never meet the opponent again. The participant is told that this is the case, and may accept it rationally. However, the choices we make are not based only on rational thought, but on our instinctive feelings, too. And those instincts you just cannot convince that you will never meet your opponent again, because nothing in our past experience has ever shown us that this is certain, or even very likely. The second interesting lesson for me is that many clever people completely fail to grasp this point: that "rational" is rarely that straightforward.
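To see why the researchers call cooperation "irrational" here, consider a toy one-shot game in the prisoner's-dilemma mold. The dollar amounts below are hypothetical, not taken from any of the actual experiments; the point is just that for a pure payoff-maximizer, cheating is the better move no matter what the opponent does:

```python
# Toy one-shot cooperation game with hypothetical dollar payoffs.
# PAYOFFS maps (my move, opponent's move) to MY payoff.

PAYOFFS = {
    ("cooperate", "cooperate"): 3,  # both share: decent payoff for each
    ("cooperate", "cheat"):     0,  # I share, opponent cheats: I get nothing
    ("cheat",     "cooperate"): 5,  # I cheat a cooperator: best payoff for me
    ("cheat",     "cheat"):     1,  # both cheat: poor payoff for each
}

def best_response(opponent_move):
    """The payoff-maximizing move against a given opponent move."""
    return max(("cooperate", "cheat"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)])

# With these payoffs, cheating strictly dominates in a single play:
print(best_response("cooperate"))  # cheat
print(best_response("cheat"))      # cheat
```

In a one-shot game with these payoffs, "cheat" is the dominant strategy, which is exactly why cooperation by real participants looks irrational on paper. The argument above is that our instincts were tuned by repeated games with reputations, where cooperating stops being irrational at all.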
In summary, the reason why we would pay the fisherman the money we promised him, is that we are honest, and that we'll feel guilty if we don't. And we are honest and feel guilty because that ensures that we treat each other fairly, which ultimately works out better for ourselves, because we live in groups of people that we will meet again, some sunny day.
* As opposed to, say, Pharyngula, in which the comments are mostly dreck. It's been years since I paid any attention to them whatsoever.