The following is a slightly modified version of a post to the now-defunct discussion group ASP (Association for Systematic Philosophy), a group without which Wikipedia would probably not exist.
Newcomb's paradox, named after its creator, physicist William Newcomb,
is one of the most widely debated paradoxes of recent times. It was first
made popular by Harvard philosopher Robert Nozick. The following is
based on Martin Gardner's and Robert Nozick's Scientific American
papers on the subject, both of which can be found in Gardner's book
Knotted Doughnuts. The paradox goes like this:
A highly superior being from another part of the galaxy presents
you with two boxes, one open and one closed. In the open box
there is a thousand-dollar bill. In the closed box there is
either one million dollars or there is nothing. You are to
choose between taking both boxes or taking the closed box only.
But there's a catch.
The being claims that he is able to predict what any human being will
decide to do. If he predicted you would take only the closed
box, then he placed a million dollars in it. But if he
predicted you would take both boxes, he left the closed box
empty. Furthermore, he has run this experiment with 999 people
before, and has been right every time.
What do you do?
On the one hand, the being's track record makes it fairly obvious
that if you choose to take only the closed box you will get one
million dollars, whereas if you take both boxes you get only a
measly thousand. You'd be stupid to take both boxes.
On the other hand, at the time you make your decision, the
closed box already is empty or else contains a million dollars.
Either way, if you take both boxes you get a thousand dollars
more than if you take the closed box only.
As Nozick points out, there are two
accepted principles of decision theory in conflict here. The
expected-utility principle (which weighs each possible outcome by
its probability) argues that you should take the closed box only.
The dominance principle, however, says that if one strategy
is always better, no matter what the circumstances, then you
should pick it. And no matter what the closed box contains,
you are $1000 richer if you take both boxes than if you take
the closed one only.
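To make the conflict concrete, here is a rough sketch in Python of the two
calculations side by side. The 0.999 accuracy figure is my own assumption,
read off the being's 999-for-999 record; it is not part of the paradox itself.

    # Expected utility vs. dominance, assuming a predictor accuracy of 0.999.
    ACCURACY = 0.999
    MILLION, THOUSAND = 1_000_000, 1_000

    # Expected-utility principle: condition the contents of the closed box
    # on the choice you make.
    eu_one_box = ACCURACY * MILLION                    # correct prediction -> million
    eu_two_box = (1 - ACCURACY) * MILLION + THOUSAND   # wrong prediction -> million, plus the open box

    # Dominance principle: compare outcomes within each fixed state of the box.
    for contents in (MILLION, 0):
        assert contents + THOUSAND > contents          # two-boxing is ahead in both states

    print(round(eu_one_box), round(eu_two_box))        # 999000 2000: one-boxing wins on expectation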
One can make the argument for taking both boxes even more vivid
by changing the setup a bit. For instance, suppose that the closed box
is open on the side facing away from you, so that you can't see its contents
but an experiment moderator can. The moderator is watching you decide
between one box and both boxes, and the money is there in front
of his eyes. Wouldn't he think you are a fool for not taking
both boxes?
Some proposed solutions are attempts to show that the
situation as presented cannot occur. For instance, some say that
it is impossible to predict human behavior with this kind of
accuracy. That may very well be true, but even if it is
physically impossible, that is not a satisfactory
solution to a logical problem. Provided it is logically
possible, we are still faced with a paradox.
Martin Gardner seems to be of the opinion that the situation is
logically impossible. Basically he argues that, since the paradox
presents us with two equally valid but inconsistent solutions, the
situation can never occur. And he implies that the reason it is
a logical impossibility has to do with paradoxes that can arise
when predictions causally interact with the predicted event. For
instance, if the being tells you that he has predicted you will have
eggs for breakfast, why couldn't you decide to have cereal
instead? And if you do have cereal, then did the being really predict
correctly? He may very well have predicted correctly in the
sense that, had he not told you about it, he would have been
correct. But by giving you the information, he added something
to the equation that was not there when he made his prediction,
thereby nullifying it. *
So, just as there can be no barber who shaves all and only those who
do not shave themselves, there can be no predictions that causally
interact with the predicted events. Gardner is certainly right
about this. What I don't think he is right about is applying it to
Newcomb's paradox. The being isn't telling you whether you will choose
both boxes or not, so no self-defeating interaction is involved.
One might argue that there is a causal interaction at another
level. The being is predicting that you will make a decision, so what
happens if you refuse to do anything at all? This problem can,
however, be circumvented very easily. The being can predict what you
will do after he's told you of the experiment, and not have
any contact with you (direct or indirect) between the time of
his prediction and the time of the experiment. If you decide not
to participate, he can predict you will do just that (and what
he puts in the closed box is then irrelevant).
Although I disagree with Gardner, the notion of a prediction
causally interacting with the predicted event plays a significant role
in the difference between my view and Nozick's. Let's turn now to
Nozick's analysis.
One possible solution that Nozick considers is the following:
The dominance principle is not valid in Newcomb's paradox
because the states ("1 million is placed in the box" and
"nothing is placed in the box") are not probabilistically
independent of the actions ("take both" and "take only closed
box"). The dominance principle is acceptable only if the states
are probabilistically independent of the actions.
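To see why independence matters, here is a quick sketch (the probabilities
are illustrative assumptions only): if the state of the closed box carries the
same probability whichever action you take, two-boxing comes out exactly $1000
ahead and dominance settles the matter; once the probability shifts with your
choice, that fixed edge no longer decides anything.

    MILLION, THOUSAND = 1_000_000, 1_000

    def expected(p_million_given_choice, take_both):
        # Expected payoff, given P(million in the closed box | your choice).
        return p_million_given_choice * MILLION + (THOUSAND if take_both else 0)

    # Independent states: the same probability applies whichever action you take,
    # so dominance holds -- two-boxing is better by exactly $1000.
    for p in (0.5, 0.9, 0.001):
        assert expected(p, take_both=True) == expected(p, take_both=False) + THOUSAND

    # Dependent states (the Newcomb setup): the probability shifts with the choice,
    # and the fixed $1000 edge no longer decides the comparison.
    print(expected(0.001, take_both=True) < expected(0.999, take_both=False))   # True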
Nozick disregards this solution by means of a counter-example:
Suppose there is a hypochondriac who knows that a certain gene
he may have inherited will probably cause him to die early and
will also make him more likely to be an academic than an
athlete. He is trying to decide whether to go to graduate school
or to become a professional basketball player. Would it be
reasonable for him to decide to become a basketball player
because he fears that, if his decision were to go to graduate
school, that would mean he probably has the gene and will therefore
die early? We certainly would not think that is reasonable. Whether he
has the gene or not is already determined. His decision to
become a basketball player will not alter that fact. And yet,
here too, the states ("has the gene" and
"does not have the gene") are not probabilistically independent
of the actions ("decides to go to graduate school" and "decides
to become a basketball player").
Both Gardner and Nozick conclude (though for different reasons)
that, if they were faced with the situation presented by the paradox,
they would take both boxes.
My solution to the paradox:
A paradox occurs when there are apparently conclusive reasons supporting
inconsistent propositions. In the case of Newcomb's paradox, we have two arguments
(both of which seem equally strong) for making opposite choices. The question is
whether the paradox succeeds in making the opposing arguments equally
strong. If it doesn't, then there actually is no paradox (or, to put it another
way, the paradox will have been resolved). I don't think the two arguments
are in fact equally strong, for, given the setup of the paradox, one can find
a reasonable explanation for the alien being's ability to predict correctly.
And in that case, the argument for taking the closed box is stronger than that
for taking both boxes.
In order to explain why taking only the closed box is the more reasonable
decision, let's first consider what it means to predict something. Prediction
can mean at least two different things. There's scientific prediction,
where someone has observed similar conditions many times and
predicts the outcome of a situation based on this experience
(and on the assumption of some principle of uniformity in
nature). This is how you predict that if you let go of your pencil it
will fall, and how the weatherman predicts (though usually less
successfully) what tomorrow will be like. And then there's
prescience, where someone supposedly "senses" the future.
Nostradamus isn't supposed to have been just a really good weatherman; he
is supposed to have foreseen the future. This second kind of
prediction is the equivalent of information traveling back in
time. Now, whether the prediction is scientific or prescient, the solution to the
paradox is essentially the same. But since I don't accept prescience, and because I don't
think that is how the paradox is usually understood, I will limit my
explanation to scientific prediction only.
If the being predicts in the manner of a scientist, that means that
there is a certain state of affairs, A, which holds at some
point in time prior to your decision and the prediction, and
which causes both. This connection between the prediction and
the decision is what prevents your actions from being
probabilistically independent of the states of the box. And it
is realizing this that makes it rational to take the closed box
only (i.e., it is what invalidates the dominance principle).
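Here is a small simulation sketch of that picture. The 50/50 disposition, the
1% noise, and the being's perfect reading of A are all assumptions of mine, not
part of the paradox: a prior state A fixes both the prediction and, almost
always, the choice, and the expected payoffs come apart without any backward
causation.

    import random

    random.seed(0)
    MILLION, THOUSAND = 1_000_000, 1_000

    def trial():
        disposition_one_box = random.random() < 0.5    # the prior state of affairs A
        prediction_one_box = disposition_one_box       # the being reads A off directly
        flip = random.random() < 0.01                  # your choice also flows from A, with 1% noise
        choice_one_box = disposition_one_box != flip
        closed_box = MILLION if prediction_one_box else 0
        payoff = closed_box if choice_one_box else closed_box + THOUSAND
        return choice_one_box, payoff

    results = [trial() for _ in range(100_000)]

    def average(one_box):
        payoffs = [p for c, p in results if c == one_box]
        return sum(payoffs) / len(payoffs)

    print(round(average(True)))    # roughly 990,000 for the one-boxers
    print(round(average(False)))   # roughly 11,000 for the two-boxers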
But now what about the hypochondriac? Why isn't his decision to
play basketball just as acceptable? Here is where the
interaction between the prediction and the event predicted comes
in. In the case of Newcomb's paradox, there is no causal
interaction (as already explained above). In the case of the
hypochondriac there is. If the hypochondriac chose to go to
graduate school without any knowledge of the gene's influence on
behavior, then we would be justified in saying that his decision
was in fact evidence for the presence of the gene. But because
he knows about the gene's effects, he is in exactly the same
situation as the person who is told by the being what he will have for
breakfast. He now has an additional datum, one that was not part of
the original gene-based prediction and which in effect nullifies that
prediction. Thus his choice to become a basketball player does
not make him any less likely to have the gene.
Nozick's and Gardner's choice to take both boxes, on the other
hand, makes them much less likely to make a million.