
Philosophy of Values

What Does Nozick's Experience Machine Argument Really Prove?

Eduardo Rivera-López
Torcuato Di Tella University
erivera@utdt.edu


ABSTRACT: Nozick's well-known Experience Machine argument can be considered a typical example of a successful argument: as far as I know, it has not been much discussed and has been widely regarded as conclusive, or at least convincing enough to refute the mental-state versions of utilitarianism. I believe that if his argument were conclusive, its destructive effect would be even stronger. It would refute not only mental-state utilitarianism, but all theories (whether utilitarian or not) that consider a certain subjective mental state (happiness, pleasure, desire satisfaction) to be the only valuable state. I shall call these theories "mental-state welfarist theories." I do not know whether utilitarianism or, in general, mental-state welfarism is plausible, but I doubt that Nozick's argument is strong enough to prove that it is not.


I

Nozick's well-known Experience Machine argument can be considered a typical example of a successful argument: as far as I know, it has not been much discussed and has been widely regarded as conclusive, or at least convincing enough to refute the mental-state versions of utilitarianism. (1) Indeed, I believe that if his argument were conclusive, its destructive effect would be even stronger. It would refute not only mental-state utilitarianism, but all theories (whether utilitarian or not) that consider a certain subjective mental state (happiness, pleasure, desire satisfaction) to be the only valuable state. I shall call these theories "mental-state welfarist theories." (2)

I do not know whether utilitarianism or, in general, mental-state welfarism is plausible. But I doubt that Nozick's argument is strong enough to prove that it is not.

This note tries to explain my doubts. Let us begin by briefly recalling the argument:

"Suppose there were an experience machine that would give you any experience you desired. Superduper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life experiences? [...] Of course, while in the tank you won't know that you're there; you'll think that it's all actually happening [...] Would you plug in?." (3)

II

According to a first interpretation of Nozick's argument, it proves (or attempts to prove) that we have strong reasons not to plug into the Machine. Such reasons could not be accepted by mental-state Welfarism. The point is whether it would be rational (or right) for us to plug in or not. That is why Nozick repeatedly poses the question: "would you plug in?"

Why would we not plug into the Machine? I think there may be several answers to this question. I will try to argue that Nozick's answer, namely, that plugging in would undermine our dignity, is not the only possible one; a mental-state Welfarist theory would have a consistent answer too. I will argue that Nozick's claim that mental-state welfarism leads us inevitably to plug in stems from a distorted and naive interpretation of this view. If this is true, this first interpretation of Nozick's argument qua anti-welfarist argument would be clearly weakened.

The first step toward a Welfarist answer (4) to the question of why we would not plug in is that people (at least those who would not plug in) usually have the second-order desire that their (first- and second-order) desires be actually satisfied. Suppose that I want to write a great novel. This desire is in turn twofold: on the one hand, it involves the first-order desire (hereafter D1) of having the feeling of writing it (and, thereafter, the feeling of having written it), and, on the other, the second-order desire (hereafter D2) of knowing that I am writing (or that I have written) a great novel and that this is not a mere deceptive feeling. (5) D2 is extremely important for us. It is a basic desire, and it is the reason why we often prefer to face a painful truth rather than enjoy an illusory satisfaction. D1 as well as D2 are part of the generic desire of writing a great novel. Of course, there are other preferences which intrinsically do not involve second-order desires, because the content of the desire just is having a certain feeling: for example, wishing to listen to Mozart's music, to have a good meal, or to experience pleasure or excitement. (6) But most of our more important desires, such as personal or community development, or relationships with others (love, friendship, etc.), involve as a central element the second-order preference that the desired state of affairs actually take place.

Keeping in mind the distinction between D1 and D2, we can give a first answer to the question of whether or not to plug into the Machine: we would not want to plug in simply because we believe that our D2-type desires would not be satisfied by plugging in.

Now, the satisfaction of any preference has two aspects: on the one hand, a subjective one, i.e., the belief that the desire is being or has been satisfied, and, on the other, an objective one, i.e., the fact that the desire has been satisfied. I can desire to be healthy and satisfy my desire only subjectively (e.g., because I am sick, although I believe I am healthy), or only objectively (e.g., because I am healthy but think I am not), or neither subjectively nor objectively (because I am sick and know I am), or both subjectively and objectively (because I am healthy and know I am). Of course, the same distinction can be applied to D2 as well, so D2 can be satisfied in either of these dimensions. For example, it seems possible for D2 to be only subjectively satisfied, i.e., for me to believe that D1 really is satisfied, even though this is not true. As a matter of fact, in this case there would not be any problem, since, when asked whether we want to plug into the Machine, we would be persuaded that the Machine would enable us actually to write the great novel, and we would be willing to plug in. (7) If someone asked us whether we want to plug into the Machine and we believed (mistakenly) that D2 would be fulfilled, then we would be glad to plug in and be happy. But obviously this is not Nozick's argument, which states that, at the moment of deciding whether to plug in, we know that our desires are only going to be subjectively satisfied.

Nozick could argue: but once you are connected, you will firmly believe that your desires are satisfied. What are ten minutes of frustration of D2 (until you plug in), compared with a whole life of complete satisfaction of this desire (since D2 will be satisfied, although only subjectively)?

Nevertheless, the answer is clear-cut: you are not asking me whether I want to plug in while I am connected, but before I plug in. And at this moment (I am not yet connected), I believe that the Machine will not satisfy D2. Therefore, and since the frustration of D2 would drastically diminish the value of any experience, I would refuse to plug into the Machine. It is irrelevant that, once I am connected, I will believe that D1 is really satisfied and thus D2 will be subjectively satisfied. At this moment, D2 is not satisfied, not even subjectively. Let us recall that D2 refers not only to my present desires, but to future ones too. If this desire is important (as it seems to be for most of us), then we will be very careful not to carry out actions (like taking drugs or plugging into the Machine) that may lead us to believe that our desires are being satisfied when this is not objectively true. This yields a rather problematic situation for Nozick's argument: if he asks me whether I want to plug in while I am not connected, I will refuse to do it; but if I am already plugged in, the question (i.e., whether I would desire to remain connected) cannot even be posed, since posing it would reveal the underlying deceit of the situation.

From the point of view of an agent choosing whether or not to plug in, the only point Nozick's argument proves is something a mental-state Welfarist would accept, namely, that the object of our desires is not to have certain mental states. I do not desire (primarily) to experience happiness while writing a novel; I desire actually to write it. If this were not the case, it would not make sense to have desires whose satisfaction we will never experience, for instance, that the human species not become extinct in the next 200 years, or that my great-great-grandchild be happy when he or she enters adulthood. However, such desires do make sense: I might, for example, participate in ecological societies or strive to leave my great-great-grandchild goods that can help him or her to live better. But this does not entail that it makes sense to say, after 200 years (or when my great-great-grandchild is an adult), that my utility has increased or diminished.

Stated this way, it is clear that the Machine argument confuses two different claims: supporting mental-state Welfarism is not the same as claiming that everybody would (or should) desire to experience certain mental states. (8) Welfarism provides an external criterion for discerning what yields utility (namely, experiencing certain mental states), while the latter claim concerns the content of preferences, which, of course, does not (generally) consist in having a certain mental state. The reason why the two are not equivalent is that we have D2 desires. As long as we know (or believe) that D2 is not objectively satisfied, we are not going to be willing to plug into a Machine providing an illusory satisfaction of D2. (9)

Now Nozick could object to the preceding argument that assuming the existence of D2 desires is incompatible with mental-state Welfarism, since it implies that we care not only about our mental states, but also about what we really are. However, I think the kind of Welfarism Nozick wants us to endorse is too coarse, since it amounts to a kind of direct hedonism, i.e., a mere pursuit of pleasure. In my opinion, a plausible version of mental-state welfarism claims only that, from the standpoint of an external observer, the only relevant factor in measuring people's utility is their mental states. In this sense, indeed, the fact that I firmly believe I am writing a novel would yield utility, even though it is a mere illusion. But this does not mean that I desire to believe deceptively that I am writing a great novel, or that it is a matter of indifference to me whether it is an illusion or not. As long as I know (or believe) that it is an illusion, I will probably not assign any value to the resulting mental state. Therefore I would not prefer to have such a (deceptive) mental state. That is why I would have strong reasons against plugging into a Machine that would create illusory mental states.

In order to clarify this point, let us consider another example. A certain community worships a stone totem and does not want it ever to be destroyed. For some other reason, the community is forced to emigrate and leave the totem in its place. A Welfarist would claim that if the totem were then destroyed, the utility of the community would not diminish, since its members would not be aware of its destruction.

Now, if we ask any member of the community about her desires, i.e., whether what she wants is that the totem not be destroyed or merely that she not become aware of its destruction, she would obviously answer that she does not want the totem really to be destroyed. We cannot tell her: "don't worry, if someone destroys the totem, nobody will tell you." This clearly shows that two different things are involved: on the one hand, the content of our preferences, which usually refers to the occurrence of facts, and, on the other, the criterion for evaluating increases or decreases in utility, which, according to Welfarism, take place when people believe that such facts really occur, even if they do not.

Now, we could conceive of the possibility that the deceit obtains from the very beginning (i.e., that we never had the choice of whether or not to plug into the Machine). In that case, the foregoing argument would not work, since there would be no moment at which we knew that our desires were not going to be actually satisfied by the Machine. Let us explore this possibility in the next section.

III

The challenge of Nozick's example does not rest on whether or not we would plug into the Machine, since we would not be willing to do so (at least, it would be perfectly consistent not to plug in) even if we were mental-state Welfarists. But there may be a broader interpretation of Nozick's argument. According to it, the question is not whether we would plug in, but rather whether a world W1, in which all individuals are plugged into the Machine, is better from an objective utilitarian standpoint than a world W2, in which they are not plugged in. The total quantum of utility (happiness, pleasure, desire satisfaction), even regarding D2, is far greater in W1 than in W2, since the individuals are persuaded that their desires are being satisfied. According to this pro-Nozick argument, claiming that we prefer W2 would entail abandoning Welfarism, since we would be considering valuable something other than subjective utility.

The conceptual point here is that of the perfect deceit, i.e., a deceit whose truth we are not able to discover. How should a mental-state Welfarist evaluate the fact that in W1 our desires are merely subjectively satisfied, i.e., that they have no objective counterpart? Of course, if we assume that the Machine is absolutely perfect and cannot fail, so that we can never realize that our desire satisfaction was merely deceptive, then the Welfarist would certainly be forced to prefer W1 to W2. But the Welfarist has (in my opinion, very strong) reasons to question the relevance of such a conclusion. The main point is that the claim that the Welfarist should choose W1 is irrelevant because it carries no practical consequence. In other words, it does not tell us anything about how to act. Let us see why. Suppose that from an objective point of view it is true that a Welfarist would prefer W1 to W2. The only relevance of this is that if we were external, impartial, perfectly informed, and omnipotent decision-makers, then we would choose W1 rather than W2, and in that case all agents involved would believe that their experiences are real. But this barely affects what we can in fact evaluate, since we usually know (or believe we know) whether our experiences are real or deceptive, and we are not external, impartial, perfectly informed, omnipotent decision-makers.

Given all these restrictions, the question can be restated in the following terms: how does a Welfarist include in his calculation both the fact that the utility (or part of it) of an individual is deceptive and the probability of discovering the deceit? A clear example is that of terminal patients, to whom we do not tell the truth in order to spare them further suffering. If the disease is really incurable and the chances of the patient's learning the truth are very limited (for example, because he or she is in intensive care), the Welfarist would be willing to accept the deceit. This, of course, is not counter-intuitive. The greater the probability of discovering the truth, the weaker the Welfarist tendency to deceive. And the greater the subjective importance of the desire, the weaker the tendency to deceive. The latter is very important: if the deceit creates the illusion that all my desires have been fulfilled, the likelihood of my discovering it must be almost zero, since, should I discover it, I would consider my life pointless. (10)
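This calculation can be sketched, very roughly, as an expected-utility comparison; the notation below (a probability p of discovery and utilities U for the possible outcomes) is merely illustrative and is not part of Nozick's text or of any particular Welfarist theory:

$$
E[U_{\text{deceive}}] = (1-p)\,U_{\text{deceived}} + p\,U_{\text{discovered}},
\qquad
E[U_{\text{truth}}] = U_{\text{truthful}}
$$

On this sketch, the Welfarist accepts the deceit only when E[U_deceive] exceeds E[U_truth]. Since U_discovered drops steeply as the subjective importance of the deceived desires grows (in the limit, a life discovered to be wholly illusory is judged pointless), even a small probability p of discovery can reverse the comparison, which is just the tendency described above.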

Of course, there is a difference between mental-state Welfarism and Nozick's deontologism, one analogous to the difference between rule utilitarianism and deontologism. According to mental-state Welfarism, truth (the absence of deceit) has instrumental value (as norms do for rule utilitarians): it helps us avoid possible deceits and thus maximize happiness. Nozick, instead, considers truth to have intrinsic value: it is intrinsically preferable to know the truth, even if it is unpleasant or dreadful, than to enjoy a pleasant deceit, whether or not we could ever discover the truth. In my opinion, both positions have pros and cons. For example, I find Nozick's position too extreme for the case of a terminal patient: our reluctance to accept deceit diminishes as our prospects become more miserable, and if our actual life were terribly painful, we would probably accept a certain amount of deceit in order to lighten our suffering. In any case, I believe the disagreement between mental-state Welfarism and Nozick's position on this issue is genuine. The only question that remains at this point of the discussion is to what extent the Experience Machine argument contributes to solving, or even to clarifying, this disagreement.


Notes

(1) See, e.g., Griffin 1982, pp. 333-334, and 1986, p. 9.

(2) For a generic definition of Welfarism, see Sen 1979, p. 468. What I call mental-state Welfarism should be contrasted with the Welfarism of objective preference satisfaction. The latter claims that the satisfaction of preferences yields utility provided that the desired state of affairs actually takes place. Within mental-state Welfarism, we can distinguish a pure mental-state Welfarism and a Welfarism of subjective satisfaction of desires. The latter does not require the objective satisfaction of desires, but only the belief that they are satisfied. A critique of the Welfarism of desire satisfaction (applying to both the pure and the subjective versions) can be found in Brandt 1979, pp. 249-253, and 1992, p. 168 ff. A specific critique of the Welfarism of objective preference satisfaction can be found in Rodríguez Larreta 1995.

(3) Nozick 1974, pp. 42-43.

(4) Whenever I say "Welfarist," I mean "Mental State Welfarist."

(5) I use the verb "know" with the usual meaning of "justified true belief."

(6) Cf. Dworkin 1994, pp. 262 ff.

(7) Actually, D2 is not the only kind of second-order desire. For example, we may wish to satisfy our desires (at least some of the most important ones) by ourselves, without any help, with some effort, etc. In such cases, we would not be interested in plugging in either, even if we thought (mistakenly or not) that the Machine was really going to satisfy them, because we want to satisfy our desires ourselves, without the help of a Machine. That is why we would not plug into the second machine imagined by Nozick, namely, the Transformation Machine.

(8) As I mentioned before, this occurs only rarely, for example when what we want is precisely to experience certain feelings.

(9) Something similar can be said about Dworkin's position in the following text: "Suppose you had a genuine choice (which, once made, you would forget) between a life in which you in fact achieved some goal important to you, though you did not realize that you had, and a different life in which you falsely believed that you had achieved that goal and therefore had the enjoyment or satisfaction flowing from that belief. If you make the former choice, as many would, then you rank enjoyment, however described, as less important than something else."

(10) Except for the residual value of having been happy, which is greater or smaller according to the subjective importance of my second-order desires.
