Theory of Knowledge

Coherence and Epistemic Rationality

Susan Vineberg
Wayne State University
susan.vineberg@wayne.edu

ABSTRACT: This paper addresses the question of whether probabilistic coherence is a requirement of rationality. The concept of probabilistic coherence is examined and compared with the familiar notion of consistency for simple beliefs. Several reasons are given for thinking rationality does not require coherence. Finally, it is argued that incoherence does not necessarily involve fallacious reasoning.

Most work in epistemology treats epistemic attitudes as bivalent. It is assumed that a person either believes that there is an apple on the table, or that there is not, and that such beliefs must be either warranted or unwarranted. However, a little reflection suggests that it is reasonable to have degrees of confidence in a proposition when the available evidence is not conclusive. The rationality of such judgments, formed in response to evidence, will be my concern here.

Degrees of confidence have mainly been discussed by Bayesians as part of a general theory of rational belief and decision. Bayesians claim that rational degrees of confidence satisfy the standard Kolmogorov axioms of probability:

1. Pr(A) ≥ 0

2. If A is a tautology, then Pr(A) = 1

3. If A and B are mutually exclusive, then Pr(A v B) = Pr(A) + Pr(B).
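
To make the content of these axioms concrete, here is a minimal sketch, in Python, of a coherence check for point-valued degrees of belief over a small two-world space. The propositions and numbers are invented for illustration, not drawn from the paper.

    # A proposition is modeled as the set of worlds in which it is true.
    worlds = frozenset(["rain", "no_rain"])

    credence = {
        frozenset(): 0.0,        # contradiction
        frozenset(["rain"]): 0.6,
        frozenset(["no_rain"]): 0.4,
        worlds: 1.0,             # tautology
    }

    def coherent(cr, worlds, tol=1e-9):
        """Check non-negativity, normalization, and finite additivity."""
        if any(p < -tol for p in cr.values()):     # axiom 1: Pr(A) >= 0
            return False
        if abs(cr[worlds] - 1.0) > tol:            # axiom 2: Pr(tautology) = 1
            return False
        for a in cr:
            for b in cr:
                # axiom 3: if A, B are mutually exclusive, Pr(A v B) = Pr(A) + Pr(B)
                if not (a & b) and (a | b) in cr:
                    if abs(cr[a | b] - (cr[a] + cr[b])) > tol:
                        return False
        return True

    print(coherent(credence, worlds))   # True; change 0.4 to 0.5 for incoherence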

It should be observed that people do not generally assign point values to propositions, which is required if their degrees of confidence are to conform to the axioms. Moreover, it is doubtful that an assignment of point values to propositions is usually reasonable, since our evidence rarely justifies such precision. Vague degrees of confidence can be treated somewhat more realistically, as interval valued, by associating them with sets of probability functions. For simplicity, I will take degrees of belief as point valued in the discussion here.

The claim that degrees of confidence should satisfy the probability axioms is most often defended by appealing to the so-called Dutch Book argument, which was first presented by Ramsey in his famous paper "Truth and Probability." The idea is that degrees of belief that do not satisfy the probability axioms (commonly termed incoherent) are associated with betting quotients that can be exploited by a clever bookie to produce a sure loss. Ramsey held that an agent's degrees of belief can be measured, roughly, by the bets that she is willing to accept. If they are incoherent, there will be a series of bets, each of which she will be willing to accept, but which together are certain to result in a net loss for her. Such a collection of bets is called a Dutch Book, and it is often claimed that it is irrational to have degrees of belief that leave one open to having a Dutch Book made against one. (1)

Numerous objections have been raised against the claim that it is irrational to have degrees of belief (or degrees of confidence) that are incoherent because they leave a person vulnerable to a Dutch Book. These include the points that incoherence does not necessarily involve Dutch Book vulnerability and that such vulnerability need not be irrational. (2) However, it has recently been suggested by David Christensen (1991), Howson and Urbach (1993), and Brian Skyrms (1987), among others, that the Dutch Book argument is misunderstood if it is thought to work by forcing compliance with the probability axioms as a means of avoiding monetary loss. Instead, they claim that Dutch Book vulnerability should be seen as a symptom of a kind of inconsistency. For example, Christensen writes:

Dutch Book vulnerability is philosophically significant because it reveals a certain inconsistency in some system of beliefs, an inconsistency which itself constitutes an epistemic defect. (Christensen 1991)

The idea is that incoherent degrees of confidence are supposed to be analogous in important respects to the familiar notion of inconsistency for full beliefs. My purpose here will be to examine this notion of incoherence and discuss why it has been thought to be associated with a kind of irrationality.

As in the case of full beliefs, it is not necessarily irrational to have inconsistent degrees of confidence. If we think of rationality as concerned with behaving so as to achieve one's goals (means/ends rationality), then it can be rational to have inconsistent beliefs, if a big enough prize will be obtained by having such beliefs. For example, we might suppose that an individual will be given a million dollars if she manages to have inconsistent beliefs. Although it might be tempting to suppose that having inconsistent beliefs is only pragmatically rational here, and not epistemically rational, this need not be the case, since the prize could be an epistemic one.

Still, we might regard coherence as a general norm, which a rational agent should satisfy, provided that no special prize is attached to violating the norm. Indeed, insofar as the goal is to have accurate degrees of confidence, coherence is a kind of minimal requirement of rationality. What does it mean here to have accurate degrees of confidence? In the case of full belief, our beliefs are accurate just in case the propositions believed are actually true. Since a set of beliefs is said to be consistent if and only if it is possible for the propositions believed to be true, consistency is a necessary condition for satisfying the goal of having true beliefs. Partial beliefs might be said to be accurate just in case they match certain objective probabilities. If we can make sense of the accuracy of partial belief in terms of correspondence to certain objective probabilities, then consistency for partial beliefs could be defined in terms of the possibility of a set of degrees of confidence matching those objective probabilities.

One way of understanding what it is for personal probability judgments to be objective invokes the concept of calibration. To explain the concept, suppose that you turn to the Weather Channel and the forecaster announces that there is a 60% chance of snow showers for the New York area. The forecaster is said to be perfectly calibrated just in case

the proportion of days with snow showers, among those on which the forecaster predicts a 60% chance of snow showers, is 60%.
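
Put in frequency terms, the check is straightforward to compute. A minimal sketch, with an invented forecast record: each pair below is an announced chance of snow showers and whether snow showers in fact occurred, and the forecaster is perfectly calibrated at the 60% level just in case the function returns 0.6.

    # Invented forecast record: (announced chance, whether snow occurred).
    record = [(0.6, True), (0.6, True), (0.6, False), (0.6, True), (0.6, False),
              (0.3, False), (0.3, True), (0.9, True)]

    def observed_frequency(level, record):
        """Frequency of the event among days announced at the given level."""
        days = [occurred for announced, occurred in record if announced == level]
        return sum(days) / len(days) if days else None

    print(observed_frequency(0.6, record))   # 0.6: snow on 3 of the 5 such days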

The concept of calibration provides a way of making sense of the correctness of probability judgments in cases where probabilities can also be interpreted as relative frequencies. Incoherence precludes the possibility of perfect calibration, so coherence is necessary for correct probability judgments in this sense. We may wonder, though, whether perfect calibration is a general epistemic goal. It seems to be one in cases where the judgment concerns a proposition describing an event that belongs to a class of essentially similar events, for which frequencies can be obtained. However, for most propositions, such as the claim that there are leptoquarks, there will be no natural reference class against which calibration can be assessed. In cases where there is no nonarbitrary reference class, it is unclear that having perfectly calibrated degrees of confidence is an epistemic goal at all.

Of course there may be other possibilities for construing the accuracy of such judgments, but Bayesians have generally regarded this task as hopeless. Still, even if there is no clear sense in which confidence can always be understood as being objectively correct, there are some propositions for which judgments of confidence can readily be understood as optimal on objective grounds. Regardless of how the objective probabilities to which our degrees of confidence should ideally correspond are interpreted, the probability of a necessary truth will be one and that of its denial will be zero. This raises a problem for the view that rationality typically requires coherence. Assuming that we can safely put aside worries about Descartes' evil demon, we can expect that a rational person will have full confidence in simple propositions such as 2 + 2 = 4. But what about more complex mathematical truths whose proofs are very long and involved, such as Fermat's Last Theorem? Its objective probability is one, but was it a defect of rationality to be less than fully confident before a proof was finally found? Indeed, for some time after the proof was first presented, mathematicians were optimistic that it was correct, but because it was so long and complicated they were less than certain. Or consider Goldbach's conjecture, which has still eluded proof. The conjecture has been verified for very large numbers, and no counterexample has been found. This is surely very strong, yet inconclusive, evidence of its truth, and so would seem to warrant assigning it high probability. If rationality requires satisfying the probability axioms and the conjecture is true, then we must assign it probability one now, but this would involve being fully confident without sufficient evidence. On the contrary, rationality would seem to preclude full confidence in cases where the evidence is incomplete.

One might try to dispense with such examples by pointing out that the probability axioms, as generally presented, only require that all tautologies receive probability one. Probability functions are defined over Boolean algebras of propositions, where an assignment of probability one is required only of those propositions with a particular structural role in the algebra. So, a necessary proposition that is not tautologous could be represented in the algebra by a non-tautologous element, and a probability function could then be defined on the algebra in which that proposition is given a probability less than one. (3) Although this maneuver allows for less than maximal assignments of probability to necessary propositions, which can be useful in modeling decisions, it does not solve the problem at hand. To see this, we must consider how the Dutch Book argument is supposed to show that having a degree of confidence of less than one in a tautology involves a kind of inconsistency. Suppose that a person has a degree of belief x in a tautology T, where x < 1. According to the argument, the bet that pays $1 if T is true and $0 otherwise, at a price of $x, should look fair to such a person. She should thus be willing to sell this bet; however, doing so will result in a loss for her of $(1 - x), since any bet against T must lose. This is taken as revealing inconsistency, because a bookie can take advantage of such an agent simply by examining her degrees of confidence. Since a bookie can take advantage of someone who has less than full confidence in a non-tautologous necessary truth in exactly the same way, if Dutch Book vulnerability shows that an agent has a rational defect when she does not have full confidence in a tautology, it also shows that she is irrational if she fails to be fully confident of every necessary truth.
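
The arithmetic behind this Dutch Book is worth making explicit. Below is a minimal sketch, assuming an illustrative degree of belief of x = 0.8 in the tautology T.

    x = 0.8       # assumed degree of belief in the tautology T, with x < 1
    stake = 1.0   # the bet pays $1 if T is true and $0 otherwise

    # Regarding $x as the fair price, the agent is willing to sell the bet:
    # she collects $x up front but, since T cannot be false, always pays out $1.
    net = x - stake
    print(round(net, 2))   # -0.2: a guaranteed loss of $(1 - x)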

Another response to the examples in which agents are incoherent due to lack of logical knowledge is that this results from our limited abilities; nevertheless, ideally rational agents are coherent. This idea has recently been criticized by Richard Foley and Alvin Plantinga (Foley 1992; Foley 1993; Plantinga 1993). Both charge that even if ideally rational agents, who are logically omniscient, would satisfy coherence, this does not entail anything about what is rational for creatures such as ourselves, who are not logically omniscient.

Recently, Lyle Zynda (Zynda 1996) has provided an interesting defense of the claim that coherence is a rational ideal, in response to Foley and Plantinga's criticisms. First, Zynda points out that coherence is a weaker constraint than Foley and Plantinga suggest, since it need not involve logical omniscience, but only logical infallibility. Second, Zynda argues that coherence has normative relevance for us in several respects, perhaps most importantly because it provides a basis for judgments about better and worse opinions. (4) I will consider these points in turn.

Bayesians are frequently criticized, and often dismissed, for making the unrealistic assumption that agents consider and attach a probability to every proposition in advance of any empirical inquiry. However, none of the central Bayesian principles, such as choosing so as to maximize expected utility or updating beliefs by conditionalization, requires the assumption that an agent be fully opinionated. Still, it will not do to avoid the demands of coherence by failing to form opinions, even if we can exercise the control necessary to do so. Although Bayesians have had little to say about what constitutes rational opinion beyond satisfying coherence, most would agree that rationality requires taking account of the evidence, and avoiding having opinions conflicts with this demand. Another difficulty with refraining from forming opinions based on incomplete evidence is that such opinions play an important role in obtaining further evidence and expanding our knowledge. Careful consideration of partial evidence can point the way to uncovering further evidence and plays a crucial role in our decisions about what to look for. For example, the overwhelming numerical evidence for Fermat's Last Theorem made it reasonable to look for a proof, rather than a disproof, and such a complicated proof could not have been found without a concerted effort to find one.

Zynda's point that coherence provides a standard against which we can measure our opinions is certainly right, at least in that it is an epistemic goal to know the truths of logic and to have, at least in the long run, opinions that reflect those truths. This means that, in the long run, we should want to have full confidence in each logical or necessary truth. More generally, we should want our opinions to satisfy the probability axioms, because only under these conditions will they fully respect the underlying logic of propositions. However, I am unconvinced that our failure to meet this goal now is a failure of rationality. Zynda suggests that it is, because he thinks that in having incoherent degrees of confidence "we have made an implicit logical error." The idea here is that being incoherent not only involves an error of omission, in not knowing all of the logical truths, but an error of commission, in having committed an error of reasoning in forming one's opinions. It is this latter idea that I will subject to scrutiny in the remainder of the paper.

Does Incoherence Involve an Error of Reasoning?

In order to examine whether incoherence involves committing a fallacy, it will help to consider the details of the argument that violation of the probability axioms involves a kind of inconsistency. I will consider the version of the argument given by Howson and Urbach, in their book Scientific Reasoning: The Bayesian Approach (Howson and Urbach 1993), since they present the argument in a way that tries to bring out the analogy between incoherence and the ordinary notion of consistency.

Several definitions are needed to present their version of the argument. First, they define an agent's subjectively fair odds as those odds which, as far as the agent can tell, confer no positive advantage or disadvantage on either side. Betting quotients are then defined in the usual way in terms of odds, so that if a person's subjectively fair odds on h are q, then her subjectively fair betting quotient for h is q/(q + 1). Howson and Urbach then associate the agent's degrees of belief with her subjectively fair betting quotients. Objectively fair odds are defined as those odds which, in fact, confer no advantage on either side.
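
The conversion from odds to betting quotients used here is a one-line formula; the following sketch applies it to an invented example.

    def betting_quotient(odds):
        """Subjectively fair odds of q (to 1) on h yield the quotient q/(q + 1)."""
        return odds / (odds + 1.0)

    # For example, fair odds of 3:1 on h give a betting quotient of 0.75, which
    # Howson and Urbach would identify with the agent's degree of belief in h.
    print(betting_quotient(3.0))   # 0.75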

With these definitions in place, Howson and Urbach appeal to the Dutch Book theorem, which states that

if a set of betting quotients does not satisfy the probability axioms, then there is a series of bets in accordance with those betting quotients that is bound to lose.

Since a set of betting quotients which is bound to lose must confer an advantage on one side, they cannot all be fair. Hence, it follows from the Dutch Book theorem that a set of betting quotients which does not satisfy the probability axioms cannot all be fair. Finally, since an agent's degrees of belief have been associated with the betting quotients she thinks fair, the Dutch Book theorem shows that if those degrees of belief do not satisfy the axioms, then despite the fact that the agent thinks they correspond to fair betting quotients, they cannot do so.
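
A concrete instance may help. In the sketch below, the assumed betting quotients on a proposition A and on its negation sum to more than one, violating the additivity axiom; unit bets placed at those quotients lose money however the world turns out.

    # Assumed incoherent quotients: q(A) + q(not-A) = 1.2 > 1.
    q_A, q_not_A = 0.6, 0.6
    stake = 1.0

    # The bookie sells the agent a $1 bet on A at price q(A) and a $1 bet on
    # not-A at price q(not-A); exactly one of the two bets pays off each way.
    for a_is_true in (True, False):
        payoff = stake                     # one of the two bets always wins
        cost = (q_A + q_not_A) * stake     # total price paid for the bets
        print(a_is_true, round(payoff - cost, 2))   # -0.2 either way: sure loss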

Consider again the person who is confident to degree x in Goldbach's Conjecture, where 0 < x < 1. Since Goldbach's Conjecture is either necessarily true or necessarily false, its fair betting quotient is either 0 or 1, and so cannot be x. On Howson and Urbach's analysis, she takes x to be the fair betting quotient for Goldbach's Conjecture, when it cannot be. She then believes a proposition p, namely that the fair betting quotient is x; yet p is logically false, and so she has committed a logical error. But why must we assume that, because her degree of confidence in Goldbach's Conjecture is x, she regards x as its fair betting quotient? We can assume that she knows that every mathematical proposition is either necessarily true or necessarily false. Accordingly, she may well know that x cannot be a fair betting quotient. Yet, apparently, she may still be confident in Goldbach's Conjecture to degree x. What this seems to show is not that failure to satisfy the probability axioms involves making a logical error in reasoning from the evidence, but rather that degrees of confidence are not always taken as fair betting quotients.

If I have inconsistent full beliefs, then I have committed a logical error, since I then believe that each of a set of propositions is true when they cannot all be true. In formulating a definition of consistency for partial beliefs, Bayesians take degrees of confidence to be associated with subjectively fair betting quotients. Fairness, then, plays a role analogous to that of truth in the definition of consistency for full beliefs. The problem is that having a degree of confidence x in a proposition does not entail taking x as its fair betting quotient, whereas believing p does mean taking p as true. Furthermore, as seen above, I can apparently be confident of p to degree x, yet think that x is not a fair betting quotient for p. In such cases, incoherence does not involve the explicit logical error in formulating our opinions that some Bayesians have claimed.

Notes

(1) For details on how a Dutch Book can be constructed against someone whose degrees of belief violate the axioms, see Skyrms' Choice and Chance (Skyrms 1975). For the argument that it is irrational to have degrees of belief which do not satisfy the axioms, see Jackson and Pargetter's "A Modified Dutch Book Argument" (Jackson and Pargetter 1976).

(2) See, for example, (Adams and Rosenkrantz 1980), (Kennedy and Chihara 1979), (Maher 1993).

(3) Garber used this maneuver to extend the Bayesian conditionalization model of belief change to accommodate logical learning, as a way of solving the so-called problem of old evidence (Garber 1983).

(4) Zynda goes on to provide the important contribution of showing how one can make sense of the notion of closer approximations to coherence.

References

Adams, E. W. and R. D. Rosenkrantz. (1980). "Applying the Jeffrey Decision Model to Rational Betting and Information Acquisition." Theory and Decision 12: 1-20.

Christensen, D. (1991). "Clever Bookies and Coherent Beliefs." The Philosophical Review 100: 229-247.

Foley, R. (1992). "Being Knowingly Incoherent." Nous 26: 181-203.

Foley, R. (1993). Working Without a Net. Oxford University Press.

Garber, D. (1983). "Old Evidence and Logical Omniscience in Bayesian Confirmation Theory." In J. Earman (ed.), Testing Scientific Theories. Minneapolis, University of Minnesota Press.

Howson, C. and P. Urbach. (1993). Scientific Reasoning: The Bayesian Approach. La Salle, Illinois, Open Court.

Jackson, F. and R. Pargetter. (1976). "A Modified Dutch Book Argument." Philosophical Studies 29: 403-407.

Kennedy, R. and C. Chihara. (1979). "The Dutch Book Argument: Its Logical Flaws, Its Subjective Sources." Philosophical Studies 36: 19-33.

Maher, P. (1993). Betting on Theories. Cambridge, Cambridge University Press.

Plantinga, A. (1993). Warrant: The Current Debate. New York, Oxford University Press.

Skyrms, B. (1975). Choice and Chance. Belmont Ca., Wadsworth.

Skyrms, B. (1987). "Coherence." In Scientific Inquiry in Philosophical Perspective. Pittsburgh, University of Pittsburgh Press.

Zynda, L. (1996). "Coherence as an Ideal of Rationality." Synthese.
