
I. The Nature of Defeaters

Now what we need initially is some account of what a defeater, for a belief, is. The fact is there is a great deal to be said about defeaters.442 [442: Some of which is to be found in Michael Bergmann’s Internalism, Externalism, and Epistemic Defeat (University of Notre Dame Ph.D. dissertation, 1997). See also John Pollock’s Contemporary Theories of Knowledge (Totowa, N.J.: Rowman & Littlefield, 1986), pp. 37ff.; and my unpublished paper “Naturalism Defeated.”] First, however, we need some examples. As in the last chapter, I see (at a hundred yards) what I take to be a sheep in a field and form the belief that there is a sheep in the field; I know that you are the owner of the field; the next day you tell me that there are no sheep in that field, although you own a dog who looks like a sheep at a hundred yards and who frequents the field. Then (in the absence of special circumstances) I have a defeater for the belief that there was a sheep in that field and will, if rational, no longer hold that belief. This is a rebutting defeater—what you learn (that there are no sheep in that field) is inconsistent with the defeated belief. But there are also undercutting defeaters. Here is an example due to John Pollock. You enter a factory and see an assembly line on which there are a number of widgets, all of which look red. You form the belief that indeed they are red. Then along comes the shop superintendent, who informs you that the widgets are being irradiated by red and infrared light, a process that makes it possible to detect otherwise undetectable hairline cracks. You then have a defeater for your belief that the widget you are looking at is red. In this case, what you learn is not something incompatible with the defeated belief (you aren’t told that this widget isn’t red); what you learn, rather, is something that undercuts your grounds or reasons for thinking it red. (You realize that it would look red even if it weren’t.)
Defeaters are reasons for giving up a belief b you hold; if they are rebutting defeaters, they are also reasons for accepting a belief incompatible with b. Acquiring a defeater for a belief puts you in a position in which you can’t rationally continue to hold the belief.

Defeaters of this kind are rationality defeaters; given belief in the defeating proposition, you can retain belief in the defeated proposition only at the cost of irrationality. There are also warrant defeaters that are not rationality defeaters. Thus in Carl Ginet’s fake barn example, as you are driving through southern Wisconsin, you seem to see many fine barns. Fixing on a particular one, you say to yourself, “That is a splendid barn!” What you don’t know, however, is that the local Wisconsinites have erected many clever barn facades (from the road indistinguishable from real barns) to make themselves look more prosperous. What you are actually looking at, however, is a real barn, not a barn facade. Still, you don’t know that it is a barn; it is only by sheer serendipitous good fortune that the belief you form is true. (You might just as well have been looking at a barn facade—indeed, you might better have been looking at a barn facade, because the ratio of barn facades to barns in this area is 3:1.) To put the matter in the terminology of chapter 5 (above, pp. 158ff.), you are in an unfavorable cognitive minienvironment, and it is those barn facades that make the minienvironment unfavorable. The presence of the fake barns is a warrant defeater for you: given the presence of the fake barns there, you don’t know that the thing you are looking at is, indeed, a barn—even though it is and you believe that it is. The existence of the fake barns is not a rationality defeater, however, for there is nothing irrational, in your circumstances, in believing that what you see is a barn.

Defeaters depend on and are relative to the rest of your noetic structure, the rest of what you know and believe. Whether a belief A is a defeater for a belief B doesn’t depend merely on my current experience; it also depends on what other beliefs I have, how firmly I hold them, and the like. Consider, for example, the above case, where your saying that there are no sheep in the field is a defeater for my belief that I see a sheep there; this depends on my assuming you to be trustworthy, at least on this occasion and on this topic. By contrast, if I know you are a notorious practical joker especially given to misleading people about sheep, what you say will not constitute a defeater; neither will it if I am inspecting the sheep through powerful binoculars and clearly see that it is a sheep, or if there is someone I trust standing right in front of the sheep, who tells me by cell phone that it is indeed a sheep.

As a result of this relativity to noetic structure, it can happen that you and I both learn a given proposition p, that it constitutes a defeater for another belief q for me, but does not do so for you. For example, you and I both believe that the University of Aberdeen was founded in 1495; you but not I know that the current guidebook to Aberdeen contains an egregious error on this very matter. We both win a copy of the guidebook in the Scottish national lottery; we both read it; sadly enough, it contains the wholly mistaken affirmation that the university was founded in 1595. Given my noetic structure (which includes the belief that guidebooks are ordinarily to be trusted on matters like this), I thereby acquire a defeater for my belief that the university was founded in 1495; you, however, knowing about this improbable error, do not. The difference, of course, is with respect to the rest of what we know or believe: given the rest of what I believe, I now have a reason to reject the belief that the university was founded in 1495; the same does not hold for you. You already know that the current guidebook contains an error on the matter of the date of the university’s foundation; this neutralizes in advance (as we might put it) the defeating potential of the newly acquired bit of knowledge, that is, that the current guidebook to Aberdeen says the university was founded in 1595. So this new bit of knowledge is a defeater for that belief with respect to my noetic structure but not with respect to yours.

A defeater for a belief b, then, is another belief d such that, given my noetic structure, I cannot rationally hold b, given that I believe d. In the typical case of defeat, I will first believe b and then later come to believe the defeater d: I believe that there is a sheep in the pasture before me; then you come along with that information about the sheep dog. I believe that the widget I’m looking at is red; then the shop superintendent tells me about the irradiation by red light. Sometimes, however, I already believe the defeater (or, strictly speaking, part of the defeater), but do not initially realize its bearing on the defeatee. I believe that you were at the basketball game last night at 9:30; I also believe that you are never at a game without your husband, that Sam, whom I trust, reported seeing either Tom or your husband at a bar at that time, and that George, whom I also trust, reports that Tom was not at the bar then. This is sufficiently complicated that I might not initially see the connection between these propositions and my belief that you were at the game then. As long as I haven’t noticed the connection, I don’t have a defeater, although I do have what we might call a potential defeater. Once I see the connection, then I have a defeater: the defeater is the conjunction of those propositions, together with the proposition that if that conjunction is true, then you weren’t at the game at 9:30.
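The connection that must be noticed in the basketball case can be set out explicitly. The following is a schematic reconstruction of the inference (the labels are mine, not the author’s):

```latex
% b:   You were at the basketball game at 9:30.
% d1:  You are never at a game without your husband.
% d2:  Either Tom or your husband was at the bar at 9:30.  (Sam's report)
% d3:  Tom was not at the bar at 9:30.                     (George's report)
%
% From d2 and d3, by disjunctive syllogism, your husband was at the
% bar at 9:30, and hence not at the game; by d1, neither were you:
\[
(d_1 \wedge d_2 \wedge d_3) \rightarrow \neg b
\]
% The full defeater, as the text says, is the conjunction
% d1 & d2 & d3 together with this conditional; until I notice the
% conditional, I have only a potential defeater.
```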

A famous similar kind of case: Frege once believed that

(F) For every condition or property P, there exists the set of just those things that have P.

Bertrand Russell wrote him a letter, pointing out that (F) has very serious problems. If it is true, then there exists the set of non-self-membered sets (because there is the property or condition of being non-self-membered). This set, however, inconsiderately fails to exist. That is because if it did exist, it would exemplify itself if and only if it did not exemplify itself; that is, it would both exemplify itself and fail to exemplify itself, which is wholly unacceptable behavior for a set. Before he realized this problem with (F), Frege did not have a defeater for it. Once he understood Russell’s letter, however, he did; and the defeater was just the fact that (F), together with the truth that there is such a condition as being non-self-membered, entails a contradiction.
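The derivation Russell sent can be compressed into two lines. This is the standard set-theoretic reconstruction, with membership standing in for the text’s ‘exemplification’:

```latex
% By (F), the condition of non-self-membership determines a set:
\[
R \;=\; \{\, x \mid x \notin x \,\}
\]
% Asking whether R satisfies its own defining condition gives
\[
R \in R \;\longleftrightarrow\; R \notin R
\]
% Either answer entails the other, so (F) yields an outright
% contradiction: R both is and is not a member of itself.
```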

We might initially try explaining the notion of a defeater as follows:


(D) D is a defeater of B for S at t if and only if (1) S’s noetic structure N (i.e., S’s beliefs and experiences and salient relations among them) at t includes B, and S comes to believe D at t, and (2) any person (a) whose cognitive faculties are functioning properly in the relevant respects, (b) whose noetic structure is N and includes B, and (c) who comes at t to believe D but nothing else independent of or stronger than D would withhold B (or believe it less strongly).443 [443: We could use the term ‘partial defeater’ for defeaters that don’t require withholding B but do require holding it less firmly. A full treatment would explain degrees of belief (which are not to be thought of as probability judgments; see Warrant: The Current Debate, p. 118) and show how partial and full defeat are related. Here there is no space for that, but note that full defeat is really a special case of partial defeat, at least if we stipulate that coming to withhold B is a special case of coming to believe B less strongly. For the sake of brevity I’ll henceforth suppress mention of partial defeaters, although the application of what I say to them should be routine.]

The idea, roughly speaking, is that belief D is a defeater of B for you if proper function requires giving up belief B when you acquire D. This fits the above examples of defeaters rather well; still, it is nonetheless not quite what we want. To see the problem, imagine a canny Freudian replying as follows:

Consider the ‘optimistic overrider’. You are suffering from what you know to be a really serious disease; nonetheless, you believe that you will recover within six months. You then learn some statistics that accurately fit your case—statistics according to which the probability that you will recover is very low. Does your belief in these statistics give you a defeater for the belief that you will recover? One would think so; but it need not on (D). For perhaps there is a kind of mechanism that operates in these cases to maintain optimism about your chances of recovery, just because such optimism enhances the chances of recovery. That is, what proper function requires in a case like this is believing that you will recover, despite your knowledge of the statistics. What proper function requires in this case, then, is (a) the belief that those statistics are in fact accurate, but also (b) the belief that you will nevertheless recover. So the definition is incorrect. It fails to take account of the fact that not all cognitive processes are aimed at the production of true belief. Furthermore, this is directly relevant to the case of belief in God. For, as Freud pointed out, belief in God arises from wish-fulfillment, not from belief-producing processes or faculties aimed at the truth. The function of the processes that produce theistic belief is psychological health, enabling us to carry on in this otherwise grim and threatening world. The function of these processes is not to provide us with true beliefs. So suppose you are a believer in God, and someone provides you with a powerful argument against the existence of God—some version of the problem of evil, for example. Suppose, indeed, someone shows you that the existence of God is logically incompatible with the existence of evil. What does proper function require in that case? Well, conceivably it requires that you continue to believe in God. Even if it did, however, you would have a defeater.
So the definition is faulty.

Agreed, the definition is indeed faulty.444 [444: Here I was greatly helped by a series of communications from William Talbott.] Here we need a distinction. Say that (D) above defines the notion of a defeater simpliciter. We also need the notion of a purely epistemic defeater:

(D*) D is a purely epistemic defeater of B for S at t if and only if (1) S’s noetic structure N at t includes B and S comes to believe D at t, and (2) any person S* (a) whose cognitive faculties are functioning properly in the relevant respects, (b) who is such that the bit of the design plan governing the sustaining of B in her noetic structure is successfully aimed at truth (i.e., at the maximization of true belief and minimization of false belief) and nothing more, (c) whose noetic structure is N and includes B, and (d) who comes to believe D but nothing else independent of or stronger than D, would withhold B (or believe it less strongly).

A couple of comments. First, it is of course the addition of clause (b) that distinguishes (D*) from (D); very roughly speaking, the idea is that a purely epistemic defeater for B is a belief D that would be a defeater simpliciter for B if the only processes governing the sustaining of B were processes aimed at truth (and not, for example, at survival, or psychological comfort). The point is then that D could be a purely epistemic defeater of B even if proper function requires the maintenance of B, in S’s noetic structure, despite the formation of D; that can occur if the processes maintaining B are not aimed at truth. Second, with respect to clause (a), it is not required, of course, that all of S*’s faculties are functioning properly in every respect. For example, the fact that S*’s memory for names is defective need not be relevant. Further, it isn’t required that D itself arise rationally or by way of proper function; as I’ll argue below, it is possible for a belief that is irrationally acquired to be a defeater, even for a belief that is rationally acquired. Perhaps (D*) is a bit unwieldy; still, in practice there shouldn’t be any difficulty in applying it to the cases of interest. The above canny Freudian, therefore, will presumably hold that the theist does have a purely epistemic defeater in the facts of evil (once she reflects on the facts of evil and sees how they are related to the existence of a perfectly good God), even if she does not have a defeater simpliciter.


And she may go on to make one final claim: once you see that belief in God is not sustained by truth-aimed processes (arising, instead, from wishful thinking), then you will also have a defeater simpliciter for theistic belief. You will have this defeater in two different ways. First, once you see that the cognitive processes responsible for a given belief you hold are not aimed at the truth, and also clearly see the facts of evil, then, she claims, you will be in a situation where the rational response is to give up belief in God. For, she says, you see that you really have evidence against the existence of God, while on the other side your belief in God is without evidence and without warrant. She adds that even if we ignore evil altogether, the theist who sees that her theistic belief issues from wish-fulfillment (or any other cognitive process that is not aimed at the truth) has a defeater for that belief. Merely seeing that the sources of a given belief are not aimed at truth (but at some other desideratum such as survival, psychological welfare, or the ability to carry on in this hostile and indifferent world) is sufficient (in the absence of other evidence), she says, to give you a defeater simpliciter for that belief. The rational response, once you see the source of such a belief, is to give up the belief.

What further conditions (if any) must a defeater belief meet? In particular, must such a belief itself be warranted or rationally formed? Suppose I hold a belief B, but then come to accept a belief D that goes against B in some way, where this belief D I accept has no warrant. Can it still be a defeater for B? I should think so. I’ve believed for years that you were born in Yankton, South Dakota; this belief has a good deal of warrant for me. (I was told this by your uncle, whom I know to be a generally reliable person.) One day, however, you tell me in all seriousness that you were born not in Yankton, but in New Haven (and you add some story as to why your uncle thinks you were born in Yankton). Then (under normal circumstances) I have a rationality defeater for my belief that you were born in Yankton. If there are no special circumstances (if I have no reason to think you were joking, or trying to deceive me, or are misinformed about where you were born, or the like), the rational response would be to give up the belief that you were born in Yankton. Suppose, however, the fact is you yourself were misinformed by your parents; you actually were born in Yankton, but for reasons having to do with academic prestige your parents tell you that you were born in New Haven. Then your belief that you were born in New Haven has little or no warrant. That is because (as I argued in Warrant and Proper Function, pp. 83ff.) a belief acquired by way of testimony has warrant for the testifiee only if it has warrant for the testifier; because your parents don’t even hold this belief, it is not among their warranted beliefs. Hence my newly acquired belief that you were born in New Haven also lacks warrant. Nevertheless, this belief still gives me a defeater for my old belief that you were born in Yankton. So it is quite possible for a belief A to serve as a defeater for another belief B even if A has little or no warrant, and even when B has more warrant than A.


But what if the potential defeating belief is acquired irrationally? Can it still be a defeater? Suppose I’ve always thought you a genial sort who is rather well disposed to me. Unhappily, I start sinking into a paranoid condition; because of cognitive malfunction, it comes to seem to me that you are, in fact, trying to harm me by destroying my academic reputation. Because of the cognitive malfunction, this just seems wholly obvious to me; it has a great deal of what I have been calling ‘doxastic evidence’. Can my belief D that you are trying to destroy my reputation serve as a defeater for my belief B that you are favorably disposed toward me?

Here we must recall the distinction between internal and external rationality. Internal rationality is a matter of proper function ‘downstream from experience’ (including doxastic experience: see above, pp. 110ff.). Given my experience, I am internally rational just if I form the right beliefs in response to that experience. What internal rationality requires, therefore, is the appropriate doxastic response to experience, including doxastic experience. For present purposes, we may think of internal rationality as also including epistemic justification, being within one’s epistemic rights, having flouted no epistemic duties or obligations. External rationality, by contrast, is a matter of the proper function of the sources of experience, including, in particular, the sources of doxastic experience. External irrationality can arise in several ways. For example, it can happen by way of impedance. I write a book on topic X; because of pride and egoism, I think it easily the best book on X, even though your book on X is better, and even though I would have recognized that fact had it not been for the way in which my pride has impeded the proper function of the relevant rational faculties. This is a case of external irrationality: the problem is that, because of my pride and arrogance, my book just seems to me much better than yours; the proposition that it is better has, for me, a great deal of doxastic evidence. In a case like this, therefore, my irrationally formed belief can give me a defeater, I think, for my previous and rationally formed belief that your book is the best book on X.

Return now to the case of paranoia: I think we see a similar situation. My belief that you are out to get me is externally irrational; it arises from sources of doxastic experience that are not functioning properly. By virtue of their malfunction, however, my experience is such that I am powerfully impelled to believe D, that you are trying to ruin me. This now seems to me much more obvious than that you are favorably disposed toward me: the doxastic evidence for D is much stronger than that for B. What internal rationality calls for, under those circumstances, therefore, is my giving up B; I have a defeater for it in D, even though D is arrived at irrationally. I can therefore have a defeater D for a belief B even where B is rationally held and D is irrationally acquired.

There is still another way in which I can acquire a defeater by way of irrationality. Suppose I believe B, but by virtue of cognitive malfunction do not believe it nearly as strongly as rationality requires; it isn’t nearly as resistant to the challenge of other beliefs as it should be. For example, due to cognitive malfunction (a brain lesion, perhaps) I am arithmetically challenged: like everyone else, I believe that 2 + 1 = 3, but no more strongly than I believe, for example, that my wife’s social security number is n. You, a mathematics professor whom I trust, tell me that as a matter of fact it is false that 2 + 1 = 3. I take your word for it, just as I might believe the government expert who informs me that my wife’s social security number really isn’t n (there was some kind of mix-up when she lost her card and applied for a new one). This then gives me a defeater for my belief that 2 + 1 = 3. I have this defeater for this belief, however, only because of failure of cognitive proper function, only because the doxastic evidence for me for 2 + 1 = 3 isn’t nearly as strong as proper function requires.
