
Wednesday, September 2, 2015

On Disagreement, Part 5: Objections and Conclusion

"Honest disagreement is often a sign of progress"
- Mahatma Gandhi

So far, I’ve discussed three cases (which I’ll call clocks, dress, and cardiology) that illustrate what I consider to be a more widely applicable principle (p): that the right thing to do upon discovering that an epistemic peer disagrees with one after full disclosure is to suspend belief. Today, I’d like to begin examining two lines of criticism of – or disagreement with – this proposition, which I'll address in reverse order. The first is that my proposition is self-defeating; I'll take that up at the end of the post. The second, which I'll deal with first, comes in the form of several other cases where epistemic peers disagree but apparently, “nearly everyone supposes that it is perfectly acceptable for one to (nevertheless) hold fast - i.e. to fail to adjust credences to bring them closer to the credences of another”(1). So let's begin by examining one of several such counterexamples described by philosopher Graham Oppy, which he calls elementary math:

“Two people who have been colleagues for the past decade are drinking coffee at a café while trying to determine how many other people from their department will be attending an upcoming conference. One, reasoning aloud, says: ‘Well, John and Monima are going on Wednesday, and Karen and Jakob are going on Thursday, and, since 2 + 2 = 4, there will be four other people from our department at the conference’. In response, the other says: ‘But 2+2 does not equal 4’. Prior to the disagreement, neither party to the conversation has any reason to suppose that the other is evidentially or cognitively deficient in any way; and, we may suppose, each knows that none of the speech acts involved is insincere. Moreover, we may suppose, the one is feeling perfectly fine: the one has no reason to think that she is depressed, or delusional, or drugged, or drunk, and so forth. In this case, it seems plausible to suppose that the one should conclude that something has just now gone evidentially or cognitively awry with the second, and that the one should not even slightly adjust the credence she gives to the claim that 2 + 2 = 4.”

Well, of course it seems plausible to us, readers whose independent opinions agree with the one that 2 + 2 = 4, that something has gone cognitively awry with the second! Our opinions serve as additional evidence that alters the 0.5 probability that each disagreeing party is correct. Watch how rewording the alleged counterexample exposes this reason for its failure:

You and your colleague for the past decade are drinking coffee at a deserted café while trying to determine how many other people from your department will be attending an upcoming conference. You, reasoning aloud, say: ‘Well, John and Monima are going on Wednesday, and Karen and Jakob are going on Thursday, and, since 2 + 2 = 4, there will be four other people from our department at the conference’. In response, your colleague says: ‘But 2+2 does not equal 4’. Prior to the disagreement, neither you nor your colleague has any reason to suppose that the other is evidentially or cognitively deficient in any way; and, we may suppose, you each know that none of the speech acts involved is insincere. Moreover, we may suppose, you are both feeling perfectly fine, with no reason to think that you or the other is depressed, or delusional, or drugged, or drunk, and so forth.

I hope that you agree that it no longer seems plausible to suppose that you should conclude that something has just now gone evidentially or cognitively awry with your colleague, and that you should not even slightly adjust the credence you give to the claim that 2 + 2 = 4. One of you has a problem, but as bizarre and implausible as this scenario seems, if we are really to consider it a relevant counterexample to the principle I have proposed, then neither you nor your colleague can have any reason independent of the disagreement itself to think that the other is the one with that problem. Without a reason independent of your disagreement to justify maintaining your belief, no matter how confident each of you is in being correct, it seems clear to me that you must both suspend belief until further evidence (which is, in this example, easy to acquire) sorts it out. Why? Because, as I have discussed earlier, from an epistemic perspective, neither of you has any reason to think that you are more likely to be correct than the other, so the probability at that point in time that either of you is correct – again, from each of your epistemic perspectives – is 0.5. Since assent to a belief that seems no more likely to be true than false is irrational, suspension of belief is required. Oppy provides several other examples of disagreements concerning “cognitively basic judgments” (those immediately grounded in memory, perception, or elementary mathematics), but I think that they all fail for similar reasons. Essentially, if you found dress convincing of my proposition (an example involving a cognitively basic judgment), then Oppy’s other similar counterexamples should seem pretty unconvincing.

Imagine that you wake up and all of your memories and all of your intuitions tell you that 2 + 2 does not equal 4, while every other person on Earth disagrees. You hold your belief as confidently, sincerely, and intuitively as everyone else. Every time you take two oranges and put them in a box with two other oranges, you count the total number of oranges and you never get 4, while everybody else around you always does. This would be very strange, indeed, but no matter how confident you feel that you know better, I hope you can see that you simply cannot insist that you are right; you must suspend your belief or risk suffering from a delusion. On the other hand, everybody else can draw epistemic confidence in the otherwise perfect agreement that 2+2 does equal 4. Agreement really does matter since it serves to identify what sort of criteria we can use to determine what's normal and what isn't. Here's another example: if you think that killing others for your own pleasure is fine and dandy, and you can't see any problem with that, you're not a lone champion of an obscure moral truth, you're a psychopath.

Alright: I've saved the best for last. The final challenge posed to my proposition is that there appear to exist not just my own epistemic peers, but my own epistemic superiors who disagree with (p), including Dr. Oppy and Dr. Alvin Plantinga, among others. (There are, of course, other philosophers who agree with (p), or something like it, but the fact that there are those who disagree is the very problem.) Unless I have some way of saving (p) from self-referential defeat, even I must suspend my own belief in it.

But I do have a way of saving (p) from this challenge, at least for now. I have argued logically for (p), and the mere disagreement of epistemic peers or superiors is not enough for me to dismiss it. Recall that (p) requires disagreement despite full disclosure. The latter requires that those who disagree explain which of my premises they disagree with and why. If after such a process, I am left with no reason independent of our disagreement to think that (p) is correct, then it seems that I will have to become agnostic regarding (p) because that's precisely what (p) requires that I do. That hasn't yet happened.

One of my interlocutors on this subject rejected the notion that when n epistemic peers disagree after full disclosure, the epistemic probability that any one of them is correct is 1/n, for that is precisely what the disagreement calls into question. I am sympathetic to this criticism, and I interpret it as indicating that it is sometimes very difficult to determine whether those disagreeing really are epistemic peers. However, there are times when this isn't difficult at all, such as when large groups of people disagree about cognitively basic judgments, as was the case in dress. Cardiology is a good example of a similar case involving a cognitively non-basic judgment. Relevant epistemic differences will tend to even out among large groups. So if you saw the stripes on the dress as white and gold, and you knew that your spouse saw them as blue and black, you might wonder whether there was something wrong with your spouse visuo-neurologically; but when you realize that there are literally thousands of people agreeing with you, and thousands of others agreeing with your spouse, it becomes much clearer that it is something about the situation - about the picture itself - and not about either of the two individuals or camps that is preventing rational assent to a belief about the stripe colors. But this is just to say that the more reasons one has to think that the person disagreeing really is an epistemic peer, the more one must reduce one's confidence in one's belief. Interestingly enough, applying Bayesian math to disagreements among perfectly rational cognizers seems to lead to just this conclusion.
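To see the Bayesian point concretely, here is a toy model of my own devising (it is not drawn from Oppy's paper): treat each cognizer as an independent reporter who gets claims of this kind right with some fixed reliability. Both the independence assumption and the reliability value are simplifications; real peers share evidence and biases. Still, the sketch shows why a 1-vs-1 split among equals yields a credence of exactly 0.5, while one dissenter against many agreeing peers barely moves the needle:

```python
def credence(n_agree, n_disagree, reliability=0.9, prior=0.5):
    """Posterior probability that a claim is true, modeling each
    cognizer as an independent reporter who affirms true claims
    (and denies false ones) with probability `reliability`.
    A toy model: independence and the reliability value are
    assumptions, not facts about real disagreements."""
    # Each affirming report multiplies the odds by r/(1-r);
    # each denying report divides them by the same factor.
    likelihood_ratio = (reliability / (1 - reliability)) ** (n_agree - n_disagree)
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Two equally reliable peers split 1-1: the reports cancel exactly.
print(credence(1, 1))    # → 0.5
# One dissenter against ten agreeing peers (the elementary-math case):
print(credence(10, 1))   # very close to 1
```

Note that the result for the 1-vs-1 case is 0.5 regardless of how reliable the peers are assumed to be, which is exactly what (p) predicts: equal and opposite testimony from genuine peers washes out, leaving only the prior.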

So at least for now, and at least on those occasions where one seems compelled to conclude that the disagreement really is among epistemic peers, (p) still stands. If you know where I might encounter objections to the premises leading to (p), please link or provide references in the comments below. Or even better, explain what they are in your own words. I'm keen to hear all about your disagreement . . .

(1) Oppy, G. (2010). Disagreement. International Journal for Philosophy of Religion, 68, 183–199.

Wednesday, May 13, 2015

On Disagreement, Part 3


So far in this series, I’ve considered two straightforward instances of disagreement and argued that in each instance, the rational thing to do because of the disagreement is to suspend belief (see here and here). Today, I’d like to summarize what I think are the circumstances where disagreement requires suspension of belief.

Quite simply, one should suspend belief whenever, as far as one can know (from an epistemic perspective), the probability that the belief is true is roughly equal to the probability that it is false.

Not all disagreement presents such a situation. For example, Dr. Rik Willems is an expert in the treatment of slow heart rhythm disorders with cardiac pacemakers. If a first-year medical student on her first clinical cardiology rotation thinks that a patient should have a pacemaker implanted, and Dr. Willems disagrees, the probability that Dr. Willems is right is considerably greater than the probability that the medical student is. After all, medical students are supposed to get their plans for patients vetted by attending physicians, not the other way around!

Dr. Willems and the medical student are not epistemic peers. That is, they are not in equally good positions to make judgments about pacemaker therapy. This is not to say that, just because Dr. Willems is in a superior position to make such judgments, his opinion must be right. The rational thing for him and the student to do is explain to each other the reasons for their opinions. Maybe Dr. Willems has contracted viral encephalitis and evidence of his cognitive dysfunction will be disclosed in the conversation. More likely, however, the medical student has missed an important detail of the patient’s situation, or misinterpreted the available evidence addressing pacing in that situation. This conversation comprises a process known as “full disclosure”; it represents the best possible attempt by disagreeing parties to consider and share the reasons for their own belief and the reasons for the opposing belief. In many such instances, the reasons on one side of the disagreement really will be better and the disagreement will be resolved. We can all, medical students included, learn a great deal this way, even though not all disagreements end so educationally and amicably.

The disagreeing clocks left little to no room for consideration of which time reading was more likely to be correct. Electronic quartz clocks these days are all remarkably accurate, so these two machines are “epistemic peers”. Maybe one had suffered a power loss that the other had not. Maybe somebody spilled a Coke into the one on the night table and caused a malfunction. Or maybe steam and humidity from the adjacent shower caused a malfunction in the bathroom clock. Since the clocks can’t speak and arrive at full disclosure, it seems quite clear that the weight that one must put on the reading of each clock is about equal, and so one must suspend belief about what the time actually is.

The disagreement about the dress also leaves little to no room for consideration of which opinion is more likely to be correct. If just two individuals disagreed, they’d have at least a few things to discuss. Is one looking at the monitor from a particular angle, or in a room with a particular reflection that is affecting her perception? Is one color blind? Is one deceiving the other? But since the disagreement occurred on a global scale, all of these possibilities even out among the two disagreeing camps. Upon becoming aware of the scale of the disagreement, one really is left with no good reason to think that one perception is more likely to be correct than the other, and the rational thing to do is suspend belief. Since the weight of one perception is, as far as anyone can tell, equal to the weight of the other, the circumstances are not unlike considering a coin flip, and this is true even when both parties are disagreeing on the very private evidence of perception.

Why can’t the parties agree to disagree? For the simple reason that both parties have, in the genuine opposition of the other, a good reason to believe that their own perception is, as far as either can tell, the wrong one. Had the opposing belief resided in your own mind – a situation people sometimes find themselves in when they are torn between two equally strong but opposing beliefs – you’d be perfectly agnostic. The fact that the opposing belief resides in another mind is, as far as either can tell, arbitrary, and therefore not sufficient to render one belief more likely to be true.

So there we have it.  If epistemic peers disagree after full disclosure, and there remains no good reason independent of the disagreement itself to consider one belief more likely to be correct than the other, the rational thing to do is to suspend belief and try to find other information that will settle the question. If further deciding information is unavailable, either in principle or in practice, then the question will have to just remain open, and cognizers will just have to remain agnostic, at least until such new reasons are available. 

If you think about that for a moment, you should realize that if you accept it, you're going to have to suspend belief about a whole lot of things. This approach to disagreement leads to a significant amount of skepticism, though not, at least as far as I can see, the kind of sweeping philosophical skepticism that is intellectually crippling. We can still believe, for example, that a computer screen is in front of us, that Kennedy was assassinated in the sixties, that OJ was probably guilty (even if that belief isn't beyond all reasonable doubt) and that the gene is the unit of inheritance. But what should minimum wage be? What should be done about income inequality, anthropogenic global warming, and ISIS? Is Allah or Jesus God? These kinds of questions would seem to require the humble approach of agnosticism, and further argumentation, experimentation, and evidence. Sometimes, we are forced to act despite being agnostic, but notice that there's nothing wrong with taking a "best guess" when that's all that is available.

In part 4, I’ll apply this reasoning to a case of disagreement in the Cardiology community and explain how it is being addressed. Chime in now with your own disagreement and you just might find me addressing it in part 5, when I will consider some criticisms of approaching disagreement in the logical fashion I have been describing.