
Wednesday, November 25, 2015

"Atheists Lack Belief in God" is a Deepity

Many atheists are fond of saying that they "lack belief in God". Unfortunately, this is a vague phrase that can be read in two ways. It's widely repeated and has been the source of much confusion because it is an example of what Daniel Dennett has coined a "deepity". I have written about deepities before here and here.

A deepity is a phrase that balances precariously between two interpretations. On one reading, the phrase is true, but trivially so. On the second reading, the phrase would be profound if it were true, but that second interpretation is actually false. Somehow, the truth of the first reading seems to rub off on the second one, making it seem profound and true.

Here's Dennett's explanation:


 
Let's face it, most atheists think that God's existence is more likely false than true. What else could it possibly mean when they quote their wise sage, Carl Sagan, and tell believers that "extraordinary claims require extraordinary evidence"? To say that they lack belief is, in that case, trivially true. It's trivially true because failure to assent to the claim "theism is more likely true than false" is entailed by holding (even mildly) the opposite belief.

But the phrase "atheists lack belief in God" can also be interpreted as follows: atheists have no opinion on whether theism is true or false. Now if that's true, then there's a very profound implication that atheists seem to love: they have no burden of proof.

Oooooh.

Deep.

Unfortunately for atheists, it's false*.

Deepities are beguiling, but fallacious. Atheists, who normally take great pride in avoiding fallacies of reasoning, would do well to avoid this deepity and do something that should come easily to those who so strongly endorse rationality: they should take on the burden to defend exactly what they believe.


Philosopher Dan Dennett
*Folks for whom the latter interpretation is true include those who haven't thought about it enough, like a baby, but who in their right mind would call such a person an atheist? Isn't the term supposed to pack even a little bit of a punch? People commonly known as agnostics also have no burden to support the notion that God's existence is more likely false than true, but they do have a burden of rejoinder.

Wednesday, September 2, 2015

On Disagreement, Part 5: Objections and Conclusion

"Honest disagreement is often a sign of progress"
- Mahatma Gandhi

So far, I’ve discussed three cases (which I’ll call clocks, dress, and cardiology) that illustrate what I consider to be a more widely applicable principle (p): that the right thing to do upon discovering that an epistemic peer disagrees with one after full disclosure is to suspend belief. Today, I’d like to begin examining two lines of criticism of – or disagreement with – this proposition. I'll address them in reverse order. The first is that my proposition is self-defeating, and I will bring this up at the end of the post. The second, which I'll deal with first, comes in the form of several other cases where epistemic peers disagree but apparently, “nearly everyone supposes that it is perfectly acceptable for one to (nevertheless) hold fast - i.e. to fail to adjust credences to bring them closer to the credences of another”(1). So let's begin by examining one of several such counterexamples described by philosopher Graham Oppy, which he calls elementary math:

“Two people who have been colleagues for the past decade are drinking coffee at a café while trying to determine how many other people from their department will be attending an upcoming conference. One, reasoning aloud, says: ‘Well, John and Monima are going on Wednesday, and Karen and Jakob are going on Thursday, and, since 2 + 2 = 4, there will be four other people from our department at the conference’. In response, the other says: ‘But 2+2 does not equal 4’. Prior to the disagreement, neither party to the conversation has any reason to suppose that the other is evidentially or cognitively deficient in any way; and, we may suppose, each knows that none of the speech acts involved is insincere. Moreover, we may suppose, the one is feeling perfectly fine: the one has no reason to think that she is depressed, or delusional, or drugged, or drunk, and so forth. In this case, it seems plausible to suppose that the one should conclude that something has just now gone evidentially or cognitively awry with the second, and that the one should not even slightly adjust the credence she gives to the claim that 2 + 2 = 4.”

Well, of course it seems plausible to us, readers whose independent opinions agree with the one that 2 + 2 = 4, that something has gone cognitively awry with the second! Our opinions serve as additional evidence that alters the 0.5 probability that each disagreeing party is correct. Watch how rewording the alleged counterexample illustrates this reason for its failure:

You and your colleague of the past decade are drinking coffee at a deserted café while trying to determine how many other people from your department will be attending an upcoming conference. You, reasoning aloud, say: ‘Well, John and Monima are going on Wednesday, and Karen and Jakob are going on Thursday, and, since 2 + 2 = 4, there will be four other people from our department at the conference’. In response, your colleague says: ‘But 2 + 2 does not equal 4’. Prior to the disagreement, neither you nor your colleague has any reason to suppose that the other is evidentially or cognitively deficient in any way; and, we may suppose, you each know that none of the speech acts involved is insincere. Moreover, we may suppose, you are both feeling perfectly fine, with no reason to think that you or the other is depressed, or delusional, or drugged, or drunk, and so forth.

I hope that you agree that it no longer seems plausible to suppose that you should conclude that something has just now gone evidentially or cognitively awry with your colleague, and that you should not even slightly adjust the credence you give to the claim that 2 + 2 = 4. One of you has a problem, but as bizarre and implausible as this scenario seems, if we are to really consider it a relevant counterexample to the principle I have proposed, then neither you nor your colleague can have any reason independent of the disagreement itself to think that the other is the one with that problem. Without a reason independent of your disagreement to justify maintaining your belief, no matter how confident each of you is in being correct, it seems clear to me that you must both suspend belief until further evidence (which is, in this example, easy to acquire) sorts it out. Why? Because, as I have discussed earlier, from an epistemic perspective, neither of you has any reason to think that you are more likely to be correct than the other, so the probability at that point in time that either of you is correct – again, from each of your epistemic perspectives – is 0.5. Since assent to a belief that seems no more likely to be true than false is irrational, suspension of belief is required. Oppy provides several other examples of disagreements concerning “cognitively basic judgments” (those immediately grounded in memory or perception or elementary mathematics), but I think that they all fail for similar reasons. Essentially, if you found dress (an example of a cognitively basic judgment) convincing of my proposition, then Oppy’s other similar counterexamples should seem pretty unconvincing.

Imagine that you wake up and all of your memories and all of your intuitions tell you that 2 + 2 does not equal 4, while every other person on Earth disagrees. You hold your belief as confidently, sincerely, and intuitively as everyone else. Every time you take two oranges and put them in a box with two other oranges, you count the total number of oranges and you never get 4, while everybody else around you always does. This would be very strange, indeed, but no matter how confident you feel that you know better, I hope you can see that you simply cannot insist that you are right; you must suspend your belief or risk suffering from a delusion. On the other hand, everybody else can draw epistemic confidence in the otherwise perfect agreement that 2+2 does equal 4. Agreement really does matter since it serves to identify what sort of criteria we can use to determine what's normal and what isn't. Here's another example: if you think that killing others for your own pleasure is fine and dandy, and you can't see any problem with that, you're not a lone champion of an obscure moral truth, you're a psychopath.

Alright: I've saved the best for last. The final challenge posed to my proposition is that there appear to exist not just epistemic peers of mine, but epistemic superiors of mine who disagree with (p), including Dr. Oppy and Dr. Alvin Plantinga, among others. (There are, of course, other philosophers who agree with (p) (or something like it), but the fact that there are those who disagree is the very problem.) Unless I have some way of saving (p) from self-referential defeat, even I must suspend my own belief in it.

But I do have a way of saving (p) from this challenge, at least for now. I have argued logically for (p), and the mere disagreement of epistemic peers or superiors is not enough for me to dismiss it. Recall that (p) requires disagreement despite full disclosure. The latter requires that those who disagree explain which of my premises they disagree with and why. If after such a process, I am left with no reason independent of our disagreement to think that (p) is correct, then it seems that I will have to become agnostic regarding (p) because that's precisely what (p) requires that I do. That hasn't yet happened.

One of my interlocutors on this subject rejected the notion that when n epistemic peers disagree after full disclosure, the epistemic probability that any one of them is correct is 1/n, for that is precisely what the disagreement calls into question. I am sympathetic to this criticism, and I interpret it as indicating that it is sometimes very difficult to determine whether those disagreeing really are epistemic peers. However, there are times when this really isn't difficult at all, such as when large groups of people disagree about, for example, cognitively basic judgments, as was the case in dress. Cardiology is a good example of a similar case involving a cognitively non-basic judgment. Relevant epistemic differences will tend to even out among large groups, so if you saw the stripes on the dress as white and gold, and you knew that your spouse saw them as blue and black, you might wonder whether there was something wrong with your spouse visuo-neurologically, but when you realize that there are literally thousands of people agreeing with you, and thousands of others agreeing with your spouse, it's much clearer that there is something about the situation - about the picture itself - and not about either of the two individuals or camps that is preventing rational assent to a belief about the stripe colors. But this is just to say that the more reasons one has to think that the person disagreeing really is an epistemic peer, the more one must reduce the confidence in one's belief. Interestingly enough, it seems that applying Bayesian math to disagreements among perfectly rational cognizers leads to just this conclusion.
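
For those who like to see the arithmetic, here is a minimal sketch of that last point. It's my own toy model, not anything taken from Oppy's paper or the Bayesian literature: assume that each party's verdict on the claim is correct with the same probability r, and see what Bayes' theorem says once a single, equally reliable peer's verdict goes the other way.

```python
# A toy Bayesian model of two-party peer disagreement (an illustrative
# sketch under the assumptions stated above, not a general result).

def credence_after_peer_disagrees(r: float) -> float:
    """Posterior probability that I'm right, given one equally reliable peer's opposite verdict."""
    prior = r                      # my credence based on my own verdict alone
    p_disagree_if_right = 1 - r    # an equally reliable peer errs with probability 1 - r
    p_disagree_if_wrong = r        # ...and gets it right with probability r
    numerator = prior * p_disagree_if_right
    denominator = numerator + (1 - prior) * p_disagree_if_wrong
    return numerator / denominator

for r in (0.6, 0.9, 0.99):
    print(r, credence_after_peer_disagrees(r))   # prints 0.5 every time
```

However reliable each party is on her own, the symmetry drives the posterior to 0.5; with n parties each backing a different answer, the same symmetry gives 1/n.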

So at least for now, and at least on those occasions where one seems compelled to conclude that the disagreement really is among epistemic peers, (p) still stands. If you know where I might encounter objections to the premises leading to (p), please link or provide references in the comments below. Or even better, explain what they are in your own words. I'm keen to hear all about your disagreement . . .

(1) Oppy, G. Disagreement. Int J Philos Relig 2010; 68: 183-199.

Friday, June 5, 2015

On Disagreement. Part 4

"Who shall decide, when doctors disagree, and soundest casuists doubt, like you & me?"
-Alexander Pope

Before Angioplasty and After
Duncan was 67 years old when he had his first heart attack. Squeezing chest tightness came on Friday evening while watching ‘Wheel of Fortune’ and didn’t go away when the puzzle was solved. His wife called 9-1-1, and though the discomfort eased off a bit when the paramedics gave him a few puffs of nitroglycerine, they insisted on bringing him to the hospital. His EKG showed signs of a heart attack and an emergency angiogram showed that one of his three main heart arteries was abruptly and completely blocked, depriving valuable heart muscle downstream from much needed blood flow. The discomfort in his chest finally went away when the small balloon his cardiologist had positioned at the blockage was deflated, revealing that the clot that had been blocking the artery had broken up, restoring the flow of blood. A second inflation of a balloon was required to place a metal stent in the artery to help keep it open over time.

Duncan was lucky: the clot that formed in his heart artery and blocked it off had only been present for a few hours, so not much heart muscle damage had occurred. Untreated, a large territory of heart muscle would have been damaged, leaving him with impaired pumping function, a large scar, and a significant risk of life threatening heart rhythm problems. But the prompt opening of the blocked vessel had averted all of that. In a heart attack, time is muscle.

But Duncan wasn’t completely out of the woods.

His angiogram also showed that one of the other two main heart arteries had an 80-90% narrowing in it. Unlike the clot that blocked off his artery in a few minutes when the contestant bought a vowel, this narrowing was the result of cholesterol deposits in the artery wall that had been slowly building up over years. Interestingly enough, it hadn’t caused him any obvious problems, though in retrospect, he had had some of that chest discomfort before when playing with his grandchildren, which he’d attributed to indigestion.

Cardiologist #1 admitted Duncan to the coronary care unit (CCU) and told him that some time the following week, this other narrowing should also be treated with a balloon and stent, and that made good sense to Duncan, so he approved.

On Monday morning, cardiologist #2 took over the care of the CCU patients for that week, and after reviewing Duncan’s chart, informed him that the plan would be to continue treating him with a variety of new (to him) medications that had all been shown in randomized trials to reduce his risk of subsequent heart attacks and death. Only if he had problems with chest pain that these medications couldn’t prevent would he undergo angioplasty and stenting of the remaining 80-90% narrowing. Cardiologist #2 explained to Duncan that other randomized trials comparing angioplasty to treatment with medication hadn’t shown an improvement in survival or a reduction in heart attacks when stable patients with one narrowed artery were examined. Why undergo the small but real risks of a second angioplasty procedure if no obvious benefit seemed likely? Besides, if the artery caused problems despite medication in the future, it could always be treated with angioplasty then.

Duncan reluctantly agreed with the new plan, and as soon as cardiologist #2 left the room, he called his nurse with a few questions.

“Do these doctors know what the hell they’re doing? ... How come the first doc said that I should have an angioplasty and the second doc said that I should just take pills? ... What are these people’s credentials?"

The nurse explained that cardiologist #1 was the director of the hospital’s angioplasty program and was recognized as a researcher and leader in the field both nationally and internationally. Cardiologist #2 was the Chief of the Cardiology Department and the Director of the CCU. She was a co-author of the National Guidelines for the treatment of heart attack victims. He explained that both had many years’ experience looking after patients like Duncan, and that it wasn’t uncommon for experienced and thoughtful cardiologists to disagree about the best treatment for a given patient. He advised Duncan to make his decision about whether to undergo angioplasty or medical treatment on the basis of his personal values rather than the current state of the evidence. Does he prefer the idea of taking medicine, which is simple to do? Or would he rather take fewer medicines and not mind too much the risk of another invasive cardiac procedure?

But Duncan couldn’t accept that advice. He wanted to do what was best, not what he seemed to prefer for other reasons.

Cardiologists #1 & #2 represent the larger cardiology community on the question raised by Duncan’s situation. On the basis of their interpretation of the available evidence and experience, some recommend opening the remaining narrowings during the initial stay after a heart attack, while others, on the basis of their experience and interpretation of the same evidence, recommend treatment with medicine and opening the narrowing only if further problems arise down the road. What should Duncan do? What should Cardiologists #1 & #2 do? What should the Cardiology community do?

Should Cardiologists #1 & #2 just continue offering their advice to every patient like Duncan that they see? Should Duncan just flip a coin? Should he get a third opinion?

Isn't it obvious? It certainly was obvious to Duncan! They don’t know the answer, and further evidence is required to sort the problem out. In this case, Duncan was lucky, because the cardiology community had recognized that there was, regarding the question posed by his circumstances, a condition known as clinical equipoise. This means that the community had come to the conclusion that they ought to suspend their belief because they just don't know. In fact, a randomized clinical trial (RCT) had been developed and was enrolling patients just like Duncan, randomly assigning them to either medical treatment or angioplasty of the remaining narrowing and following them closely for the next 4 years to determine which strategy better improved survival, reduced heart attacks, and improved quality of life. The experiment aimed to recruit almost 4,000 patients.

Some philosophers disagree with the approach to disagreement that I have been arguing for so far. They conclude that it’s perfectly rational for Cardiologists #1 & #2 to disagree. But if that’s true, then it’s perfectly acceptable for each to continue treating patients as they rationally believe. If, say, Cardiologist #1 is rational to believe that the best treatment for Duncan is angioplasty, then it’s unethical for her to enroll Duncan in the trial and expose him to a 50% chance of not getting the treatment that she rationally believes is best for him, and vice versa for Cardiologist #2. If every member of the cardiology community maintained their belief in this fashion, none of them would be able to ethically enroll patients in the trial, the trial would never be completed, and the question would never be answered. It is only by recognizing and accepting that they don’t know the answer that it becomes ethical for disagreeing doctors to enroll their patients in the trial and make progress. Not only has much progress already been made this way, but time and time again, what “thoughtful and reasonable doctors” thought was the best treatment has been shown to do more harm than good when properly tested.

Doctors should be accurate with their patients, and that often means being humble about the community's state of knowledge and their own. They should recognize the limits of their personal assessments based on experience. They should tell patients when there is significant reasonable disagreement, and how confident they are of their advice and why. They should fairly often be saying things like “probably”, “possibly”, “we really don’t know”, “our best guess at the moment is”, etc., and patients shouldn't get upset with their doctors when they honestly just don't know.

Cardiologists #1 & #2 should both tell Duncan that they really don't know what should be done about his remaining 80-90% artery narrowing. They should be free to tell Duncan that each has a hunch about what course of action would be best, but that that's all that they have: a hunch. And this is how the rest of us should behave when faced with the reasonable disagreement of our epistemic peers. Admitting that there is a problem with our belief, a problem big enough to justify suspending a previously held belief, is the first step towards making sure that our beliefs, and the strengths with which we hold them, accurately map onto reality.

Notice that Cardiologists #1 & #2 should suspend their belief whether or not a randomized trial addressing the question exists, for the existence of the trial in no way affects their inadequate justification. They should suspend belief before the trial exists, while it is being planned, and until the results are published and shown to warrant one approach over the other.

Next time, I'll be looking at a few objections to the approach to peer disagreement that I have been advocating. Have you got any? Do you disagree?

Wednesday, May 13, 2015

On Disagreement. Part 3


So far in this series, I’ve considered two straightforward instances of disagreement and argued that in each instance, the rational thing to do because of the disagreement is to suspend belief (see here and here). Today, I’d like to summarize what I think are the circumstances where disagreement requires suspension of belief.

Quite simply, one should suspend belief whenever, as far as one can know (from an epistemic perspective), the probability that the belief is true is roughly equal to the probability that it is false.

Not all disagreement presents such a situation. For example, Dr. Rik Willems is an expert in the treatment of slow heart rhythm disorders with cardiac pacemakers. If a first-year medical student on her first clinical cardiology rotation thinks that a patient should have a pacemaker implanted, and Dr. Willems disagrees, the probability that Dr. Willems is right is considerably greater than the probability that the medical student is. After all, medical students are supposed to get their plans for patients vetted by attending physicians, not the other way around!

Dr. Willems and the medical student are not epistemic peers. That is, they are not in equally good positions to make judgments about pacemaker therapy. This is not to say that just because Dr. Willems is in a superior position to make such judgments, his opinion must be right. The rational thing for him and the student to do is explain to each other the reasons for their opinions. Maybe Dr. Willems has contracted viral encephalitis and evidence of his cognitive dysfunction will be disclosed in the conversation. More likely, however, the medical student has missed an important detail of the patient’s situation, or misinterpreted the available evidence addressing pacing in that situation. This conversation comprises a process known as “full disclosure”; it represents the best possible attempt for disagreeing parties to consider and share the reasons for their own belief and the reasons for the opposing belief. In many such instances, the reasons on one side of the disagreement really will be better and the disagreement will be resolved. We can all, medical students included, learn a great deal this way, even though not all disagreements end so educationally and amicably.

The disagreeing clocks left little to no room for consideration of which time reading was more likely to be correct. Electronic quartz clocks these days are all remarkably accurate, so these two machines are “epistemic peers”. Maybe one had suffered a power loss that the other had not. Maybe somebody spilled a Coke into the one on the night table and caused a malfunction. Or maybe steam and humidity from the adjacent shower caused a malfunction in the bathroom clock. Since the clocks can’t speak and arrive at full disclosure, it seems quite clear that the weight that one must put on the reading of each clock is about equal, and so one must suspend belief about what the time actually is.

The disagreement about the dress also leaves little to no room for consideration of which opinion is more likely to be correct. If just two individuals disagreed, they’d have at least a few things to discuss. Is one looking at the monitor from a particular angle, or in a room with a particular reflection that is affecting her perception? Is one color blind? Is one deceiving the other? But since the disagreement occurred on a global scale, all of these possibilities even out among the two disagreeing camps. Upon becoming aware of the scale of the disagreement, one really is left with no good reason to think that one perception is more likely to be correct than the other, and the rational thing to do is suspend belief. Since the weight of one perception is, as far as anyone can tell, equal to the weight of the other, the circumstances are not unlike considering a coin flip, and this is true even when both parties are disagreeing on the very private evidence of perception.

Why can’t the parties agree to disagree? For the simple reason that both parties have, in the genuine opposition of the other, a good reason to believe that their own perception is, as far as either can tell, just as likely to be the wrong one as the right one. Had the opposing belief resided in your own mind – a situation people sometimes find themselves in when they are torn between two equally strong but opposing beliefs – you’d be perfectly agnostic. The fact that the opposing belief resides in another mind is, as far as either can tell, arbitrary, and therefore not sufficient to render one belief more likely to be true.

So there we have it.  If epistemic peers disagree after full disclosure, and there remains no good reason independent of the disagreement itself to consider one belief more likely to be correct than the other, the rational thing to do is to suspend belief and try to find other information that will settle the question. If further deciding information is unavailable, either in principle or in practice, then the question will have to just remain open, and cognizers will just have to remain agnostic, at least until such new reasons are available. 

If you think about that for a moment, you should realize that if you accept it, you're going to have to suspend belief about a whole lot of things. This approach to disagreement leads to a significant amount of skepticism, though not, at least as far as I can see, the kind of sweeping philosophical skepticism that is intellectually crippling. We can still believe, for example, that a computer screen is in front of us, that Kennedy was assassinated in the sixties, that OJ was probably guilty (even if that belief isn't beyond all reasonable doubt), and that the gene is the unit of inheritance. But what should the minimum wage be? What should be done about income inequality, anthropogenic global warming, and ISIS? Is Allah or Jesus God? These kinds of questions would seem to require the humble approach of agnosticism, and further argumentation, experimentation, and evidence. Sometimes, we are forced to act despite being agnostic, but notice that there's nothing wrong with taking a "best guess" when that's all that is available.

In part 4, I’ll apply this reasoning to a case of disagreement in the Cardiology community and explain how it is being addressed. Chime in now with your own disagreement and you just might find me addressing it in part 5, when I will consider some criticisms of approaching disagreement in the logical fashion I have been describing.

Monday, May 11, 2015

On Disagreement. Part 2


Here's a pretty dull picture of a dress with gold and white stripes, right? As you probably know, this past February, a tumblr user posted this photo and it went viral. Why? Because of disagreement.

If, like me, you saw gold/white stripes, then you were rational to believe that the stripes actually are gold/white. But what are you rational to believe when you realize that a huge population of people disagree with you? While looking at the same picture, they see black and blue stripes. How does your awareness of that disagreement influence the rationality of your original belief?

It turns out that the dress in the photo has been identified, so evidence that will settle the question exists. However, while one is aware of the genuine disagreement and before one is aware of what that definitive evidence shows, we can and should ask the following questions:

- Are those who see gold/white stripes rational in continuing to believe that the stripes are gold/white?

- Are those who see black/blue stripes rational in continuing to believe that the stripes are black/blue?

- Or should both camps suspend belief and conclude that there is something fishy about this situation - something that's preventing either group from rationally forming a belief about the actual colors of the stripes?

It seems to me that just as the disagreeing clocks in my previous post prevent rational belief regarding the actual time, so does the disagreement that captivated the world-wide-web prevent rational belief regarding the actual colors of the stripes on the dress. Until further evidence is available to settle the question, anybody who insists that the dress stripes actually are as they appear to them in the face of that disagreement is just engaging in special pleading.

The definition of arrogance is displaying a sense of superiority, self-importance, or entitlement. Without a reason for one group to think that their perception of the dress colors is more likely to be correct than the other, any member of each group who is aware of the genuine disagreement that exists, yet who insists that the colors actually are as they appear, is being arrogant. The humble thing to do here is the epistemically right thing to do, and that is to recognize that one simply can't rationally believe that the dress colors are as they appear. Not, at least, until further evidence settles the issue. The rational thing to do here is to remain agnostic on the question, despite the deliverance of your senses.

Let me explain. Assent to a belief is only rational when it is more likely that the belief is true than false. Since there is no reason to think that one group is more likely than the other to have true beliefs about the dress stripes, the principle of insufficient reason (also known as the principle of indifference) suggests that the probability that either group is correct is no better than 0.5 (after all, both groups could be wrong). Accordingly, the genuine disagreement in this case prevents rational belief. Again, the rational thing to do is to remain agnostic about the dress. One could humbly say that one's best guess is that the colors are as they appear to one, but one would not be rational to claim to believe that the dress colors actually are as they appear.
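
To make the arithmetic explicit, here is my own formalization of that indifference reasoning (a sketch, not a quotation of the principle):

```latex
% Two camps, A and B, with no reason to privilege either, and the
% possibility that both are wrong:
P(A\text{ is right}) = P(B\text{ is right}) = q, \qquad
q + q + P(\text{both wrong}) = 1
\;\Longrightarrow\; q = \frac{1 - P(\text{both wrong})}{2} \le \frac{1}{2}.
```

Since assent requires a probability greater than 0.5, neither camp clears the bar.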

What if someone perceiving the colors as white/gold were to think to themselves something like this: "Maybe those people seeing black/blue stripes have something wrong with their visual systems? Maybe they are falling prey to an illusion? Accordingly, I can rationally continue to believe that I'm right and they are wrong." Would this kind of argumentation provide a good reason for rationally maintaining the belief that the dress colors actually are white/gold?

Well, if those seeing black/blue stripes are falling prey to a visual illusion, then those seeing white/gold stripes are rational to continue to hold their belief that the stripes are white/gold, but the disagreement calls the antecedent of that very conditional into question! Assuming that the other group is the one falling prey to a visual illusion is a classic case of begging the question (also known as circular reasoning). To avoid this fallacy, one would have to avoid assuming that the other group is likely to be wrong; that is, one would need to have reason(s) independent of the disagreement itself to believe that the other group is likely to be wrong.

I've now considered two instances of disagreement: (a) quartz clocks displaying different times, and (b) two large groups of people disagreeing about the colors of a dress in a photo. In both instances, there was no good reason to think that one clock, or one group, was more likely to be correct than the other, and in both instances, assent to belief was irrational, or so I have argued.

Next time, I'll try to summarize what I think are the logical principles involved in considering how disagreement should affect the rationality of one's belief(s). This is the time to chime in if you think that I've made some mistake in my reasoning so far. This is your time to disagree.

By the way, here's a picture of the actual dress in the photo:

And here's a link to a great discussion of the explanation for this disagreement by Canadian experimental psychologist and cognitive scientist, Dr. Steven Pinker.

Sunday, May 3, 2015

On Disagreement. Part 1



Suppose you’ve just completed a series of flights lasting more than 20 hours to reach an exotic location. You’re exhausted. When you get to your hotel, you close the curtains tight, curl up in a cool, crisply made bed, and finally fall into a delicious sleep. After what seems like an eternity, when you stir again, you crack open one eye and see that the bedside alarm clock reads 07:00 am. Refreshed and remembering that you have a busy day ahead, you pop out of bed, planning your day.

When you reach the bathroom to take a shower though, something strange catches your eye. The clock on the bathroom wall says 09:45 am.

Hmmm.

Up until you walked into the bathroom, it was quite reasonable (i.e., rational) for you to believe that it was 7 am. What’s it now rational for you to believe upon seeing the virtually simultaneous reading of 09:45 am on the second clock?

I don’t give a flying fruit what the actual time is, and I don’t mean for a second to suggest that if the scenario I just posed were to actually happen in real life, one ought to go back to bed and deliberate at length about the question I asked. There’s no doubt that you can just pick up the phone and ask the front desk what time it is, or check your smart phone that synchronizes automatically over WiFi. 

Boring. 

I care about what it’s rational to believe before sorting the problem out. Why? Because many disagreements that we regularly face are not so easily resolved and it is precisely those that are the most interesting and challenging disagreements to handle. For example:  “I thought we should pay down our mortgage, but my sister said it’s better to save for retirement.” “I really think I should marry him but my parents think otherwise.” “My cardiologist thinks I should put off having my valve replacement surgery, but the cardiac surgeon said that the operation is called for now.” I suggest that there might be something for us to learn from simple cases of disagreement that we might - no, we should - apply to the more complicated and important disagreements with which life is brimming.

So please stop and consider for a moment what impact the disagreement between the two clocks has on what one can rationally believe. Remember, you were rational to believe that it was 7 am right up until you saw the second clock. Should you (a) continue to believe that it’s 7 am, concluding that the second clock must be wrong? Should you (b) believe that it’s 9:45 and conclude that the first clock must be wrong? Should you (c) think that it’s probably half way between the two times (8:22:30)?  Should you (d) believe that you have no idea at all what time it is? Should you (e) believe that it’s probably morning?
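
(If you want to check the arithmetic behind option (c), here's a throwaway sketch and nothing more:)

```python
# Midpoint between the two clock readings (purely to verify option (c)).
from datetime import datetime

bedside = datetime(2015, 5, 3, 7, 0)    # 07:00 am; the date is arbitrary
bathroom = datetime(2015, 5, 3, 9, 45)  # 09:45 am

midpoint = bedside + (bathroom - bedside) / 2
print(midpoint.time())  # 08:22:30
```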

You probably felt compelled to seek out further information as you contemplated the situation I posed, and that should be a good indication that (a) and (b) are not reasonable. After all, there’s no reason to think that one clock is more likely to be correct than the other. Perhaps it’s reasonable to believe that it’s morning, but notice that had the second clock read 7 pm, you’d be completely lost and you’d have to conclude with (d).

It seems obvious to me – a fact of rationality itself – that the awareness of the disagreement of the second clock must dramatically reduce the confidence that one rationally had in initially believing that it was 7 am.

In Part 2, we’ll explore some more complicated disagreements, but this is an important time to chime in if you think that my conclusion is mistaken. I’ll repeat it one more time: the instant you become aware of the significant and mutually exclusive disagreement of the second clock, you have a very good reason to drop your belief that it’s 7 am. You suddenly have a very good reason to doubt that you can tell anything reasonable about the time, except that maybe, it’s morning, and that'll just have to do until you gather information that will settle the question. I think that if you agree with me here, you’ll have to admit that disagreement ought to have a much greater impact upon the confidence we have in our beliefs than it seems to have. Join me in the rest of this series on disagreement to see if I’m right, or if you disagree!

Saturday, February 28, 2015

Should We Accept Revelation?


My friend and most excellent high school teacher, Johnston Smith, will not stop insisting that "there is more in heaven and Earth than is dreamt of in my philosophy," despite my protests (here and here). Johnston taught me debating and more recently has been a keen defender of his Catholic beliefs in a very enjoyable and charitable exchange of ideas with me. Today, I'm going to present a different argument in the hope of halting his use of this popular canard. But first, let me clarify what my "philosophy" actually is.

I’m an evidentialist: I believe that that which is rational to believe is that which is justified by reason and evidence*. Justifications can't be infinite; they are ultimately founded upon that which we can perceive and remember. I don’t think that all of our perceptions and memories are reliable, but cataloguing all of the exceptions is neither within the scope of this post nor required. For today, all that one must understand about my epistemology is that perception and memory can be directly justified (aka properly basic).

Here's a simple and familiar example of what I mean. I look at the kitchen table and see two boxes on it. The belief that there are two boxes on the table is directly justified by my seeing them. When I say that that belief is directly justified, I mean that I don't ordinarily need any additional evidence to know that there are a couple of boxes on the table; my perception is enough. I bet that you, like pretty much all humans, form rational beliefs like this all the time.

Now, I have to be open to information that could change my mind about that belief. Perhaps there is one box on the table and a mirror that makes it look like there is a second one, too. Upon learning that that was the case, I’d have a defeater for my belief that would force me to change my mind. Quite simply, reason dictates that it's not possible to rationally hold both beliefs: (1) that there are two boxes on the table and (2) that there is one box plus a mirror that merely makes it look like there are two. But barring any such tricks or problems with my vision, I can know that there are two boxes on the table without needing any additional evidence or information. That belief can then serve as evidence for subsequent beliefs. If my son tells me that he was alone playing with some toy boxes in the kitchen a few seconds ago**, I can conclude with the rational belief that he probably left them there, and so on. Our perceptions can be directly justified and serve as foundations for rationally held beliefs.

But Johnston thinks that there is more to be perceived than that of which we normally think when we consider the familiar human senses of vision, hearing, vibration, temperature, etc. Johnston thinks that we can also include alongside those perceptions a way to perceive God and/or the Holy Spirit commonly known as revelation. In his words:

"All epistemology is based upon faith: the skeptic has faith in reason and perception; the theist has faith in revelation and reason . . . While the skeptic believes that perception is the only reliable source of data for his reason to consider, the theist does not say that it is only the senses which supply reliable data. Paul of Tarsus writes, "Eye has not seen nor has ear heard the wonders God has in store for those who love him." As I have suggested before, it is just as "rational" to accept sources of data other than the senses as it is not to accept them . . . There are many realms of knowledge, each with its own standard of proof. Science uses observation & the empirical guided by reason; law uses evidence (including witnesses) guided by reason; history uses the record guided by reason; theology uses revelation guided by reason."

I think that Johnston has work to do to make the case that revelation can be directly justified the way that other sense perception can, but I haven’t made the case that sensory perception can be directly justified today, and I’m not going to get into it at this moment (though this paper by Christian philosopher James Sennett is the bomb), so it would be unfair of me to demand that Johnston do so now. Rather, for the sake of argument, I’m going to accept that he may be right about that. It’s equally fine if Johnston thinks that accepting revelation is merely a matter of faith, as he has suggested. What’s important is that we both accept reason, as he admitted above. Remember that reason provides us with an epistemic duty to reject a belief when a defeater for that belief is identified. I intend to present a defeater based on reason for Johnston’s claim that revelation can lead to directly justified beliefs.

Let’s return to those boxes on the kitchen table, a belief that was, for me, directly justified by visual perception. Now imagine that you look at the table and see three boxes, instead. What if three others say that they see four, five, and six boxes on the table, and two others see none at all? There are no smoke and mirrors operative, and all seven of us are healthy, neurologically normal human beings.

Breathe this thought experiment deeply in and really imagine that you look and see three boxes. What are you going to conclude? Do you believe what your eyes are telling you, knowing what I and the others are or aren’t seeing? Or do you have enough doubt about your initial belief as to lead you to reject it and concede that something fishy is going on, something fishy enough to require you to refrain from making any rational claims about what’s actually on the table?

It seems rather obvious to me that it would be the height of arrogance to insist, against the knowledge of what one’s epistemic peers are perceiving differently, that what one, personally, is perceiving, forms a rational belief.

What if the person seeing six boxes says that she prefers a world where there are six boxes on the table because she needs them to wrap six presents in? Do preferences or conveniences make beliefs rational?

The humble thing to do in this situation is the rational thing to do, and that is to recognize that something about this situation is undermining the ability to perceive what’s on the table. None of us seven people can, without additional evidence or reason, rationally form any beliefs about how many boxes are on the table let alone whether any boxes are on the table at all.

Notice that I’m not saying that your perception of three boxes is false. (I'm not making a de facto objection). You may be the right one among us. The two who see none may be right. Or maybe none of us is right. I’m saying that forming a rational belief about what's on the table based upon our vision isn't even possible. (I'm making a de jure objection).

And so it is with revelation. People born in India or Iran are Johnston’s epistemic peers, yet they almost exclusively have and/or believe in revelations about multiple gods in the former country and a different but no less mutually exclusive god in the latter. If you were an ancient Greek, you might well have had a theophany of Zeus or Apollo. And what about me and countless others like me? I was once a Christian who tried, genuinely tried, yet my relationship with god was always a one-way street. I’ve never had any inkling of any revelation of any god, whatsoever^.



What must Johnston conclude about this state of affairs? Is it epistemically acceptable for him to claim that belief founded upon revelation is rational? Can Johnston successfully argue that he and certain other Christians are somehow epistemically superior to the rest of humanity without resorting to special pleading?

One thing that Johnston can't do is ignore this defeater by claiming that it's based upon a disanalogy in that the truth regarding how many (if any) boxes are on a table is normally amenable to an examination of empirical evidence while the truth regarding revelation and God are not. Why not? First, there is no disanalogy: the reason that the 7 observers are in trouble is that there is no way to answer their question with empirical evidence because none is available to them. They're stuck that way, just as are those who claim vastly different and mutually exclusive revelations. Second, that objection is about evidence, but the defeater posed by the analogy is entirely about reason.

Isn't it time for Johnston to admit that his belief in revelation, and Christian revelation in particular, is irrational? Claiming that these beliefs are based on faith doesn't somehow solve that problem; it's synonymous with it. Alternatively, he must defeat the rationality defeater I have presented, or provide other evidence or argumentation indicating why revelation is rational and true.

But I don't see how he can.


*Please notice that I'm not just talking about scientific evidence, here. The kinds of evidence a historian might accept, or a judge in a court of law, are both included in evidentialism, where the strength of the belief is apportioned to the strength of the relevant evidence. Accordingly, my epistemology does not amount to scientism.

**I'm actually quite sympathetic to the idea that testimony can also be directly justified in certain situations, just as sense perception can.

^A reminder is required here: I don't conclude on that basis that revelation isn't rational or that god probably doesn't exist.

Tuesday, July 8, 2014

Are there more things in Heaven and Earth, Horatio?


Is the supernatural beyond reason and evidence, safely protected from human investigation? And is God, a supernatural entity, therefore, off limits for "empiric and rationalist" considerations?

I'm going to begin by answering the first question as it might relate to the alleged healing powers of acupuncture. Then, I'll draw those threads together and show how the same considerations actually apply to God, too.

The idea behind acupuncture is that the pathophysiology of the disorder being treated includes an unhealthy bodily flow of Qi ('Chi') - said to be a type of living or vital energy.   The placement of subcutaneous needles in specific locations is supposed to restore the normal flow of Qi, helping to heal the disorder in a biologically active way.

Qi is an ancient and intuitively appealing concept. There must be some important difference between living organisms and dead ones (or inanimate objects), after all, so why wouldn't it be a mysterious quantity and why not call it a type of energy since, in other circumstances, energy is invisible except for the things it makes happen? But we now know better. Vitalism has been thoroughly and completely discredited by science. Life is driven by the usual types of energy that are all well described - the same kinds of energy that drive all chemical reactions, only in the case of life, those reactions maintain homeostasis for at least enough time for a given species to reproduce. No additional or special type of energy is required to explain life, and no reputable or serious biologist thinks otherwise, anymore.

Nevertheless, millions of people still believe in Qi and acupuncture. "Science just hasn't discovered a way to detect or measure Qi, yet," they tell us. But that is a big, smelly, red herring.

It actually doesn't matter whether we can detect or measure Qi itself, for we are told that Qi and its manipulation have effects in this world that are measurable. We are told that acupuncture has the measurable effect of healing people. Accordingly, we can conduct randomized controlled trials where people are randomly assigned to real or sham acupuncture treatments (where the needles don't actually penetrate the skin and are placed at random locations by non-acupuncture practitioners). If acupuncture works, we can make a prediction: the people getting real acupuncture treatments should improve more quickly and/or more thoroughly than the people getting sham acupuncture treatments. It doesn't matter one iota that we can't measure Qi or its flow patterns directly. All that matters is that we can predict and measure the alleged effects of Qi here and now, in the natural world.
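
Here's a minimal sketch of what that prediction looks like when the data come in. The numbers are invented purely for illustration; real trials enroll hundreds or thousands of patients and use proper statistical tests:

```python
# Toy comparison of improvement scores in two randomized groups
# (hypothetical data, made up for illustration only).
from statistics import mean, stdev
from math import sqrt

real = [3.1, 2.8, 3.5, 2.9, 3.2, 3.0, 2.7, 3.3]   # real acupuncture
sham = [3.0, 2.9, 3.4, 2.8, 3.1, 3.2, 2.6, 3.2]   # sham acupuncture

diff = mean(real) - mean(sham)
se = sqrt(stdev(real) ** 2 / len(real) + stdev(sham) ** 2 / len(sham))
print(f"difference in mean improvement: {diff:.2f} (standard error {se:.2f})")

# If acupuncture were biologically active, the difference should be clearly
# positive; if the groups are indistinguishable, it isn't.
```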

Well, studies of this kind have been done over and over, and I'm afraid that it doesn't look good for acupuncture. The outcomes in the two groups are largely indistinguishable. One clearly shouldn't think of acupuncture as doing anything biologically active beyond the power of suggestion. While this doesn't disprove the existence of Qi, it certainly proves that acupuncture as a way of beneficially manipulating Qi is useless. Maybe there are other ways of doing so, but until those are discovered, the idea of Qi adds nothing to our understanding of illness and health, and there's absolutely no reason to believe that it does exist.

It's possible that Qi has nothing to do with anything in the natural world. Perhaps it's purely supernatural. But, if it is, then it's of no consequence here, and should be of no concern to anybody; the whole idea is without meaning. If it does have consequences in the natural world, then those consequences should be measurable or detectable.

And so it is with all supernatural claims, including the existence of the Christian God. Maybe we can't directly detect the Christian God, but we can reason from God's alleged qualities to predictions about the way the world ought to be, and then look for evidence of whether the world is as we'd expect, or not. We would expect a universe where the Christian God exists and is omnipotent, morally perfect, perfectly loving, desirous of a personal relationship with us, etc. to look quite different from a world where no such God exists. For example, if the Christian God exists, we wouldn't expect any gratuitous natural suffering, yet we see a world that appears to be overflowing with it. We wouldn't expect there to be billions of non-believers clustered within borders explained by natural and haphazard factors like politics and conquest. If evidence gathered in the world is better explained by the non-existence of God, then the existence of God should seem much less likely to us. At the very least, it should cause us to become very skeptical of the alleged qualities that those failed predictions are based upon. There are very many predictions made by Christian theism that can be tested here on Earth, and I'm afraid that, like acupuncture, the situation doesn't look good at all.

Perhaps the Christian God doesn't exist, but a different God who lives entirely in a theoretical supernatural world does. Perhaps we know nothing about the qualities and capabilities of such a God and he never interferes in this world in any way. This pretty much describes what deists believe. This type of God really is beyond investigation by reason and evidence, but what a useless belief! A universe where this God exists is no different in any way than one where such a God doesn't exist.  It might as well not exist at all.

This reminds me of the parable of the invisible gardener, by John Wisdom:
"Two people return to their long neglected garden and find, among the weeds, that a few of the old plants are surprisingly vigorous. One says to the other, 'It must be that a gardener has been coming and doing something about these weeds.' The other disagrees and an argument ensues. They pitch their tents and set a watch. No gardener is ever seen. The believer wonders if there is an invisible gardener, so they patrol with bloodhounds but the bloodhounds never give a cry. Yet the believer remains unconvinced, and insists that the gardener is invisible, has no scent and gives no sound. The skeptic doesn't agree, and asks how a so-called invisible, intangible, elusive gardener differs from an imaginary gardener, or even no gardener at all."
I'm afraid that existential questions seem to always and only boil down to reason and evidence. When there is reason and evidence, a conversation can be had about their merits and meanings.  When reason and evidence are unavailable in either principle or practice, we have a meaningless claim that terminates the conversation.

Believing despite insufficient reason and evidence - believing on faith - is propped up as being incredibly valuable, but why? What's so great about faith? It seems to me that in every domain of human discourse other than religion, believing on faith is rightly frowned upon. Would you cross a street on faith without looking to see if a bus is coming? Look at what believing in the existence of God on faith gets us: thousands of Gods and traditions, most of which are mutually exclusive, and balkanized doxastic communities with a horrible and ongoing history of intolerance, discrimination, and slaughter in the name of those beliefs. This is supposed to be the zenith of human understanding and the path to the most important 'truths' in the universe? What could the word 'truth' possibly mean in that sentence without reason and evidence?

There may well be plenty more in Heaven and Earth than is dreamt of by reason and evidence, but without them, I'm afraid it's all just "Words, words, words." (Shakespeare - Hamlet).

Tuesday, July 1, 2014

There are more things in Heaven and Earth, Horatio...

Keanu Reeves as the Prince of Denmark during Winnipeg's MTC production in 1995. Check out that passion.

I recently had a nice discussion with some friends about the challenges of identifying as Catholic given numerous problems that flow from standard Catholic doctrine combined with the hypocrisy and scandal within the organization. Along the way, I received this comment:

You seem to operate out of a rationalist-empiricist framework. And there is of course nothing wrong with that when one is considering matters subject to the analytical benefits of this sort of world-view. But I suggest that there is plenty of human endeavor and human interest in matters not well suited to this sort of analysis. I mean, would you really mock John Keats because figures on an urn are not really “frozen” there? Or would you smirk at Bob Dylan because the times don’t really change? Would one do a cost/benefit analysis of caring for one’s child?

… since God and belief in God are human preoccupations based upon faith, they are not subject to rationalist/empiricist argumentation. You may call it “irrational” but I might propose calling it “hyper-rational.”

The idea here is that there are matters that are not subject to reason and evidence and that the question of God's existence is one of those matters.

If true, then one's belief in God simply cannot be questioned or challenged. This is a big & bold claim that serves to insulate one's belief from criticism.

Sweet.

But imagine the defendant in a murder trial asserting that there are matters not subject to reason and evidence and that the question of his guilt is one of those matters. The incoming tide of further accusations he would face from judge, jury, and the wider court of public opinion would surely include arrogance, stupidity, and foolishness. I mean, it'd be a great move if one could pull it off, but can one really ever pull it off? What exactly are these matters that are not subject to reason and evidence, and is the question of God's existence really one of them?

Consider a couple of questions. Do reason and evidence explain why we fall in love with whom we do? Can reason and evidence explain why I don't like anchovies? Emotional matters and personal tastes cannot (yet!) be satisfactorily explained by reason and evidence, but it would be absurd to suggest that questions about the existence or nonexistence of certain entities (or the guilt or innocence of those charged) can be answered using emotions and personal preferences, wouldn't it? Would you believe in the existence of Bigfoot on the basis of emotions or personal preferences?

What I think is lurking behind the comments I quoted is that God is supernatural. That's relevant because it is widely believed that the supernatural is a matter that, perhaps forever (but at least for the moment), lies beyond (and is therefore not subject to) reason and evidence. The supernatural is "hyper-rational". Not surprisingly, I have heard similar claims made by people defending alternative medical treatments. Here's what a friend of mine had to say about the paucity of high-quality evidence supporting acupuncture and the wealth of evidence indicating that it's nothing more than the power of suggestion:
" in the chaos of nature, there is more that we don't understand than do, and it is arrogant to think that controlled experiments in hermetically sealed labs can in any way replicate the chaos and uncertainty that occurs in nature."
Ahh, yes. I've heard that first part before, somewhere ...
"There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy." - Shakespeare, Hamlet
My advice to you? Beware when you encounter this quote. The person using it is probably feeling that their belief is under some significant pressure and the best way to relieve it is to suggest that it's beyond Earth, Heaven, and even reason itself. You can be sure that if reason and evidence were available to establish their belief, you'd be hearing all about it (this is known as apologetics), but when the belief persists in the face of insufficient reason and evidence (the definition of faith?), then this old canard may make an appearance in the conversation. And notice that it is a conversation stopper:

"I'm talking about something that's beyond you, beyond everything, and neither you nor anybody can touch it."

This attitude - that one is in possession of information that transcends reason and empirical evidence - can be harmless, but it can also lead people to accept ineffective treatments when effective ones exist (see here and here), and it lies behind religiously motivated discrimination and the slaughter of infidels and perpetrators of imaginary crimes, the most horrible and tragic aspects of faith-based beliefs.

So this important question still stands: Is the supernatural off-limits for reason and evidence? Chime in and let me know what you think. I'll be responding in a few days.

Tuesday, March 4, 2014

How to Want to Change Your Mind


The incomparable Julia Galef with what I think is some great epistemic advice.

What big ideas have you changed your mind about?

Monday, February 17, 2014

Is Atheism Irrational?


A couple of weeks ago, Dr. Gary Gutting, a Catholic philosophy professor at the University of Notre Dame (which is also Catholic), published in the NY Times Opinion Pages an edited email interview he had with Dr. Alvin Plantinga, who is also a Christian philosophy professor from Notre Dame. This, the first of a number of interviews by Gutting about religion, was provocatively titled "Is Atheism Irrational?"

In the next few posts, I’ll be responding to what I consider to be some of the amazing claims that seem to be made in this interview, but why should anybody care about what Plantinga has to say in the first place?


Atheists are frequently criticized for failing to address sophisticated arguments, so Plantinga’s opinions are important because he arguably represents the pinnacle of sophisticated – philosophically sophisticated - Christian apologetics.

So please have at the interview, and ask yourself if you agree that atheism (as opposed to agnosticism or even theism) is an irrational belief. I’ll share my thoughts about Plantinga’s views this week.