Friday, January 31, 2014

Probability, likelihood, and the evidence for the existence of oh, god, is this going to be about Bayes' Theorem?

I've been thinking about why so many otherwise reasonable people (including my former self) might view the appearance of design in the structure of the world as a manifestation of the handiwork of some deity. After all, it's pretty clear that scientific theories can (at least in broad strokes) account for the origins of the universe as we know it and the development of life on earth without recourse to any kind of intelligent design. Beliefs come in various degrees of certainty, and normally our assent to belief systems is at least partially guided by how well those systems account for the facts around us.

We're normally OK at doing this as people. Not great, but OK. Good enough to get around in the world without being eaten by a tiger or falling off of a cliff or something. So, the problem becomes: how is it that people who are able to get around in the world based on beliefs formed from the evidence around them end up getting this one wrong? In other words, why do people find the idea of intentional design such a satisfying way of interpreting the world around them?

Who knows of a better way to think about this issue than probability theory? 

Nobody? 

Good.


One way of trying to come to terms with the interpretation of evidence in general, and the relative weight people put on a given bit of evidence, is to fit it within the framework of Bayes' Theorem, which is a method of assessing the relationship between the probability of a hypothesis and its counterhypothesis given a bit of information. Getting a little clarity about how probability claims are logically structured can help us determine whether the intuitive plausibility of the design hypothesis can really support the weight it's asked to bear in theological discourse.

In its simplest form, Bayes' Theorem is this: 
P(H|E) = [P(E|H) × P(H)] / P(E).
To translate this into English: the probability (P) of a hypothesis (H) given a bit of evidence (E) is equal to the likelihood (P(E|H)) that we'd see the evidence in question if the hypothesis were true, multiplied by the prior probability of the hypothesis (P(H)), and then divided by the prior probability (P(E)) that the state of affairs described by our evidence obtains independently.

An example would probably be helpful. Let's say you've tested positive for a disease (let's call it "hot dog fingers") that affects 1 in 100 people. You also know that the test has an 80 percent true positive rate (which means that 8 out of every 10 people with hot dog fingers are identified by our test) and a 9 percent false positive rate. So how do we use Bayes' Theorem to calculate the probability that you actually have hot dog fingers?

The likelihood of the positive test if you have hot dog fingers (our P(E|H) above) is .80, and the prior overall probability [P(H)] that you have the disease is .01, since it affects 1% of the population. To figure the overall probability that you'd get a positive result [P(E)] you'd need to add the probability of a true positive (.80 × .01) to the probability of a false positive (.09 × .99). So, that gives us the following for P(H|E), which is the probability that your fingers are in fact made of hot dogs:
P(H|E) = (.80 × .01) / [(.80 × .01) + (.09 × .99)] ≈ .082, or 8.2%
The positive test tells us that you only have an 8.2% chance of actually having hot dog fingers. Counterintuitive until you think about it like this: the actual probability that your positive test means you have the disease is really just the proportion of true positives out of the total number of positives. So it's conceivable that even a test with a high degree of accuracy would yield more false positives than true positives for a rare enough disease. In our example, if we tested a sample of 1,000 people, we'd expect ten of them to have the disease if our sample is representative. Our test would identify 8 of them, based on the 80% true positive rate. But the 9 percent false positive rate would also flag about 89 of the 990 healthy people. Eight true positives out of roughly 97 total positives is 8.2%.
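If it helps to see that arithmetic laid out, here's a minimal Python sketch of the same calculation. Nothing about it is canonical; the function name and structure are just mine:

# Bayes' Theorem applied to the hot dog fingers example above.
def positive_test_posterior(true_positive_rate, prior, false_positive_rate):
    # P(disease | positive test): true positives out of all positives.
    p_true_pos = true_positive_rate * prior           # P(E|H) * P(H)
    p_false_pos = false_positive_rate * (1 - prior)   # P(E|not-H) * P(not-H)
    return p_true_pos / (p_true_pos + p_false_pos)    # divide by P(E)

print(positive_test_posterior(0.80, 0.01, 0.09))  # 0.0824..., about 8.2%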

OK. So, what does any of this have to do with the argument from design?

While it isn't always structured explicitly in terms of probability, the design argument gains its plausibility because of an implicit appeal to probability considerations.  It asks us to look at the apparent design in the world and compare the hypothesis that the design is a real feature of the world to the hypothesis that the order we see is in fact the result of an accumulation of natural causes. 

But if we use Bayes' Theorem to structure that claim more carefully, we can see there are really only three places where we can increase the overall probability of the design hypothesis relative to its naturalistic alternative: by decreasing the independent probability of the evidence (P(E)), by increasing the value of the prior probability (P(H)), or by increasing the value of the likelihood (P(E|H)).
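To make those three knobs concrete, here's a quick Python sketch. The numbers are purely illustrative and aren't meant to estimate anything:

# Three ways to raise P(H|E), with made-up numbers for illustration.
def bayes(likelihood, prior, evidence):
    return likelihood * prior / evidence  # P(H|E) = P(E|H) * P(H) / P(E)

print(bayes(0.5, 0.1, 0.50))  # baseline posterior: 0.10
print(bayes(0.5, 0.1, 0.25))  # rarer evidence (lower P(E)):  0.20
print(bayes(0.5, 0.2, 0.50))  # higher prior (P(H)):          0.20
print(bayes(0.9, 0.1, 0.50))  # higher likelihood (P(E|H)):   0.18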

Decreasing the independent probability of the evidence seems like a good place to start if you're a theist attempting to demonstrate the relative implausibility of a natural origin of things. If you have a state of affairs that is objectively unlikely to obtain, but that has a higher likelihood if your hypothesis turns out to be true, you may have something. So if a world like ours is extremely improbable prima facie but very likely according to the design hypothesis, this counts in the hypothesis's favor. This is the source of the intuitive plausibility of the Fine Tuning Argument (which I've discussed previously). But there are problems with this approach. One is that we simply don't know much about the objective probability of the world as it is. We may have our intuitions about how unlikely a world like ours may be, but real evidence that the particular arrangement of things is less probable than some other arrangement is absent. But even if we were to determine that the world as it is is objectively improbable, that low probability will also increase the overall probability of competing theories that can account for the evidence as well. So what helps the design hypothesis helps the naturalistic one too, other things being equal.
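One way to see that last point: in a head-to-head comparison of two hypotheses, P(E) cancels out of the ratio of their posteriors, so making the evidence "more surprising" inflates both posteriors by the same factor. A sketch, again with purely illustrative numbers:

# P(H1|E) / P(H2|E) = [P(E|H1) * P(H1)] / [P(E|H2) * P(H2)]
# -- note that P(E) drops out of the comparison entirely.
def bayes(likelihood, prior, evidence):
    return likelihood * prior / evidence

for p_e in (0.5, 0.1, 0.02):                # increasingly improbable evidence
    design = bayes(0.8, 0.01, p_e)          # illustrative values only
    natural = bayes(0.2, 0.02, p_e)
    print(p_e, round(design / natural, 2))  # the ratio stays 2.0 throughout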

Which means the difference has to lie in either the prior probability or the likelihood of the competing hypotheses. Adjusting the value of the prior probability is a nonstarter, though. Unlike our example above, there is simply no base rate to draw from to say what the prior probability is that God exists. Philosophers may talk about possible worlds and alternate universes, but we have no way of estimating how many of those possible worlds would have gods vs. how many would not. Not that certain theologians haven't attempted to demonstrate a god that must exist in any possible world (for example, by use of the Modal Ontological Argument, nicely critiqued on the Counter Apologist blog). But these sorts of a priori attempts all fall prey to the same problems. They inevitably turn out to be circular once you unpack the definitions of god provided in the premises. 

So the only way someone can bolster the prior probability of the design hypothesis is to provide some other criterion for favoring it over its rivals. This is sometimes done by appeal to extra-evidential considerations like the so-called "theoretical virtues." Parsimony (Occam's Razor), elegance, explanatory scope, etc. are all claimed as theoretical virtues, but their appeal is merely subjective (like Quine's self-described "taste for desert landscapes" in metaphysics) unless we can tie the virtue in question to the probability of the hypothesis being true. In other words, on its own parsimony doesn't tell us anything about the viability of a hypothesis. Appealing to parsimony may be a reasonable heuristic if you're working within a domain where simpler explanations have frequently turned out to be true, but again, we simply don't have any evidence in this regard either way.

So this leaves us with the likelihood that we'd see a world such as we see if either of the hypotheses is true. This is again a notion that is a little difficult to get at, given that we don't have the ability to actually assign numerical values to the competing hypotheses. But this is the area in which much ink has been spilled on both sides. For my part, I find the arguments that the world is full of much more suffering than we'd have warrant to expect if there were an omniscient, omnibenevolent, omnipotent God pretty compelling. The problem of evil remains to be adequately addressed from the design hypothesis's point of view, but the existence of suffering is no kind of challenge for the naturalist position. "Nature, red in tooth and claw" is to be expected.
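For concreteness, the comparison being made here is a likelihood ratio (a Bayes factor). Since, as I said, we can't really assign values, the numbers in this sketch are placeholders for intuitions, nothing more:

# A likelihood-ratio framing of the problem of evil, with placeholder values.
p_suffering_given_design = 0.05   # low under an all-good, all-powerful designer
p_suffering_given_natural = 0.90  # high if nature is "red in tooth and claw"

bayes_factor = p_suffering_given_design / p_suffering_given_natural
print(round(bayes_factor, 3))  # 0.056 -- this evidence favors naturalism ~18:1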

Theist responses to the problem of evil (at least those that want to maintain thinking of God in terms of the 3 Os) tend to fall into two categories: those that invoke human freedom in an attempt to deflect God's responsibility for evil, and those that invoke the limits of human wisdom in understanding the moral structure of the universe. Both of these issues require much more attention than this post can give, but the problematic structure of these responses can at least be sketched.

Human freedom fails as an explanation for the existence of evil in a universe created by a good God. The free will defense argument tends to go like this: in order to have a world where moral good is possible, free moral agents are required. True moral goods on this view require the presence of free will. And free agency means the ability to choose wrongly, if it's to have any value as freedom at all. But assertions that the existence of free will necessarily entails the ability of agents to act evilly can only account for those evils perpetrated by intentional human action. How can human agency be an adequate explanation for the 500 million or so years of animal suffering that preceded the first humans? Isn't God's eye supposed to be on the sparrow, too?

But even more problematic for the free will defense is the fact that God would presumably have been able to select from among all logically possible worlds one in which human agency existed, but in which each agent always chose to do the right thing. One of the big Os attributed to God is omniscience, which would mean that he'd have epistemic access to the consequences of the free actions of his creatures. But it would also mean that he'd have epistemic access to alternate versions of the world in which those free agents took the right action each time, and we'd have to assume that a world with no wrong action is morally superior to one with some wrong action, all other things being equal. There's nothing logically contradictory in this. One attempt to forestall this objection on the part of theologians is to assert that such a world is metaphysically impossible rather than logically impossible, and that we suffer from some mystical defect referred to as "transworld depravity" that causes us to act evilly in any possible world. But this is just an ad hoc addendum to save the free will defense against the problem of evil. It doesn't actually fall out from the concept of freedom in any meaningful sense.

So it appears that God might have been able to create a world with less suffering than the one we see. In response to this some have taken the second approach to the problem of evil, arguing that the evil we see might not be gratuitous after all, and that we're simply not in an epistemic position to understand all of God's designs (an approach known as "Skeptical Theism"). If the suffering we see is necessary to accomplish some greater good, then the evils aren't gratuitous, and God can be viewed as some sort of cosmic utility maximizer doing only what's necessary to create the most possible good. We just lack the ability to see all of God's designs.

But the skeptical theist approach suffers from two major flaws, as far as I can tell. The first is that if we accept the claim that our understanding is too feeble to divine God's designs, then we have no way of telling whether God is good or evil. Though the skeptical theist assumes that God is good and allows only whatever suffering is necessary to maximize good, our limited ability to access God's design means that it's also possible that God is evil, and allows only whatever good is necessary to maximize evil. 

In addition to being unsatisfying on its own, the skeptical theist's approach actually undercuts the plausibility of the design argument. In order for the design argument to get off the ground, there has to be some ability to ascertain the purpose of the designer. One can't reasonably look at the complexity of a system and judge that it must have been designed unless one has some sense of what the designer had in mind and can judge whether those goals have been accomplished. But the skeptical theist approach limits what we can say about God's intentions with any degree of specificity.

So despite its initial intuitive appeal, there appear to be serious challenges to establishing the overall probability of the design hypothesis relative to its competitors within the probability calculation provided by Bayes' Theorem. Attempts to show that the world as it is is independently improbable are hand-wavy and unempirical, and they also serve to increase the probability of competing hypotheses. Attempts to increase the prior probability of the design hypothesis fail for lack of objective evidential support. Likelihood considerations, particularly the problem of evil, make it prima facie unlikely that a 3 O God would create a world that contains the amount of suffering we see, and there don't appear to be good ways for the theologian to handle that without appeals to ad hoc addenda or epistemic limitations that undercut the design argument altogether.
