Title: This is Philosophy of Science
Author: Franz-Peter Griesmaier
Publisher: John Wiley & Sons Limited
Genre: Mathematics
ISBN: 9781119758006
1.1.1 Conclusive Reasons
The first type of epistemic reason is called a conclusive reason. A reason (R) is conclusive for some belief (B) if and only if B must be true if R is true. This condition holds not only when there is a single reason for B, but also in cases in which B rests on many reasons. In more general terms, if all the reasons for a belief are true, and if they are conclusive reasons, then their target belief must be true. Conclusive reasons guarantee true beliefs, which is the strongest basis one can have for believing something. So, how can we understand this definition?
A good example of conclusive reasons is the premises of a deductively valid argument. In such an argument, if all the premises are true, then the conclusion must be true as well. Here’s a simple example:
Premise 1: All humans are mortal.
Premise 2: Stephen Hawking is human.
Conclusion: Thus, Stephen Hawking is mortal.
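The validity of such a syllogism can even be checked mechanically. As an illustrative sketch (not part of the original text), here is the argument rendered in the Lean proof assistant; the predicate and constant names are our own choices:

```lean
-- A toy universe of discourse with two predicates (names are illustrative).
variable (Person : Type)
variable (Human Mortal : Person → Prop)
variable (hawking : Person)

-- Premise 1: all humans are mortal. Premise 2: Hawking is human.
-- The conclusion follows by universal instantiation and modus ponens.
example (p1 : ∀ x, Human x → Mortal x) (p2 : Human hawking) :
    Mortal hawking :=
  p1 hawking p2
```

If either premise were removed, Lean would reject the proof, which reflects the point made above: the conclusion only makes explicit what the premises jointly contain.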
Clearly, if premises 1 and 2 are both true, then the conclusion is guaranteed to be true. Thus, the two premises together are conclusive reasons for believing that Hawking is mortal. But this sort of reasoning is not often helpful for advancing our scientific understanding of the world. Let’s see why.
Notice that in a deductive argument, what’s really happening is that information that is already contained implicitly in the premises is made explicit in the conclusion. In other words, the conclusion does not reveal any new information. It restates the information that’s already contained in the premises. That’s why such inferences are safe: truth in – truth out.
Deductive reasoning (i.e., reasoning that proceeds by providing conclusive reasons) is mostly confined to two major disciplines: mathematics and logic. Yes, sometimes we use deductive reasoning in the empirical sciences, such as in cases in which we deduce observational consequences from a theory in order to test it (which can include refuting the theory):
Premise 1 (Theory): All birds can fly.
Premise 2 (Theory): Penguins are birds.
Premise 3 (Deduced Consequence): Penguins can fly.
Premise 4 (Observation): Penguins can’t fly.
Conclusion: Not all birds can fly.
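This pattern of refutation by observation can also be sketched formally. The following Lean snippet is our own illustration, with invented predicate names: the theory together with the observation yields a contradiction, which is what forces us to reject the theory.

```lean
variable (Animal : Type)
variable (Bird CanFly : Animal → Prop)
variable (penguin : Animal)

-- From "all birds can fly", "penguins are birds", and the observation
-- that penguins can't fly, we derive a contradiction, refuting the theory.
example (theory : ∀ x, Bird x → CanFly x)
    (p2 : Bird penguin)
    (obs : ¬ CanFly penguin) : False :=
  obs (theory penguin p2)
```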
However, a lot of scientific reasoning is nondeductive. Why? Because typically, in scientific reasoning, we want to infer something about the world at large on the basis of a limited number of observations. Such inferences are inherently risky because their conclusions convey information that goes beyond the information contained in the descriptions of the actual, limited observations that have been made.
For example, if I infer, on the basis of having observed the eating habits of 20 koalas, that all koalas eat eucalyptus leaves, I make such a risky inference. I assume, among other things, that the koalas I observed are typical of their species. This assumption could easily be wrong, as I might have come across a peculiar band of koalas that happen to consume eucalyptus. That such inferences are risky, however, doesn’t show that they are altogether unreasonable. The conditions under which they are reasonable are somewhat difficult to pin down, and we will tackle this challenge in the next section.
Now, given that reasoning nondeductively is risky, and that the conditions of its reasonableness are somewhat elusive, one might think that science should aim at just using deductive inferences, precisely because they are safe – even certain. But that would be a mistake. Remember: They are safe because in an important sense, they are uninformative. Since there is no new information in a deductive conclusion that was not already implicitly contained in the premises, deductive inferences won’t allow you to gain more information about the world by reasoning from your evidence. To accomplish this, we need to go beyond an obsession with certainty, which is provided by conclusive reasons and reasoning, and enlarge our toolbox. The tools we need, especially for the empirical sciences, are various forms of defeasible reasoning, and thus defeasible reasons.
1.1.2 Defeasible Reasons
The second, and much more common, type of epistemic reason is called a defeasible reason. Defeasible reasons are also sometimes called probable, or prima facie, reasons. The main difference between these and conclusive reasons is that even true defeasible reasons don’t guarantee the truth of their target belief. Consider this example:
You are near the mouth of a cave looking at a rock formation just inside the cave. The formation looks red to you. This “red-looking” is a good (defeasible) reason for believing that the rocks are red. However, as we all know, lighting conditions vary in natural settings and can be deceptive. Thus, it could be the case that the rock formation isn’t really red; its red appearance could be produced by weird lighting filtering into the cave. Thus, although the red-appearance of the formation is a good defeasible reason for believing it to be red, the truth of this latter belief is not guaranteed.
What defeasible reasons do is to make the truth of the belief for which they are reasons probable. (That’s why they are also called probable reasons.) A red appearance of a rock formation makes it more probable that the formation is red than that it is not. Of course, you could acquire a further bit of information which defeats the strength of the reason (that’s why they are called defeasible reasons). For example, you could notice that there is a brilliant sunset outside the cave, which makes it likely that many even nonred things look red. Thus, the fact that the formation looks red to you is no longer a very good reason for believing that it is actually red, given that many nonred things will seem to be red in these lighting conditions.
There are actually two recognized kinds of defeating information, or defeaters: so-called rebutting defeaters and undercutting defeaters. In the example just given, noticing that there is a sunset is an undercutting defeater. It undercuts the evidential force of your original reason for believing that the formation is red. Given that you know of the red lighting, you now can’t fully trust that things have the color which they seem to have. Of course, the rocks could still be red. But you would need to illuminate them with a white light or take a sample and observe it during daylight to make sure. On the other hand, it could also be the case that a geologist tells you that the formation isn’t red, because there are no rocks of this color in the region. To the extent that you can trust her, you now have a rebutting defeater for your belief that the formation is red.
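One common way to make the idea of a "probable reason" precise is Bayes' theorem. The following sketch is our own illustration, not from the text, and all the probability values in it are invented for the example: the red appearance strongly supports the belief under normal lighting, but once the undercutting defeater (the sunset) is learned, the same appearance barely discriminates between red and nonred rocks.

```python
# Illustrative Bayesian sketch of a defeasible reason (all numbers invented).
# Hypothesis H: "the rocks are red". Evidence E: "the rocks look red".

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

prior = 0.2  # before looking, suppose red rock formations are fairly uncommon

# Normal lighting: a red appearance strongly favors the rocks being red.
normal = posterior(prior, p_e_given_h=0.95, p_e_given_not_h=0.05)

# Undercutting defeater: at sunset, even nonred rocks often look red,
# so the appearance carries much less evidential force.
sunset = posterior(prior, p_e_given_h=0.95, p_e_given_not_h=0.80)

print(f"P(red | looks red), normal lighting: {normal:.2f}")  # about 0.83
print(f"P(red | looks red), sunset lighting: {sunset:.2f}")  # about 0.23
```

Note that the defeater does not show the rocks aren't red; it merely pushes the posterior back down toward the prior, which is exactly the behavior of an undercutting (as opposed to rebutting) defeater.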
As mentioned above, defeasible reasons constitute the vast majority of the reasons we have for believing something. Conclusive reasons are limited to mathematics and logic. It is therefore extremely important to remember that talk of “physical proof,” for example, is a misunderstanding of the concept, if “proof” is being used with its technical meaning. A proof consists in providing conclusive reasons for some target belief. That means that it must be literally impossible for the premises (i.e., reasons) to be true and the target belief to be false at the same time. Such high standards of evidence are unavailable in the empirical sciences. We just can’t be certain. Even the best empirical evidence, on the basis of which we can form true premises for an argument, does not guarantee the truth of the target belief (i.e., hypothesis or theory; we’ll explore the difference between these below).
Good evidence makes the supported theory highly probable.