The search for scientific knowledge extends far back into antiquity. At some point in this quest, at least by the time of Aristotle, philosophers recognized that a fundamental distinction should be drawn between two kinds of scientific knowledge: knowledge that and knowledge why. It is one thing to know that each planet periodically reverses the direction of its motion with respect to the background of fixed stars; it is a different matter to know why. Knowledge of the former type is descriptive; knowledge of the latter is explanatory. It is explanatory knowledge that provides scientific understanding of our world (Kitcher and Salmon, 1989).
Issues concerning scientific explanation have been a focus of philosophical attention from Pre-Socratic (Greek philosophy before Socrates) times through the modern period. “Scientific explanation” is a topic that raises a number of interrelated issues. According to the Stanford Encyclopedia of Philosophy, an assumption made in numerous discussions is that science sometimes provides explanations rather than “mere description” and that the task of a “theory” or “model” of scientific explanation is to characterize the structure of such explanations.
It is further assumed that it is the task of a theory of explanation to capture what is common to both scientific and at least some more ordinary forms of explanation, so as to allow for the best possible understanding of the phenomenon being explained. These assumptions help to explain why discussions of scientific explanation so often move back and forth between examples drawn from bona fide science (e.g., explanations of the trajectories of the planets that appeal to Newtonian mechanics) and more familiar examples involving the tipping over of inkwells. According to Aristotle, scientific explanations are deductive arguments. But as Aristotle clearly recognized, not all deductive arguments qualify as explanations. Even if one accepts the idea that explanations are deductive arguments, it is not easy to draw a clear distinction between those arguments that do qualify and those that do not. In 1948 Carl G. Hempel and Paul Oppenheim published an essay, “Studies in the Logic of Explanation,” which was truly epoch-making.
It set out, with unprecedented precision and clarity, a characterization of one kind of deductive argument that according to their account does constitute a legitimate type of scientific explanation. This became known as the Deductive-Nomological model and is also referred to by some as the Covering Law model. This article provided the foundation for the old consensus on the nature of scientific explanation that reached its height in the 1960s.
The model rests on the premise that a scientific explanation derives the phenomenon to be explained from laws established through the regular observation of phenomena, laws that can in turn be used to predict further instances of their application. Thus, “…a phenomenon can be explained by deducing it from a set of premises that includes at least one law that is necessary to that deduction” (Hempel, 1965). This paper seeks not only to expound the structure of this model and its various constituents, but also to critically examine its strengths and weaknesses as a model of explanation in science.
According to the Deductive-Nomological Model, a scientific explanation consists of two major “constituents”: an explanandum, a sentence “describing the phenomenon to be explained,” and an explanans, “the class of those sentences which are adduced to account for the phenomenon” (Hempel and Oppenheim, 1948). For particular events, the DN model follows this schema (Papineau, 1995):

C1, C2, ..., Ck   (statements of antecedent conditions)
L1, L2, ..., Lr   (general laws)
_________________________________
E                 (description of the phenomenon to be explained)

In this schema C1 – Ck are statements describing particular facts which existed before or at the same time as the phenomenon to be explained. L1 – Lr are the general laws which justify the inference from the explanans to the explanandum.
Both sets of statements constitute the premises of the deductive argument, in which E is the conclusion and describes the original phenomenon to be explained. The horizontal line, which separates the premises from the conclusion, indicates that a logical deduction or inference is made from the top to the bottom. For the explanans to successfully explain the explanandum, several conditions must be met: 1) the explanans must deductively entail the explanandum; 2) the deduction must make essential use of general laws; 3) the explanans must have empirical content; and 4) the sentences in the explanans must be true.
The first condition means that acceptable explanations are deductively valid arguments: the explanandum must follow deductively from the explanans. The second states that, in order for the argument to be an explanation, it must include one or more general laws among its premises in such a way that without these laws the argument would not be valid. This is to ensure that a pseudo-scientific explanation that merely cites laws inessentially, to give the appearance of scientific explanation, will not satisfy the D-N model.
Thirdly, the explanans (that is, the laws and the other premises concerning surrounding conditions) must be empirically testable. Lastly, the fourth condition ensures that the argument is sound, because it seems obviously unsatisfactory to explain something by appealing to a false premise. For example, we cannot explain the fact that a particular chemical compound dissolves in water by deducing it from the law that all compounds dissolve in water, because that law is false. Of course, our knowledge of which laws of nature are true is fallible.
If it turns out that one of our cherished laws is false, then according to the DN model we thought we had an explanation but in fact we did not (Ladyman, 2002). Let us now examine an example in which all the constituents of the above schema are in play, to further understand the premise behind the model. The following example was taken from a Philosophy of Science lecture on March 16, 2009:

L1: Any time there is more than 2″ of new rain deposited in high wind conditions over a large watershed, and the ground becomes saturated with moisture, and there is no check dam, there will be flooding.
C1: On August 27, 2008, Portland received 18″ of rain and high wind from Gustav in a 24-hour period.
C2: The ground became saturated with moisture in the Rio Grande watershed.
C3: There is no check dam on the Rio Grande River.
E: The Rio Grande River flooded on August 28, 2008.

This example is a good explanation. The explanandum (E) is explained by the sentences composing the explanans. The explanans comprises L1 (a causal law) and C1-C3 (statements of fact; conditions that existed), all of which are relevant to the explanandum.
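Since a DN explanation is, formally, just a valid deduction from laws and antecedent conditions, the flooding argument can be sketched as a toy program. This is purely illustrative (the function and variable names are my own, not part of any source); the covering law L1 is modelled as a conjunction of its antecedent conditions.

```python
# Illustrative sketch of the DN schema: the covering law L1 plus the
# particular conditions C1-C3 deductively entail the explanandum E.

def covering_law(rain_over_2in, high_wind, ground_saturated, no_check_dam):
    """L1: heavy rain + high wind + saturated ground + no check dam -> flooding."""
    return rain_over_2in and high_wind and ground_saturated and no_check_dam

# C1-C3: the antecedent conditions that held on August 27, 2008.
c1_rain, c1_wind = True, True   # 18 inches of rain, high winds from Gustav
c2_saturated = True             # Rio Grande watershed saturated
c3_no_dam = True                # no check dam on the river

# E: the explanandum follows from L1 together with C1-C3.
flooded = covering_law(c1_rain, c1_wind, c2_saturated, c3_no_dam)
print(flooded)  # True: the explanans entails the explanandum
```

Note that if any antecedent condition fails, the conclusion no longer follows, which is exactly the sense in which each premise is doing essential work in the deduction.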
That said, there have been several objections to this model, most of which are designed to show that Hempel’s conditions are insufficient; in other words, that an argument may satisfy all of them but still not count as a scientific explanation. Some also argue that the conditions are not even necessary; in other words, that a proper scientific explanation need not satisfy them all. These objections overlap but will be dealt with separately by the writer. One such objection is that of irrelevance.
This is where we have an argument that satisfies the DN model, yet part of the explanans is not a relevant explanatory factor. The following, for instance, is an acceptable explanation by the lights of the DN model:

All metals conduct electricity.
Whatever conducts electricity is subject to gravity.
Therefore, all metals are subject to gravity.

This is a sound argument that fits the DN model, and the premises are general laws. However, the fact that metals conduct electricity is irrelevant to their being subject to gravity.
This, too, is an acceptable DN explanation:

All salt that has had a hex placed on it by a witch will dissolve in water.
This sample of salt had a hex placed on it by a witch.
Therefore, this sample of salt will dissolve in water.

However, this is not an explanation of the dissolution, because the hex is irrelevant: the salt would have dissolved even if it had not been hexed (Kyburg, 1965). According to Wesley Salmon in his article “Why Ask ‘Why?’,” the problem with the D-N account of scientific explanation is that it misunderstands the role of scientific laws in explanation and so misinterprets the role of causal relations in scientific explanations.
One obvious diagnosis of the difficulties posed by examples like those above focuses on the role of causation in explanation. According to this analysis, to explain an outcome we must cite its causes, and the derivations above fail to do this. As Salmon puts it, “a flagpole of a certain height causes a shadow of a given length and thereby explains the length of the shadow”. By contrast, “gravity does not cause metals to conduct electricity and consequently cannot explain why metals are conductors of electricity”.
Similarly, the placing of a hex on a sample of salt does not cause it to dissolve in water, and this is why this example fails to be an acceptable explanation. On this analysis, what these examples show is that a derivation can satisfy the DN criteria and yet fail to identify the causes of an explanandum—when this happens the derivation will fail to be explanatory. Additionally, this model suggests that there is a strong symmetry between explanation and prediction. Indeed, if the model holds then the only difference between the two is that prediction occurs before the event has occurred and explanation afterwards (Papineau, 1995).
In this way, the model appears to subscribe to the symmetry thesis: that (a) ‘every successful explanation is a potential prediction’ and (b) ‘every successful prediction is a potential explanation’. However, whilst it is clear that (a) is sound, it is not clear that (b) is. The most widely used counter-example to (b) is that of Koplik spots, spots which appear as a precursor to the manifestation of measles. These spots can be used to predict the later manifestation of measles accurately (albeit with some exceptions), but the spots cannot be used to explain why that manifestation occurs.
Hempel is vague on this point, arguing that (b) is ‘open to question’, but it could be argued that (b) is not refuted by this example. The problem can be addressed by a distinction between scientific and non-scientific predictions. A scientific prediction will satisfy the symmetry thesis: for example, after observation of a patient’s blood and discovery of the measles virus, a scientific prediction can be made that the person is likely to develop measles if they have not already. However, Koplik spots cannot be used to scientifically predict the development of measles, because the Koplik spots cannot form part of a scientific explanation, owing to condition 2 above.
A prediction can be made on the basis of such spots, but the spots are an intermediate result of the measles virus, so the virus must also be included in a full scientific explanation. After this inclusion, the explanandum (the predicted symptoms of measles) follows from a proper subset of the explanans statements; i.e., the development of measles comes to be predicted, scientifically, by the presence of the measles virus in the blood, not by the Koplik spots. This distinction is essentially made through an appeal to causation, and as such it is able to solve the problem for instances of causal scientific explanation.
However, it is clear that there are instances of scientific explanation that do not involve causation: explanations that appeal to laws of coexistence rather than laws of succession (Ruben, 1990). The gas laws are of this form, because they constrain the values of the pressure, volume and temperature of a gas at a given time. However, when we have such a law we seem to run into trouble, because we can generate cases where two events seem to explain each other. For example, suppose it is a law that all animals with hearts also have livers and all animals with livers also have hearts.
Then from the observation that some particular animal has a heart we can explain why it has a liver. However, we could equally observe that it has a liver and use this fact with the law above to explain why it has a heart. Neither of these explanations is satisfactory. Now consider the following:

A gas is sealed in a container of fixed volume and heated strongly.
If the volume of a gas is kept constant then its temperature is directly proportional to its pressure.
Therefore, the pressure of the gas rose.

This seems to be an adequate explanation, yet we could just as easily reverse the explanatory order while still satisfying the DN model:

A gas is sealed in a container of fixed volume and its pressure rises.
If the volume of a gas is kept constant then its temperature is directly proportional to its pressure.
Therefore, the temperature of the gas rose.

However, this second explanation is intuitively wrong, because the temperature increase caused the pressure rise and not the other way around (Ladyman, 2002). The presence of multiple causes also creates serious problems for the traditional analyses of causation that the DN model postulates. When this happens it is termed overdetermination.
An event is said to be overdetermined when more than one set of causal conditions is in place and each of them is sufficient to bring it about. If a theory or model cannot accommodate multiple causes it will be useless to the sciences, especially the social sciences, where very few social theories can be adequately modelled using simple regression. Multiple regression has been the mainstay of empirical social science, and its use implies that multiple causal factors are at work. An example from Mackie (1974) illustrates the problems of overdetermination:
Lightning strikes a barn in which straw is stored.
A tramp throws a burning cigarette butt into the straw at the same place and at the same time.
Consequently, the straw catches fire.

Obviously, the straw’s catching fire was overdetermined by the burning cigarette butt and the lightning. The necessity analysis of causation requires that a cause be a necessary condition of its effect. Neither the lightning strike nor the cigarette butt in this example is necessary for the fire to start. If lightning had not struck, the cigarette butt would have caused the fire. If the tramp had not thrown the cigarette, the lightning strike would have caused the fire. The lightning strike and the cigarette butt are certainly not jointly necessary either. Either way, then, by adhering to the necessity theory of causation we would have to deny that either the lightning strike or the cigarette butt caused the fire (Sosa and Tooley, 1993). Another case in which counterexamples seem to show that an “explanation” could satisfy all of the criteria listed even though the explanatory information was completely irrelevant to the explanandum is that of causal pre-emption.
Causal pre-emption is where an event occurs which would cause the explanandum if given the chance, but something else causes the explanandum first, “pre-empting” the would-be cause (Brown, 2011). The following example gives a clear picture of this:

Everyone who eats a pound of arsenic dies within 24 hours.
Jones ate a pound of arsenic.
Therefore, Jones died within 24 hours.

Why did Jones die when he did? He ate a pound of arsenic five minutes earlier, and anyone who eats a pound of arsenic dies within 24 hours. However, what the explanation does not mention is that Jones was run over by a bus immediately before his death. The arsenic would have caused his death if the bus had not, but in fact the arsenic never had a chance for its effects to manifest; the bus acted first. Hempel advocates the ‘thesis of structural identity’, according to which explanations and predictions have exactly the same structure; they are arguments whose premises state laws of nature and initial conditions.
The only difference between them is that, in the case of an explanation we already know that the conclusion of the argument is true, whereas in the case of a prediction the conclusion is unknown. For example, Newtonian physics was used to predict the return of Halley’s Comet in December of 1758, and once this was observed the same argument explains why it returned when it did. However, there are many cases where the observation of one phenomenon allows us to predict the observation of another phenomenon without the former explaining the latter.
For example, the fall of the needle on a barometer allows us to predict that there will be a storm but does not explain it. Similarly, the length of a shadow allows us to predict the height of the building that cast it, and if we know the period of oscillation of a pendulum we can calculate its length, but in both these cases the latter explains the former and not the other way round. The cases of pre-emption and overdetermination above are also examples where, contrary to the symmetry thesis, there seem to be adequate predictions that fail to be adequate explanations.
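The pendulum case can be made concrete with a small numerical sketch. Under the standard period formula T = 2π√(L/g) (my own illustration, not drawn from the sources cited), the same law supports the derivation in either direction, even though, intuitively, only the length explains the period:

```python
import math

G = 9.81  # gravitational acceleration in m/s^2 (assumed standard value)

def period_from_length(length_m):
    """T = 2*pi*sqrt(L/g): the length is what explains the period."""
    return 2 * math.pi * math.sqrt(length_m / G)

def length_from_period(period_s):
    """L = g*(T/(2*pi))**2: the same law inverted, prediction without explanation."""
    return G * (period_s / (2 * math.pi)) ** 2

# The deduction runs equally well in either direction:
t = period_from_length(1.0)             # period of a 1 m pendulum, about 2.006 s
print(round(length_from_period(t), 6))  # recovers the length, 1.0 m
```

The DN model, as stated, cannot distinguish the explanatory direction (length to period) from the merely predictive one (period to length); both derivations use the same law and are formally valid.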
Furthermore, there seem to be adequate explanations that could not be predictions. For example, evolutionary theory explains, but it cannot usually make specific predictions because evolutionary change is subject to random variations in environmental conditions and the morphology of organisms. Probabilistic explanations offer further examples where prediction and explanation seem to come apart since, when the probability conferred by the explanans on the explanandum is low, we cannot predict that the explanandum is even likely to happen, although we can explain why it did afterwards.
As a model of scientific explanation, the DN model has the responsibility not only of describing the event to be explained (the explanandum) but of providing detailed, logically organized statements of explanation (the explanans). It must also, as Aristotle demanded, be more than a “mere deduction”. By providing logically deductive explanations in terms of laws and existing conditions, the model succeeds in providing detailed statements about an event. These explanans must not only be detailed but true in order to satisfy the DN model.
However, how does one distinguish those laws or hypotheses that are factual from those that are not? Their reliability can never be established with absolute certainty. It is sometimes possible to eliminate bad hypotheses by using them as the premises of a deductive argument predicting that particular consequences will follow from a particular set of circumstances, and then showing that the predicted event does not, in fact, occur. But if events turn out as predicted, that only tends to confirm the hypotheses; it cannot prove their truth.
Moreover, the model is often satisfied even when the hypotheses are false. Additionally, although the model reflects cause and effect and gives a detailed deductive account of an event (provided the hypotheses are true), the resulting derivation still may not explain the cause. It therefore fails to surpass what Aristotle referred to as the threshold of “mere deductions”. Furthermore, the hypotheses may be irrelevant to the event, yet the model would still be satisfied so long as the event is deductively entailed.
Another problem is that science, as some would say, cannot be reduced to the “general laws” this model utilizes. In this regard the model is seen not as a description of actual scientific practice but as aspirational: it could be fully applied only when all knowledge of the unknown has been garnered and all events can be reduced to general laws. Obvious problems arise here not only in the natural sciences but in the social sciences, where humans and society are ever evolving. Empirical evidence typically underdetermines scientific explanation, leaving us with multiple hypotheses, any one of which would account for the facts. While it is evident that the DN model and each of its successors have been flawed, this should not obscure the fact that the theory has brought real advances in understanding which succeeding models are required to preserve. Debates amongst philosophers about “the right model of scientific explanation” have gone on for decades and will continue to do so, as the writer believes that there is no model that is, or will be, the “perfect model” of scientific explanation.