# Proportionality and Causal Dependence

James Woodward’s *Causation with a Human Face* defends three methodological proposals: (I) the empirical study of causal reasoning can fruitfully inform the philosophical analysis of causation, and vice versa; (II) philosophers should attend to distinctions among different kinds of causal relationship, and not just the distinction between causal and non-causal relationships; (III) our understanding of causation is illuminated by consideration of the practical function of causal concepts and reasoning. I am in broad agreement with Woodward on all of these proposals. My discussion will focus on Woodward’s final chapter, on proportionality, and will particularly highlight (II).

Yablo (1992a, 1992b) proposed a proportionality constraint on causal relationships. We may illustrate it with an example: A picture hook can support up to 20 pounds. I use it to hang a heavy framed picture weighing 32 pounds, and the picture falls off the wall. (Woodward (2008) gives a similar example.) Consider the following two claims:

1. The picture’s weighing 32 pounds caused it to fall.
2. The picture’s weighing more than 20 pounds caused it to fall.

According to the proportionality requirement, (2) is true and (1) is false. The problem with (1) is that it identifies a cause at the wrong “grain”—more specific than appropriate. Yablo uses proportionality to argue that sometimes high-level mental causes exclude low-level physical causes.

Woodward offers a modified definition of proportionality labeled (P). Here is an excerpt:

P…Suppose we are considering several different causal claims/explanations formulated in terms of different candidate cause variables V1…Vn involving some target effect or explanandum E… Then a choice of variable Vi (and of the dependency claims in which Vi figures) satisfies proportionality better than an alternative choice…to the extent that those dependency claims avoid falsity…and omission. (Woodward forthcoming, 369)

I wish to highlight the importance of the parenthetical clause.

In Woodward and Hitchcock (2003), we proposed a model of causal explanation in which an explanation of the falling picture might take the following form:

3a.       W = 32
3b.       F = g(W)
3c.       F = 1

Here, W is a variable representing the weight of the picture in pounds, F is a variable that takes the value one if the picture falls and zero otherwise, and g is a function that takes the value one for arguments greater than 20, and zero otherwise. (3b) has counterfactual import: it says that if W were set to w by an intervention, then F would be equal to g(w). There is a division of labor in (3): (3a) tells us what actually happened—which value of the cause variable was realized; (3b) tells us how the effect variable causally depends on the cause variable. Thus (3) acknowledges that there are different possible ways in which the effect might depend on the cause; for example, a different hook might be able to support more weight without the picture falling.
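The division of labor in (3) can be rendered as a toy structural model. This is my own illustrative sketch, not anything from the text; the function and variable names simply follow the labels W, F, and g above:

```python
# Toy structural-equation sketch of causal explanation (3).

# (3b): the dependency function g -- how falling depends on weight.
def g(w):
    """Return 1 (the picture falls) if weight w exceeds the hook's
    20 lb capacity, and 0 otherwise."""
    return 1 if w > 20 else 0

# (3a): what actually happened -- the realized value of the cause variable.
W = 32

# (3c): the effect value follows from (3a) together with (3b).
F = g(W)
assert F == 1  # the picture falls

# Counterfactual import of (3b): under an intervention setting W to w,
# F would equal g(w). A picture within the hook's capacity would not fall:
assert g(15) == 0
```

Note how (3a) and (3b) are separate representations: changing the hook (a different g) and changing the picture (a different value of W) are independent modifications of the model.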

Causal explanation (3) perfectly satisfies Woodward’s condition (P): the information it provides about how falling depends on weight is both accurate and complete. Thus there is a successful causal explanation that identifies the specific weight—32 pounds—as a cause of the picture falling. We can also give a causal explanation satisfying (P) that uses a different variable, W*, which takes the value one if the weight is greater than 20 pounds, and zero otherwise. But Woodward’s condition gives us no reason to prefer this explanation over (3).

I think that a tacit assumption of Yablo’s proportionality condition, and of almost all variants that have been proposed in the literature, is that there can be no division of labor along the lines of (3a) and (3b). One and the same thing, the CAUSE, must perform both tasks: specifying what actually happened, and conveying information about how the effect depends upon changes in the cause. And I think the source of this assumption is the failure to appreciate Woodward’s methodological point (II).

I think there are reasons to prefer W over W* as a causal variable. Woodward stresses the importance of causal relationships that are highly invariant. One reason we value such relationships is that they are highly portable: we are likely to find them in a variety of contexts. My fellow commentator Tania Lombrozo has also emphasized the importance of portable causal relationships (Lombrozo 2010, Lombrozo and Carey 2006). I think a similar consideration applies to causal variables. W* is ad hoc: it is constructed specifically to be a proportional cause of the falling picture. By contrast, we may expect W to figure in many different causal relationships. For example, suppose I want to know how much it will cost to ship the framed picture, whether I will be able to lift it, and whether I should buy a more expensive picture hook that supports 40 pounds instead of 20. It makes more sense to formulate a causal model with the single variable W as a cause of all these other variables, rather than a model with several different variables for weight, each one proportional to a different effect. (Hoffmann-Kolss (2014) makes a similar point.)
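The portability point can be pictured as one weight variable feeding several different dependency functions, each for a different effect. This is a hypothetical sketch of mine; the shipping rate and the 50 lb lifting limit are made-up numbers for illustration:

```python
# One portable cause variable W, several effects, each with its own
# dependency function -- rather than a bespoke threshold variable
# (like W*) tailored to each effect separately.

def falls_off_20lb_hook(w):
    """Effect 1: does the picture fall from the 20 lb hook?"""
    return 1 if w > 20 else 0

def falls_off_40lb_hook(w):
    """Effect 2: does it fall from the stronger 40 lb hook?"""
    return 1 if w > 40 else 0

def shipping_cost(w):
    """Effect 3: cost to ship, at an assumed $5 base + $0.50/lb rate."""
    return 5.00 + 0.50 * w

def can_lift(w):
    """Effect 4: can I lift it, assuming a 50 lb limit?"""
    return w <= 50

W = 32
assert falls_off_20lb_hook(W) == 1  # falls from the weak hook
assert falls_off_40lb_hook(W) == 0  # would survive the stronger hook
assert can_lift(W)                  # 32 lbs is liftable
```

The fine-grained variable W serves all four dependency functions at once; the ad hoc W* (weight greater than 20 pounds) would serve only the first.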

An empirical question is whether humans employ causal representations in the form of (3). Do we employ separate representations of the cause variable, and of the way the effect depends upon the cause? Or do we instead try to capture both with a single representation? I do not think that existing literature, such as Lien and Cheng (2000), addresses this question. Indeed, I think it would be difficult to test. In part, this is because prompts employing the word “cause” may be unsuitable for eliciting representations of distinct causal relationships. And in part, this is because it is difficult to express the difference between (2) and (3) in colloquial English.

## References

Hoffmann-Kolss, V. (2014). “Interventionism and Higher-level Causation,” International Studies in the Philosophy of Science 28: 49–64.

Lien, Y., and Cheng, P. (2000). “Distinguishing Genuine from Spurious Causes: A Coherence Hypothesis,” Cognitive Psychology 40: 87–137.

Lombrozo, T. (2010). “Causal-Explanatory Pluralism: How Intentions, Functions, and Mechanisms Influence Causal Ascriptions,” Cognitive Psychology 61: 303–332.

Lombrozo, T., and Carey, S. (2006). “Functional Explanation and the Function of Explanation,” Cognition 99: 167–204.

Woodward, J. (2008). “Mental Causation and Neural Mechanisms,” in J. Hohwy and J. Kallestrup, eds., Being Reduced: New Essays on Reduction, Explanation, and Causation (Oxford: Oxford University Press), 218–262.

Woodward, J. (Forthcoming). Causation with a Human Face. Oxford: Oxford University Press.

Woodward, J., and Hitchcock, C. (2003). “Explanatory Generalizations, Part I: A Counterfactual Account,” Noûs 37: 1–24.

Yablo, S. (1992a). “Mental Causation,” Philosophical Review 101: 245–280.

Yablo, S. (1992b). “Cause and Essence,” Synthese 93: 403–449.

## One comment

1. James Woodward

I thank Chris Hitchcock for his comments on my chapter on proportionality. I think that I agree with much, although perhaps not all, of what he says. As I see it, the distinctive feature that proportionality attempts to capture has to do with the extent to which causal claims represent the full range of dependency relations that are relevant to some specified effect. This in turn is closely related to issues about the appropriate choice of “level” or “grain” in causal representation, as David Kinney and Tania Lombrozo bring out in their comments on my chapter. (I will address their comments in a separate post.)

Return to the much-discussed pigeon, Sophie, who pecks at red and only red targets, and who is presented with a scarlet target and pecks. It seems that (1) below is superior to (2):

(1) The red color of the target causes Sophie to peck

(2) The scarlet color of the target causes Sophie to peck

(2) fails to represent the fact that Sophie will peck at other red colors besides scarlet and, in comparison with (1), seems to be at a non-optimal grain or level. Proportionality is supposed (at least in part) to capture observations like this.

As Chris notes, discussions in the philosophical literature have focused almost entirely on examples in which the representation of causal claims is in a “C caused E” format (as (1) and (2) are), so that the issue is conceptualized as choosing the right grain or level of abstraction for a candidate C, given a specified E. There is, however, another possible approach, involving a different representational format. As Chris describes, this makes use of the framework in Woodward and Hitchcock, 2003. Very roughly, given a candidate cause C and an effect or explanandum E, this framework separates out

(3) A generalization governing the behavior of the relevant system

from something like

(4) A specification of the initial conditions obtaining on the occasion of interest

and then shows that the effect or explanandum is deducible from these. (There are other requirements too, but let’s put those aside.) Chris’ observation, as I understand it, is that one can satisfy the spirit of proportionality by, so to speak, keeping (what might seem to be) an overly narrow specification of (4) but compensating for this by employing a governing generalization (3) that corrects for it. To use his example, given a hook that will support a picture of up to 20 lbs but nothing heavier, and a picture that weighs 32 lbs and consequently falls, we might cite the fact that the picture weighed more than 20 lbs to explain the falling (thus satisfying proportionality as standardly understood), but we might also explain the falling in terms of a specification of the actual weight (32 lbs) (Chris’ 3a, or 4 above) and a generalization (Chris’ 3b, or 3 above) that says in effect that weights over 20 lbs result in falling and weights under 20 lbs do not. Here we retain the “non-proportional” description of the weight (32 lbs) but “correct” for this via the generalization (3b), so that the fact that the falling depends on whether the weight is over 20 lbs is still captured.

I agree with Chris that discussions of proportionality in the philosophical literature have tended to assume that the appropriate representational format is of the “C causes E” variety and that they have neglected the alternative that he describes. I also agree that his alternative does as well at capturing the relevant dependency relations. Indeed, I believe that I said this in CHF, Chapter 8, although perhaps this did not come through clearly.

Does this mean that the philosophical discussion around proportionality has been misguided (because based on an overly restrictive assumption about representational format)? This is not the conclusion I would draw. Granting Chris’ observations, there still remain important issues about the appropriate grain or level to use in causal representation, the issue discussed in David and Tania’s commentary. Perhaps I am mistaken, but I don’t see that these are fully addressed just by moving to Chris’ preferred representational format in terms of his 3a-3c.

One way of bringing this out is to turn to a further implication of proportionality, at least as I understand it. This is the idea that one does not have to go “beyond” what proportionality dictates in explaining an effect. That is, given a specified effect understood as a variable with a possible range of values, one does not have to provide more detail than is required to fully specify what those effect values depend on, which is what full satisfaction of proportionality requires. For example, to a very high level of approximation, the values of a thermodynamic variable like the temperature of a dilute gas depend only on other thermodynamic variables like its pressure and volume; the values of these variables fully satisfy proportionality with respect to temperature. This means, as far as proportionality goes, that further, more fine-grained information about the gas, such as the exact positions and momenta of its component molecules, is not required if the effect is the value of the temperature variable. (As CHF and other recent papers of mine discuss, there may be nothing wrong with providing such additional information, but it is not needed.) This is spelled out in terms of a requirement, called conditional causal independence, discussed in CHF. This in turn has important implications for how one should think about the appropriate grain or level to employ in causal explanation (how much coarse-graining we should do). As before, we may think of these implications from both a normative and a descriptive point of view. There is the normative question of when coarse-graining or neglect of lower-level information is appropriate (and why), and also the descriptive question of when and to what extent people adopt coarse-grained causal representations and what the rules, if any, are that govern such choices. I follow Kinney and Lombrozo in seeing notions like proportionality as attempts to address such questions.
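A stock computation may make the thermodynamic point concrete. This is an editorial sketch (not part of Woodward’s comment) using the ideal gas law, with arbitrary illustrative values:

```python
# Conditional causal independence, illustrated with the ideal gas law:
# at the thermodynamic level, temperature is fixed by pressure and
# volume (for a fixed amount of gas), with no need for molecular detail.

R = 8.314  # gas constant, J/(mol*K)

def temperature(p, v, n=1.0):
    """Temperature (K) of n moles of ideal gas, from pressure p (Pa)
    and volume v (m^3): T = pV / (nR)."""
    return p * v / (n * R)

# Any two microstates sharing the same macrostate (same p, v, n) yield
# the same temperature; the finer-grained molecular information is
# screened off once the thermodynamic variables are fixed.
T = temperature(101325.0, 0.0245)  # roughly room temperature
```

The family of variables (p, v, T) accounts for variation in each member in terms of the others, which is the "uniform" proportionality-like behavior of a robust level.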

I conclude by turning to two other of Chris’ observations. Suppose one is interested in a causal structure in which, as before, whether a picture falls depends on whether its weight is more than 20 lbs, but in which the weight of the picture also (differently) influences other variables, such as whether I can lift it (maybe I cannot lift more than 50 lbs). In a structure of this sort, it may be desirable to represent the weight value in a fine-grained way (e.g., to the nearest pound) despite the fact that this is “overly specific” with respect to some of its effects, like picture falling, since this fine-grained information is needed to account for the fact about lifting. (As I note in CHF, this observation, although perhaps not quite under this description, is made in Hoffmann-Kolss, 2014.) In other words, proportionality may impose different demands in terms of level of specificity, depending on which effects are incorporated into a model, and when there are several effects, this can push us in the direction of more fine-grained representations.

I agree with all of this but (again) don’t conclude that proportionality is a misguided notion. It is an important observation that some domains of inquiry or theorizing impose “uniform” proportionality requirements for a range of different variables. For example, as long as we are dealing with thermodynamic variables like temperature, pressure, volume, and a few others, adoption of the grain or level associated with these variables is enough to fully account for variations in their values in terms of other variables belonging to the same thermodynamic family. In other words, this case is not like the picture falling/lifting case, in which different effects demand different grainings of the cause. In general, robust levels in science, and perhaps more generally, seem to require the identification of whole families of variables that fit or play together appropriately, in terms of satisfying, at least to some extent, proportionality-like considerations. One consequence is that proportionality and other considerations relevant to choice of grain or level should probably be applied to groups of variables rather than to individual cause/effect relations.

Chris also asks the following interesting and important question: Do humans employ causal representations in his preferred format 3a-3b, that is, with separate representations of the cause variable and of the way in which the effect depends on the cause? I think that in scientific contexts the answer to this question is often “yes” and that it is normatively appropriate to do this when possible. Making this sort of separation facilitates inference and understanding in a number of ways, some of which are described in Woodward, 2020. As Chris says, invariance-based considerations support such a separation. Whether and to what extent such a separation is made in human causal representation in more ordinary contexts is of course a separate question to which we don’t yet fully know the answer, although the reader may recall that Cheng’s causal power framework incorporates, at least in part, such a separation.

Hoffmann-Kolss, V. (2014). “Interventionism and Higher-level Causation,” International Studies in the Philosophy of Science 28: 49–64.

Woodward, J. and Hitchcock, C. (2003). “Explanatory Generalizations, Part I: A Counterfactual Account,” Noûs 37: 1–24.

Woodward, J. (2020). “Flagpoles Anyone? Causal and Explanatory Asymmetries,” THEORIA: An International Journal for Theory, History and Foundations of Science. https://doi.org/10.1387/theoria.21921