Reading Diary

I read quite a lot of stuff.  On a good day I get to do little else.  Much of it is triage, but some of it might be quite interesting?

Classes, Why and How

Thomas Schindler (2019) Philosophical Studies 176:407–435.

Thomas (who I know from when he was a JRF at Clare College) advances a type-free theory of classes. This is not my field, and I’m hardly in a position to evaluate the technical success of his attempt—though it seems plausible enough to me! My interest was piqued because (a) Thomas, and (b) his approach is to limit the “range of significance” of predicates or propositional functions, such that attempting to evaluate them at the excluded points (“singularities”) is impermissible, and yields no outcome. If one treats the problem cases—the set of all sets that aren’t members of themselves, for example—as such singularities, then one thereby avoids the paradoxical outcomes. (From one angle, this is Russell’s approach too, but it abandons Russell’s constraint that ranges of significance must be clumped into types.) Thomas is inspired, he says, by some remarks of Gödel, but this approach also resembles my own favourite response to the semantic paradoxes, cassatio.

Cassatio treats the paradoxical sentences as semantically improper.  The Liar, for example, has no truth value, because it fails to express a proposition and thus has no truth conditions. Crucially, cassatio does not ascribe some third, non-classical truth value (such as µ)—an approach which only invites so-called revenge paradoxes. (µ is not T, so “this sentence is not true” generates a paradox even if you try to assign it µ.) Rather, on cassatio, the Liar has no truth value at all, because it doesn’t assert anything. Calling it true and calling it false are both wrong, but non-paradoxical, in much the same way calling the planet Mars “true” or “false” would be.

As I say, I think Thomas’s approach is plausible, though I can’t evaluate its technical success. But from the point of view of a cassatio fan, I have two questions/suggestions: 

— Why are the singular points singular? A successful treatment of the paradoxes, it seems to me, needs to explain why things go wrong there, and not merely exclude them arbitrarily for convenience. (Though if we are arbitrarily excluding things, excluding fewer is better—on which count Thomas’s approach, if consistent, is better than Russell’s, for excluding only the paradoxical cases.)

Cassatio excludes the Liar on the basis that it viciously delegates its truth conditions to itself. If McX says “P” and I say “what McX says is wrong”, then there is no problem: I’ve delegated my truth conditions to McX’s utterance. But with the Liar, we follow the reflexive arrow of delegation around in a circle forever, never getting to a genuine assertion. 

I suspect that a similar approach can explain why the Russell set is problematic: the intension S = {x|x∉x}, when applied to S itself, circularly delegates S’s inclusion conditions to itself. So perhaps here cassatio can supplement Thomas’s approach.

—What about the Truthteller? An intuitive way to motivate cassatio is to think about “This sentence is true”. There is no paradox there: it is true if it is true, and false if it is false. But which is it? There is no answer to that question, because the Truthteller doesn’t make any real claim. Cassatio treats the Truthteller as improper for the same reason as the Liar: it doesn’t, despite appearances, ultimately say anything.

The Truthteller’s set analogue is S = {x|x∈x}, applied to S. Is it a member of itself? It is if it is, and it isn’t if it isn’t, so no paradox; but which is it? Again you follow a reflexive arrow of delegation forever without getting to an actual inclusion condition. 
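The endless delegation in both cases can be made vivid by brute force. A toy sketch of my own (nothing to do with Thomas’s formal machinery): treat membership as a predicate and ask the question directly, and evaluation simply never terminates.

```python
import sys

# A toy rendering of the "endless delegation" picture: treating set
# membership as a predicate, both the Russell set and its Truthteller
# analogue hand the question back to themselves forever.

def russell(x):
    return not x(x)   # x is in S iff x is not in x

def truthteller(x):
    return x(x)       # x is in S iff x is in x

sys.setrecursionlimit(100)  # keep the inevitable loop short

for f in (russell, truthteller):
    try:
        f(f)  # "Is S a member of itself?"
    except RecursionError:
        # We never reach an actual inclusion condition.
        print(f.__name__, "delegates forever")
```

Neither function is paradoxical as a piece of code; each just fails to deliver a verdict, which is exactly the cassatio diagnosis.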

What does Thomas’s approach have to say about that case? I would assume it is ripe for the same treatment, so that both problem cases are excluded. This is slightly more restrictive than excluding only the paradoxical cases, but still much less so than Russell’s.

Anyway, fun!

Decoding the Brain

Ritchie, Kaplan & Klein (2019) 70 BJPS 581-607


The subtitle of this paper is “Neural Representation and the Limits of Multivariate Pattern Analysis in Cognitive Neuroscience”, but I read it anyway. It’s actually a very friendly paper which they could have called “MVPA decodability doesn’t entail explicit representation”.

The background is what they call the “decoder’s dictum”: that MVPA decoding of  neural activity tells you what information the decoded patterns represent. The paper argues that this is false, for two basic reasons. 

One is that MVPA decoding can pick up inessential correlations between the activity and the represented information. Those correlations might be in the stimulus—you can tell whether the subject is looking at up-right or up-left orientated lines, but perhaps you are picking up the output of edge detectors and things will fall apart if you make your stimulus patches square rather than round. Or they might be introduced by biases in the measurement procedure etc.

The second is that recoverability of information doesn’t mean it’s represented in a usable form. Visual input, for example, is fully informationally encoded in early processing (V1)—if it wasn’t, later stages would have to make it up. But it’s not usable in that form; it’s merely latent. Processing it into functionally available form is what later stages do. 

The authors note that correlation with behaviour helps nail down usable content, but even that isn’t perfectly reliable. They suggest some further approaches which seem plausible enough to me.  These involve connecting the neurological analysis with psychologically derived similarity relationships among the stimuli, or in the other direction with behaviour. This gladdens my pluralist heart; it doesn’t allow you to assign content based on neuroscientific data alone, but that is as it should be.

Birch, Are kin selection and group selection rivals or friends? (Current Biology 29 R433-438 2019)

This is a nice piece by Jonathan on the rivalry—and putative equivalence—of kin selection and group selection. He starts by noting that the well-known equivalence results have done exactly nothing to dissolve the dispute between kin-selectionists and group-selectionists. Why might that be? Jonathan’s answer is that the equivalence is purely statistical, and papers over genuine causal differences between the two. The essay has a useful historical summary of how generalisations of both approaches converged, but the upshot is this. You can decompose the mathematics of selection arbitrarily into (a) within- and between-group components and/or (b) direct and indirect (kin-based) costs and benefits. One gets you the multi-level Price equation; the other gets you rb>c.

The two decompositions have to be equivalent, because they are mathematical rearrangements of the same principle (viz, the Price equation with some assumptions). But arbitrarily dividing selection into within and between-group components tells you nothing about the causal role of groups. Nor does multiplying selective benefit by relatedness tell you anything about effects of kinship. (E.g. rb>c is satisfied for purely selfish traits, where r=1 for all beneficiaries.) 
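The arbitrariness point is easy to make concrete with a toy calculation (numbers and grouping invented by me, not taken from Birch): however you partition the population, the between-group and within-group components of Cov(w, z) sum to the total covariance, so the decomposition by itself carries no causal information.

```python
import statistics

# Invented toy numbers: z is the trait (1 = altruist), w is fitness,
# and the grouping is one arbitrary choice among many. Equal group
# sizes throughout, so no weighting is needed.

def cov(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return statistics.fmean((x - mx) * (y - my) for x, y in zip(xs, ys))

groups = [
    ([1, 1, 0], [0.9, 0.9, 1.2]),   # altruist-heavy group, higher mean fitness
    ([0, 0, 0], [0.8, 0.8, 0.8]),   # all-selfish group, lower mean fitness
]

z_all = [z for zs, ws in groups for z in zs]
w_all = [w for zs, ws in groups for w in ws]
total = cov(w_all, z_all)

# Between-group component: covariance of the group means.
between = cov([statistics.fmean(ws) for zs, ws in groups],
              [statistics.fmean(zs) for zs, ws in groups])

# Within-group component: average within-group covariance.
within = statistics.fmean(cov(ws, zs) for zs, ws in groups)

# The multilevel Price identity: the components always sum to the total.
assert abs(total - (between + within)) < 1e-9
print(round(between, 4), round(within, 4), round(total, 4))
```

In this particular toy case altruism is disfavoured within groups but favoured between them, and the two components happen to cancel exactly; the identity itself holds whatever numbers you plug in, which is the point.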

So the stubborn disagreement is, quite properly, about the causal contributions of kinship and group structure. 

Birch closes with a positive proposal for identifying the relative roles of kinship and groups. He defines two scalar properties K and G. High-K populations are ones where whole-genome correlation is a high proportion of genetic correlation; since kinship but not selection can produce whole-genome correlation, that tells you whether the relatedness on which rb>c operates is due to kinship (as opposed to, say, green beards). High-G populations are ones which are clustered and isolated (borrowing from network theory). A population’s position in K-G space tells you something about whether groups or kinship are likely to be selectively ert. (And other things besides: kinship-based relatedness is likely to be more stable than other kinds; really clumpy groups might be setting up for an evolutionary transition.)

Q: It looks to me like high K and high G are necessary for group and kin causation—properties can’t be causal unless they’re genuine features of the populations, rather than arbitrary gerrymanders. But I wonder whether they are sufficient. In principle a population can have groups without those groups doing much of anything. Similarly a population could have a highly kin-marked structure without producing much in the way of altruism. (Consider any parthenogenetic species.)  So I suppose the ultimate question is still counterfactual/interventionist. Unless there’s something relevantly causal in the definition of K and G that I’m missing. 

Anyway, this is a very helpful piece, with an admirably concise explanation of the issue and background (with the formalism available but boxed off, which is excellent practice). Useful for philosophers as well as what (presumably, given its venue) is its intended audience of biologists.

It is also, of course, biological grist for the causal inference mill. I’m going to have a look at Okasha’s 2016 paper applying DAGs to the kin/group question next.

Defending Narrow Content

Anandi Hattiangadi’s review of Yli-Vakkuri and Hawthorne’s “Narrow Content” neatly takes on the core of their Mirror Man argument.

Mirror Man is a symmetrical being living in an asymmetrical world. He has in his mental lexicon two names, which refer to two people, located on his right and left sides, who are indiscernible (perceptually and otherwise) to MM. 

Y and H suggest that no purely internal assignment of content to those two names can get the truth values of MM’s thoughts right. Suppose that Leftie is a philosopher and Rightie is a scientist; then “Rightie is a scientist” is true and “Leftie is a scientist” is false. But the two thoughts are identical as far as narrow content goes, and they share all the relevant indexes (time, place, subject). So if content is narrow, and content (plus index) determines truth value, they should have the same truth value. The internalist can try to add parameters or indexes to distinguish them; but Y & H can just come up with more symmetries. (This last is the “parameter-proliferation” argument.)

AH responds that Y and H are just assuming that “Rightie” and “Leftie” have distinct referents, and it’s not clear they are entitled to. For even an externalist about content thinks a name gets its reference via a mental act. Not any old causal contact with an object is a baptism; only one accompanied by a deliberate use of a name for the object. (Similarly, transmission of names requires that the hearer intend to share the speaker’s reference.) But MM has no way of constructing a mental act that is about Leftie and not Rightie, or vice versa.

The internalist upshot is that “Rightie” and “Leftie” do not determinately refer to Rightie and Leftie. (Perhaps they don’t refer at all.) The result is that the two thoughts have the same (lack of) truth value, and the argument fails.

AH considers, as a possible externalist response, “hard-wiring” neural connections between “Leftie” and MM’s mental demonstration of Leftie. This move has neural connections play a sub-semantic role; although content cannot distinguish the objects of the demonstrations, something more brute can do the trick.

AH responds that such connections are agent-level, and so not suitable for neural individuation—thoughts about Leftie and Rightie might even be realised by the very same neurons, in a connectionist brain.

***

Now, I am not fond of talk about hard-wiring in general (neurons are plastic, and thoughts are implemented dynamically anyway). But I think AH misses the point here. 

The key for most externalists is causal connection between term and referent. If a baptismal tokening of “Rightie” is in fact causally produced by a demonstration of Rightie and not by one of Leftie, and vice versa, then that is enough to break the tie. Presumably the two causal paths will run through neurons—one chain of activation running within the left side of MM’s brain, one on the right—but that doesn’t imply any neural hard-wiring, just actual connections. (Nor does it require the kind of overall functional isolation between MM’s two hemispheres that would make him two agents.)

And sure, the result will look weird in the case of MM, where the basis of the difference in reference is purely external. MM has no way of saying which of his names refer to which person. But MM is weird, and weird cases have weird consequences. The question is whether there’s a problem in principle with “Leftie” and “Rightie” referring distinctly while sharing internal content, and I don’t see one here.

For the record, I am not defending Y & H’s complete rejection of narrow content. I am an old-fashioned mixed-content fan. External connections are needed to get representation off the ground—you can’t define your way into meaning—and may well play a direct role in the semantics of natural-kind terms, etc. But narrow content—cognitive association between concepts—is also important. “Bachelors are unmarried” is true analytically, because the RHS is cognitively linked with the LHS in some appropriate way. What it’s an analytic truth about, on the other hand, has an ineliminable external element.

Naftali Weinberger, Path-Specific Effects (Brit J Phil Sci 70 (2019): 53-76)

Weinberger makes a very plausible case that interventionism—tendance Pearl (2001)—can explain path-dependent effects in a way that probability-raising accounts of causation cannot. 

The problem of path-specific causation is illustrated by the “Hesslow case”. One side effect of birth control pills is to increase the risk of thrombosis; apparently they alter blood chemistry in a way that makes clotting more likely. But they also decrease the risk of thrombosis!  They prevent pregnancy, which also has thrombosis as a side effect. The net effect of these two causal paths might be positive (so the pill increases the chance of thrombosis overall), negative, or even exactly zero.  This situation is obviously different from a simple risk-raising or lowering effect—that of statins, say—in ways that are practically relevant. Suppose that taking the pill lowers the risk of thrombosis overall. Telling an infertile person to take it for that reason would be bad advice. The challenge for probability-raising accounts of causation is to explain that difference. How can taking the pill both raise and lower the chances of thrombosis?

Probability theorists have attempted to address this problem, of course, and Weinberger rehearses the debate between Nancy Cartwright and Ellery Eells about how to do so. I’ll elide the details here, but Weinberger’s claim is that the resources probability theorists can appeal to—differences in background variables and (per Cartwright) singular or token causation—are not enough. By contrast, Pearl’s approach allows for a neat system of distinguishing between the paths.  

Which is this. To determine whether (and indeed how much) the pill contributes to reducing the risk of thrombosis via the blood chemistry path, you intervene (in the Woodwardian sense) on the mediator of the other path. That blocks that other causal path, allowing you to see the effect via the path you’re interested in.

More formally, you set the value of the “pregnancy” variable to “isn’t”, breaking its causal relation to taking the pill, and look at the difference in the value of the outcome “thrombosis” variable when “takes the pill” is set first to “does” and then to “does not”. (That difference can be calculated using the structural equations in your model.) In the Hesslow case, the difference will be non-zero, which tells you that there is an effect along that path.

Things get more complicated when the other path is direct, that is, has no mediator. To see that the pill influences thrombosis via pregnancy, you can’t intervene on a mediator in the other path, since (let’s stipulate) there isn’t one. (Or perhaps there is one, but it interacts with pregnancy, and so shouldn’t be intervened on.) The framework can handle this.  Rather than blocking the other path, you fix its contribution, by holding the original cause fixed.  Then you intervene on the mediator of the path you are interested in as if there were a change in the original cause.  Since the other path is fixed, this allows you to see the impact of the path you’re interested in. So in Hesslow you hold fixed that the pill is taken, and then intervene on the pregnancy variable to give it the value it would have if the pill hadn’t been taken. You compare the outcome with what happens if the pregnancy variable is allowed to take its “pill taken” value.  Again, the structural equations will tell you that the difference is non-zero, so there is causal influence down the pregnancy path.
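Here is a toy structural-equation sketch of both recipes (all numbers invented by me; the variable names follow the Hesslow case, not Weinberger’s own notation):

```python
# Toy structural equations for the Hesslow case. All numbers are invented
# for illustration; only the interventionist recipe follows the text.

def p_pregnancy(pill):
    # The pill sharply lowers the chance of pregnancy.
    return 0.05 if pill else 0.30

def p_thrombosis(pill, preg_prob):
    # The pill raises risk directly (blood chemistry); pregnancy raises it
    # too. No interaction terms, to keep the arithmetic transparent.
    return 0.01 + 0.04 * pill + 0.10 * preg_prob

# Total effect of taking the pill on thrombosis risk:
total = p_thrombosis(1, p_pregnancy(1)) - p_thrombosis(0, p_pregnancy(0))

# Blood-chemistry path: intervene on the mediator, setting pregnancy to
# "isn't" on both sides, so only the direct path transmits a difference.
direct = p_thrombosis(1, 0.0) - p_thrombosis(0, 0.0)

# Pregnancy path: hold "pill taken" fixed and intervene on the pregnancy
# variable, toggling it between its pill and no-pill values.
indirect = p_thrombosis(1, p_pregnancy(1)) - p_thrombosis(1, p_pregnancy(0))

print(f"total {total:+.3f}, direct {direct:+.3f}, indirect {indirect:+.3f}")
```

With these numbers the direct path raises the risk by 0.04, the pregnancy path lowers it by 0.025, and the net effect is a 0.015 increase: the pill both raises and lowers the chance of thrombosis, depending on which path you interrogate, which is exactly the structure a bare probability-raising account cannot articulate.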

Weinberger’s framework also usefully distinguishes between “necessary” and “sufficient” path-dependent effects. The former compares the effect along that path to the total effect; the latter compares the effect along that path to the result where the original cause doesn’t happen. These will provide answers to questions with different contrasts. If you took the pill and got thrombosis, but not pregnant, you probably want to know how the risk would have been different if you hadn’t taken the pill (the sufficient effect along the direct path). But you might also want to know what it would have been if the pill hadn't prevented pregnancy (the necessary effect along the indirect path).

The necessary/sufficient terminology looks counterintuitive, but it fits nicely with the point that the total effect is the sum of the sufficient direct effect and the necessary indirect effect, and vice versa. (This point is made much clearer by the diagrams in the paper.) This allows a decomposition of the total effect that avoids the pitfalls of treating causes as simply additive—even where there is interaction between mediators along the different paths. (Although Weinberger warns against slipping from the possibility of mathematical decomposition to the idea that there really is an ontologically independent contribution along each path.) 

In all this, of course, Weinberger is very much in the spirit of Pearl’s own criticism of attempts to understand causation in terms of probability, or any other sort of correlation. (See chapter 1 of the Book of Why for a very friendly discussion.) But the detailed application of Pearl’s framework to the philosophical discussion of the Hesslow case, and other path-dependent effect cases, is independently valuable. It took me a little while to puzzle out some of the logic—the discussion of Eells versus Cartwright in particular had me scribbling on paper. It’s really helpful, I think, to pay attention to the negation load you’re imposing on your reader. (I appreciate I am also guilty here.) But this is an excellent, and I think correct, example of how the interventionist framework can help with longstanding philosophical puzzles.

John Donaldson “Vertical versus Horizontal: What is really at issue in the exclusion problem”—forthcoming in Synthese

By sheer luck, this followed on nicely from the Devlin Brown paper from yesterday.  Donaldson distinguishes between “horizontal” and “vertical” responses to the exclusion problem. Vertical responses look at the relation between the physical and mental causes. They try to show that the two are ontologically related in such a way that their co-existence as causes is not really problematic—because they are not fully distinct causes. “Vertical” because that is how the relation between mental and physical causes is usually depicted in toy causal diagrams (as in the example below).  Horizontal responses, by contrast, look at the causal relation itself, generally depicted horizontally, or diagonally in the mental → physical case.


Donaldson’s argument is that vertical strategies are the only ones that can salvage non-reductionist physicalism. If vertical strategies can succeed, horizontal ones are unnecessary. On the other hand, a horizontal strategy—however successful its account of causation—cannot be enough, because it cannot address the coincidence aspect of (systematic) overdetermination by mental and physical causes. Why should a certain physical cause always be accompanied by a mental cause, whenever it does its work?  (Donaldson notes that this problem arises even if, as with Zhong (2014), you avoid overdetermination by not counting one of the properties as a cause. They're still always there!)

I agree with Donaldson’s distinction, and with his claim that a horizontal strategy cannot succeed alone. As I said yesterday, I think the key to avoiding objectionable overdetermination is that the two causes are not fully distinct; and the ontological link between them will also explain their co-appearance.  But it seems to me that a vertical strategy alone is not sufficient, either.  A suitably compatibilist account of causation is needed too.

Consider the reductive physicalist's response to the exclusion argument. They avoid overdetermination by identifying the (token) mental and physical causes.  This is the ultimate vertical strategy; there is no coincidence problem for identicals.  But it’s reductive physicalism.  Non-reductive physicalism needs the two causes to be ontologically indistinct, but it also needs them to still count as two causes. And for that we need to explain how their causal contributions can be distinct.  If the causal relation between the two causes and the effect is the same, then non-reductive physicalism looks like epiphenomenalism: the mental cause is not contributing anything.

What’s needed, then, is an account of causation on which ontologically connected mental and physical properties can both count as making separate causal contributions. In other words, a “horizontal” approach to the exclusion problem. 

Donaldson spends some time discussing the “proportionality” horizontal approach, adopted by both Zhong and List and Menzies (2009). I agree that this won’t do it—it avoids overdetermination by ruling out either the physical or the mental cause as a cause, where I want both to count. But I think a (related) approach of contrastive interventionism can do the job. There are interventions on the mental cause that change the outcome, and also interventions on the physical cause that change the outcome, and the two sets of interventions aren’t identical.  More on that next, I think. 

Christopher Devlin Brown, “Exclusion Endures” forthcoming in Analysis

This is an interesting paper which, together with John Donaldson’s forthcoming “Horizontal vs Vertical”, has usefully focused my attention on the real nature of the compatibilist’s solution to the causal exclusion problem.

Brown’s claim is that the compatibilist approach, advanced by non-reductive physicalists like Karen Bennett (2003), can also be exploited by dualists, as a way to avoid the causal-closure argument for physicalism. 

The causal exclusion problem poses similar challenges to the dualist and the non-reductive physicalist. They both want to say that mental causes are distinct from physical ones.  But, given the causal closure of the physical, they want to say that mental causes’ effects have sufficient physical causes as well. The combination makes mental properties look redundant: at worst, they are epiphenomenal and so not mental causes at all; at best, their effects are systematically overdetermined, which is bizarre. 

The “compatibilist” response from the non-reductive physicalist is to accept that mental and physical causes double up as causes of their effects, but then deny that this is objectionable—on the basis that the mental and physical causes are not wholly distinct, since the former depends ontologically on the latter.  That means the problematic overdetermination counterfactual—if the mental cause had not occurred but the physical cause had, the effect would still have occurred—is only vacuously true.  The antecedent can never be satisfied, since if the mental cause is removed the physical cause goes away with it. 

This approach doesn’t seem like an option for a dualist. Dualists think mental and physical are ontologically distinct. That’s kind of the point.

Brown’s suggestion, however, is that the dualist can appeal to a different sort of necessary connection—one that is not ontological but nomological. Say laws of nature are metaphysically necessary, and there are psychophysical laws connecting the mental and physical causes. That’s enough to make the overdetermination counterfactual vacuously true: given necessary laws, the mental cause cannot be removed while the physical cause remains. Why think the laws of nature are necessary? Perhaps because you accept dispositional essentialism: the idea that properties have their nomological roles necessarily. Those are both controversial views, and not ones I endorse.  But they are respectable positions, so the dualist’s position against the causal closure argument has at least been strengthened. 

Can the physicalist respond? One question is about the nature of the psychophysical laws. If the relation between mental and physical properties is not ontological (not constitution, realisation, etc) presumably the law connecting them must be causal. (What else could it be?) And more specifically, the physical property has to cause the mental one. (The other direction would seem to assume what we’re here to explain.)

But. Now the picture is one of redundant causation. The physical cause produces both a physical effect and a mental cause, which also produces the physical effect.


The systematic causal detour through the mental property looks just as otiose and objectionable as an autonomous mental cause would be. Making that extra causal route necessary by way of dispositional essentialism does not look like an advance, even if it does handle the counterfactuals. Consider an example: say that throwing rocks at windows breaks them in the usual way, but also (with nomological force) causes a particular phenomenal experience in the thrower, which also (and independently) breaks the windows.

The moral here, I think, is that Bennett’s vacuous counterfactual analysis is just a way of handling the formalities. The substantive insight which the compatibilist physicalist can offer is the non-distinctness of the mental and physical causes. That is what makes the existence of multiple causes harmless. In other words, the exclusion problem needs what John Donaldson calls a “vertical” solution, on which more soon.

What is Social Construction?

Another gem from Esa Diaz-Leon.  I really enjoy Esa’s papers—which I’ve been reading since we were both looking at phenomenal consciousness: they’re models of clarity, carefulness and concision.  (As well as, as far as I can tell, being basically right about things.)  Here she distinguishes between “causal” and “constitutive” social construction, and points out that only the latter is inconsistent with the sort of things social construction is supposed to be inconsistent with: natural-kind status, biological realism, intrinsicality.  Which makes constitutive constructions the best things to lean on for social change.

One nit: at p1140 she puts the question as “is the property that all things that fall under X have in common just a matter of being classified in that way by individuals like us, or do they share an underlying nature or ‘essence’, independently of how we classify things?”  This seems to lump kinds produced by looping—where the process of classification causally produces properties—on the constitutive side, rather than the causal (where presumably it belongs).

Better Explained

Oy.  Things I have been missing all my life, and I’m not sure how I missed them.  Maths for people who like maths, but who think technical Wikipedia is mostly useful for illustrating the distinction between explanations and Ramsey sentences.

The link is to the post on the Fourier transform, which has given me the feeling of understanding.  “Enough circles make a shape” is as far as I had previously gotten, and tbh I’ve survived—but now I have an intuitive sense of how.  
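For anyone who wants to poke at the circles themselves, a from-scratch sketch (standard library only; nothing here is taken from the linked post): the discrete Fourier transform recovers the circle amplitudes, and adding the rotating circles back up reproduces the original shape.

```python
import cmath

# A minimal discrete Fourier transform, to make "enough circles make a
# shape" concrete: dft() finds the circle amplitudes, and idft() adds the
# rotating circles back up.

def dft(xs):
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(xs))
            for k in range(n)]

def idft(cs):
    n = len(cs)
    return [sum(c * cmath.exp(2j * cmath.pi * k * t / n)
                for k, c in enumerate(cs)) / n
            for t in range(n)]

signal = [0.0, 1.0, 0.0, -1.0, 0.5, 0.0, -0.5, 0.0]
rebuilt = idft(dft(signal))

# The sum of circles reproduces the signal (up to rounding error).
assert all(abs(s - r) < 1e-9 for s, r in zip(signal, rebuilt))
```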

Bonus: intuitive understanding of Euler’s formula.  For free. Seriously. 

Unfortunately it has not yet been possible to develop these ideas in a precise way.

Via Jeremy Butterfield (possibly unintentionally), this one is J. S. Bell describing, in an utterly accessible way, the six main ways of understanding the relation between the wavy and particley nature of quantum mechanics, and between the quantum and the classical.  Three ways are “romantic”, weird and paradoxical; the other three are their related but unromantic, sensible counterparts.  It raises again, but utterly fails to offer an answer to, the most basic and frustrating question I have about the interpretation of QM: why are we not all pilot-wavers? 

(The title is Bell’s response to Wigner’s dualist picture, on which it is the intervention of extra-physical minds that collapses wave functions and, for example, forces particles to choose locations.  I suppose non-UK people might think Bell was being neutral there.)

PS I couldn’t find a non-paywalled version.  Should anyone read this and want a copy of this or anything else: my email’s over on the left there. 

https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/