Reading Diary

I read quite a lot of stuff.  On a good day I get to do little else.  Much of it is triage, but some of it might be quite interesting?

On an attempt to break the symmetry of the Metamodal Ontological Argument

“The Reverse Ontological Argument” James Henry Collin, Analysis 2022


This is an interesting little paper which claims to break the symmetry of the modal ontological argument, or MOA. (I prefer to call it “meta-modal”; they’re all modal one way or another, but the acronym’s the same.)

The Argument

The MOA is a clever little device made famous by Alvin Plantinga. It uses the existence of reasonable disagreement about the existence of God as the basis for an argument that God must exist.

Let’s agree, it starts off innocently, that both theism and atheism are coherent. Neither side is flat-out contradicting itself, and even though they disagree they can at least understand each other’s positions. (If you think either atheism or theism is literally incoherent, this argument is not for you.)

So the atheist should agree that it might have been true that God exists, even though in fact it isn't. In possible-worlds talk, there is at least one world in which God exists.

Given that concession, the MOA pounces. God, it points out, is defined as a being that exists necessarily. Of course, you can’t infer directly from the definition that he exists; things can’t be defined into existence. But if there is any world where he exists, then by definition he exists necessarily in that possible world. And that means, given plausible assumptions about modal logic, that he exists in every world. Which means he exists in this world, the actual one. 
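
For concreteness, here is one standard way of regimenting that modal step (my own gloss, not a quotation from Collin or Plantinga; read □ as “necessarily”, ◇ as “possibly”, and G as “God exists”):

1. ◇□G (possibly, God exists necessarily: the concession extracted from the atheist)
2. ◇□G → □G (a theorem of S5, the “plausible assumption” about modal logic)
3. □G (from 1 and 2)
4. G (from 3: what is necessarily true is actually true)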

Tah-dah: the possible existence of God, conceded by the atheist, entails his actual existence. Checkmate, atheists.

But! The MOA is symmetric. Just as the atheist has to concede that there’s a world in which God exists, so the theist has to concede that, since atheism is coherent, there is a world in which God does not exist, and atheism is true. But (as we just saw), if God exists in any world, he exists in every world.  So if there’s one world in which he doesn’t exist, he doesn’t exist in any world. Checkmate back!


Breaking the Symmetry

Collin claims to have found a way to break the symmetry: to block the reverse argument while allowing the original MOA to proceed. He starts by pointing out that “God exists” and “God does not exist” are both a posteriori necessities, in Kripke’s sense. (If they were a priori necessities, there would be no need for the MOA, which proceeds from the possibility of coherent disagreement.)

And once a posteriori necessities are in the picture, the inference from coherence (or conceivability) to possibility is complicated. If you think water is essentially H2O, for example, then although it’s conceivable that it could have had some other composition (XYZ), it’s not possible. Since both H2O and XYZ are conceivably water, the only way to work out which of them is possibly water is to find out, empirically, which of them is in fact water.

So far, so still symmetrical. But Collin now appeals to an idea about the relation between God and physical objects. According to at least some theistic views, physical objects necessarily depend ontologically on God. The claim that physical objects depend on God is, like theism itself, an a posteriori necessity. It is coherent, and so is its denial. But what isn’t coherent is that both (a) physical objects depend on the existence of God and (b) there is a world in which physical objects exist but God doesn’t. So the atheist claim that there is a world without God implies that physical objects don’t necessarily depend on God. But there is no empirical warrant for that claim! Nothing we’ve learned about physics tells us that physical objects aren’t ontologically dependent on God. So the atheist can’t help themselves to it, and so isn’t entitled to assert the possibility of a world without God.

But isn’t this also symmetrical? No, says Collin. The theist, by contrast, can accept that physical objects are not necessarily dependent on God: even if God exists, physical objects could be ontologically independent of him, even essentially so. So the theist claim that God exists in a world does not entail either physical dependence or its denial, and the theist remains entitled to their possibility claim, while the atheist is not.

This is ingenious, if odd. God is supposed to be the creator of all things! But perhaps being a creator isn’t part of his essential nature the way that existing is. Or perhaps divine creation is a strictly causal relation after all. Anyway, Collin’s argument here raises a striking issue:

"If the actual physical things turn out to be, for instance, essentially non-dependent, such that their existence cannot depend on anything else, this also does not exclude the possibility of a perfect Being. Maximal greatness does not require being able to create that which is impossible to create, so the existence of a perfect Being would not require the dependence of physical things on that Being in worlds where those physical things exist.”

This is an interesting limitation on divine omnipotence! God, it seems, is limited by a posteriori necessity as well as by logical necessity. That means, in particular, that God is not in charge of essences. Where, then, does water get its essential nature? Where, in the example Collin works with, do we humans get our essences of origin?


Responses

This is an intriguing argument. But unsurprisingly to me, I find myself unpersuaded. Here are some thoughts on why:

[1] Collin is rather helping himself to an a posteriori necessity here. “Physical objects depend essentially on God” isn’t something that has any independent plausibility. You might come to believe it as a consequence of theism, but it’s odd to go the other way. Now Collin is careful not to assert that dependence, of course, but implausible a posteriori necessities are cheap, and offer the possibility of revenge.

I suggest: physical objects depend essentially on the absence of God. That is, suppose they are a posteriori necessarily inconsistent with the existence of God. That seems just as coherent as Collin’s principle. But it creates a mirror situation. The theist has to deny it: the existence of any world with a God entails the principle is false. And the atheist can remain agnostic: maybe physical objects co-exist with his absence but don’t depend on it.

[2] The symmetry-breaking a posteriori necessities here are essence claims. People essentially originate in a particular sperm/egg combo; physical objects essentially depend on (or are essentially independent of) God. They are not like “Cicero is Tully”, where the necessity comes from the less controversial necessity of identity. An atheist who doesn’t believe in a posteriori essences (such as myself, some days) needn’t be worried by them. (I take it “God exists necessarily, if he exists at all” involves a definitional, a priori essence. What’s a posteriori is the antecedent, that he exists at all.)

[3] Don’t we have empirical evidence in favour of  physical objects not ontologically depending on God? Arguably, all of physics is such evidence. We’ve managed to do a tantalisingly almost-complete job of describing and predicting the phenomena, without appeal to God. That’s pretty good evidence that they don’t depend on God, for the same reason we’re justified in thinking they don’t depend on monads. Collin’s argument seems to depend on a radical separation of metaphysics and science, such that science always leaves metaphysical claims open. (Fine-tuning? Not a fan, personally, but at any rate that tells in favour of a designing/causing creator, rather than an ontologically sustaining one.)


The Original Argument

Of course, the symmetry objection is just a placeholder for an actual refutation of the MOA. If the symmetry objection is right, the MOA can’t work; but how exactly does it go wrong? A real diagnosis has the advantage that it can still work even if, as Collin claims, the reverse argument fails for some reason. (We can pull up the symmetric ladder behind us.)

The answer to my mind is simply that the meta-modal move is improper. Since God exists necessarily, if at all, there is no such thing as a world in which he exists. His existence cannot be witnessed by a single world, because no single world has the resources to represent his necessity. To say God exists at all is to say something about the space of all possible worlds, not just one. The atheist should simply deny, therefore, that there is a possible world in which God exists. (Likewise, the theist should deny that there is a possible world where God does not exist.) Not on the basis that theism is incoherent—we can still accept that there is reasonable disagreement to be had. But its coherence can’t be represented by the formula “∃w: Theism is true at w”. Its coherence comes instead from the intelligibility of the claim that “God exists” is an a posteriori necessity. We’re arguing about a necessary being; that argument has to be had in terms of necessity.



Superdeterminism part 2—Bell’s inequality

(What follows is an attempt at an intuitive explanation of Bell’s inequality, initially for my own benefit. The Bell’s Inequality pages on wikipedia are an excellent example of the distinction between a definition and an explanation.  There are some good simplified accounts out there, but to my mind they don’t quite get at what’s going on.)


The starting point is that quantum mechanics, as a theory, is indeterministic. All it can tell you is the probability that there will be a particle at a certain location at a certain time; it offers no way to say definitely whether it will be there or not.[1] Even complete knowledge of all the quantum-mechanical details—which is humanly impossible for a system of any real size—would still only yield a probability.

The natural response, for the deterministically inclined (like Einstein), is that quantum mechanics must simply be incomplete.  Some additional features of the physical situation, ones that aren’t described by QM, determine where the particles actually end up going. Once physicists understand those “hidden” variables, order will be restored and physics will be normal again. God doesn’t play dice; he can be mysterious, but he’s not capricious.

The alternative is that QM is complete, and reality is indeterministic. The reason QM can’t tell you where particles will go is because nothing determines where they go: it’s genuinely random.

Obviously, one way to resolve this debate would be to find the hidden variables. On the other hand, not finding them doesn’t prove the determinist wrong—perhaps we just need to keep looking. So how can the indeterminist win the debate?  Scientists want to find explanations of things; that’s kind of what they’re for. How can the indeterminist persuade them to give up the search, and accept that QM is the best we can hope for? 

Bell’s inequality, surprisingly, offers an answer: a way for the indeterminist to argue that no such hidden variables do or can exist.  

The argument goes something like this.

 • If, as the determinist believes, what we see is determined by factors that are fixed in advance, then our observations have to obey some simple rules of arithmetic.  

• So, if the results of experiments vary in a way that breaks those rules, then what we’re observing must be genuinely random. 

• The results of experiments do indeed break those rules.

So what are these rules? If you search up “Bell’s inequality” you will be shown some fiendishly complicated formulae.  Fortunately, the arithmetical principle involved is simple; the complexity comes from wrestling it into a form that can be tested by quantum mechanical experiments. 

Here’s the principle: if you have any three numbers a, b and c, the difference between a and c is equal to the sum of the difference between a and b, and the difference between b and c.  In a formula:

(a - c) = (a - b) + (b - c)

That’s very simple algebra: the right hand side has a positive and a negative b in it, which cancel out, leaving the left hand side.

Now, if you look at the absolute values of the differences, you get an inequality: the right side has to be the same or larger than the left.[2] Formally:

|a - c| ≤ |a - b| + |b - c|

Informally, this is also quite intuitive. As an illustration, let’s say the difference between my height and my wife’s is two inches, and that there are three inches between my wife and my son. How much difference is there between me and my son? Well, the most it can be is if you stack the two differences on top of each other:

[figure: the two differences stacked end to end, five inches in all]

That’s the stereotypical family, with the wife having the middle height. If I have the middle height—as in reality—the difference between us will be less, just one inch:

[figure: me in the middle, so only one inch between me and my son]

But there’s no way the difference can be more than 5 inches.
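
For anyone who likes to see the arithmetic run, here is a tiny sketch; the specific heights are invented to match the illustration above.

```python
# A minimal check of |a - c| <= |a - b| + |b - c|, using made-up heights in inches.
import random

def violates(a, b, c):
    return abs(a - c) > abs(a - b) + abs(b - c)

# Wife in the middle: |me - wife| = 2 and |wife - son| = 3, so |me - son| = 5.
print(violates(70, 72, 75), abs(70 - 75))   # False 5
# Me in the middle: the differences partly cancel, so |me - son| = 1.
print(violates(72, 70, 73), abs(72 - 73))   # False 1

# No triple of fixed numbers can ever violate the inequality:
assert not any(violates(*[random.uniform(0, 100) for _ in range(3)])
               for _ in range(100_000))
```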


That right there is the essence of Bell’s inequality. If you accurately measure a set of differences, and your readings don’t satisfy the inequality, and the laws of arithmetic remain in place, then the values of what you’re measuring can’t have been fixed before you measured them.

That doesn’t quite mean that the values are random. The values might simply have changed, in an ordinary deterministic sort of way, while you were doing all the measurements. So experiments testing Bell’s Inequality have to go to great lengths to ensure that nothing gets changed during the process—isolating the experiment from outside influences, and making sure that the process of measuring one value doesn’t affect the others. If you can rule out Skander growing during the experiment, and yet he’s still 7 inches taller than me when he stands next to me, then you’re left with the possibility that we don’t have determinate heights after all. The only other ways to avoid indeterminacy—the so-called “loopholes”—seem even less plausible than randomness. (Much more on that later.)


So why are the formal statements of Bell’s inequality so complicated? It’s to do with finding an instance of the basic inequality that contradicts QM’s predictions. (QM doesn’t say every measurement violates the inequality.) It also involves some more quantum mechanical absurdity to do with measurement. More on that next.


[1]  Here I’m treating location as the property of interest. The same is true for other properties, such as spin and momentum.

[2] The "absolute value" of the difference between two numbers is just how large it is—regardless of which number is bigger, and so whether the difference is positive or negative.


Superdeterminism part 1

Last year, Sabine Hossenfelder and Tim Palmer published “Rethinking Superdeterminism” (Frontiers in Physics, 8, 139), a startling defence of determinism in quantum physics. Hossenfelder, who is a physicist with the Frankfurt Institute for Advanced Studies, is well known as a critic of the current direction of physics research. (You can go to her blog, YouTube channel or very readable recent book “Lost in Math” for more, but the book title gives you the general idea.) Palmer is a physics professor at Oxford, a Fellow of the Royal Society and a Dirac Medal winner for climate modelling. If you’re looking for a critique of the conventional wisdom in physics—one that isn’t based on crankery or amateur incomprehension—there aren’t many better people to get it from.

Their target is the apparent randomness of Quantum Mechanics.[1] Notoriously, for example, QM refuses to predict for certain where particles will go. All it will give you are odds, statistical predictions about where they are likely to be found next. To find out where a particle actually goes, you have to wait for it to go there and take a look.[2]

This randomness famously offended Einstein, who insisted that “God does not play dice with the Universe”.[3] But the great theoretician was defeated by experiment. Starting in 1969, physicists have observed violations of a formula known as Bell’s inequality, which can only happen if events are random (unless something even weirder than that is going on). Since then the consensus of physicists has been that, at the fundamental level, shit just happens.

So how do H&P deal with Bell’s inequality? By saying one of the even weirder things is going on. 

The Bell’s inequality experiments depend on some assumptions about the experimental setup.  Those assumptions are very plausible, but they are still assumptions. Over the years, physicists have tried to confirm them formally, and so close the remaining loopholes for determinism. But some theoretical outs remain. 

The assumption that H&P target is, basically, that fundamental particles don’t care about physicists, and, in particular, that they aren’t coordinating their behaviour to make it look to physicists like they are acting randomly when they aren’t. 

Imagine that whenever physicists set up devices to measure particles, they encountered particles with properties that were just right to screw up those experiments and make it look like the results were random.  If the physicists had measured different features of those particles, they would have seen ordinary deterministic behaviour. But if they’d set up their experiments to measure those other properties, different particles would have found their way into the instruments, ones with properties that would make those experiments seem random.

That seems… pretty bizarre. Obviously microphysical particles don’t literally care what observations physicists make. Electrons have no interest in defending an orthodox interpretation of QM, or indeed any interests at all. And, conversely, the physicists don’t know what the particles’ properties are until after they’ve run the experiment, so they can’t be making their set-up decisions based on the particles’ properties. Which suggests that… something… would have to be making physicists’ decisions for them, in order to ensure they measure the right properties to get apparently random results. Even worse (!), experiments have shown that whatever that something is, it must have done its work billions of years in the past.

With that setup, you might be surprised to read that I think Hossenfelder and Palmer have done an excellent job. Indeed, they’ve convinced me.  Not that superdeterminism is true—the actual physics of superdeterminism is still in its early stages, and might still prove fruitless. But that it’s a perfectly reasonable approach, and, in fact, less implausible than the alternatives.  Moreover, it seems to me that their approach might shed light on some long-standing philosophical puzzles. 

So: astrology on steroids and why I’m now a one-boxer. [To be continued…]



[1] There are deterministic approaches to fundamental physics besides superdeterminism, but these are weird in other ways. One is the “many worlds” interpretation of QM, where all the possible outcomes occur, but in different universes; which seems like an expensive way to save determinism. A less fantastical approach is called pilot-wave mechanics; but like the conventional view this is “non-local”, which means that events in one place can have effects in distant places instantaneously, breaking the light-speed limit imposed by Relativity. Superdeterminism tries to rescue both determinism and locality.

[2] Of course in everyday life the number of particles involved in events is so huge that the averages rule; ordinary objects behave in predictable ways.

[3]  For the benefit of any Einstein-theism-truthers, his actual words were ‘The theory [QM] produces a good deal but hardly brings us closer to the secret of the Old One. I am at all events convinced that He does not play dice.’ https://aeon.co/ideas/what-einstein-meant-by-god-does-not-play-dice

Functions are historical, so there

Justin Garson, Putting History Back into Mechanisms (forthcoming, BJPS)

This is in large part a forceful reassertion of the role of history in functions, and thus in biological mechanisms—which are functional in that they are mechanisms for things.[1] I’m very sympathetic.

The main argument is the familiar one from malfunction. An item’s function can’t just be what it does, since then malfunctioning—having a function but not performing it—would be impossible. That means you have to look beyond something's current operations to determine its function, and the obvious candidate is its past operations. Garson points out that the main alternative, appealing to its current operation in other organisms—such as Boorse’s “species-typical” operation—is blocked by Neander’s “pandemic” problem. If some disease blinded the entire human species for a few days, we would still (especially!) want to say our eyes were all malfunctioning. As Boorse acknowledges, the only solution is to appeal to species-typical operation across time, which makes history relevant again.

Garson draws two morals. One is epistemic: there's no escaping the need to look back in time to determine functions. That doesn’t mean looking all the way back to the (likely obscure) evolutionary origins of mechanisms; on Boorse’s view, it may only involve looking back a little way. 

The second is metaphysical: one shouldn’t equate mechanistic explanation with constitutive explanation. Mechanistic decomposition looks very much like a constitutive explanation, but the historical element in mechanisms means one needs to take care. Garson bites the Swampman bullet—if the world were five minutes old, nothing would have a function (p.27). So mechanistic explanations don’t involve instantaneous supervenience.

Just two thoughts from me. One is that all of this follows from a concern for malfunction. If one is sceptical of the normative in biology, one might do without that. Garson thinks the pluralist views of people like Godfrey-Smith are vulnerable to the pandemic objection. But granted the courage of one's interest-relativist convictions, one could maintain that while eyes are interesting to us because they used to see (or would be able to see, but for...), that’s a feature of us, not them. I don’t like that approach, because I think we need normativity for semantics and so for being interested in things, but for all Garson says it seems to be available.

The other is what I take to be the superiority of Boorse’s causal contribution approach to function, even in its historical version, to one based on selected effects. A weakness of selected-effect function has always been the possibility that the effect in question was never actually selected. Has there really been selective competition between members of a species with and without hearts? Probably not, since once you get to the point of having hearts at all, you tend to need them. Boorse’s approach avoids that problem, by (implicitly at least) appealing to counterfactuals. If your ancestors had been missing hearts, it would not have been good for their reproductive potential.[2]

I’m glad to see this paper, and I hope its lesson is taken to heart.




[1] This sense is to be distinguished from a broader sense, familiar from other sciences with no normative notion of function, which roughly just means causal chain. Perhaps I am reluctant to accept correlational evidence that, say, gum infections cause Alzheimer's until I am presented with at least a plausible mechanism for it to do so; that’s the purely causal sense of the word, since nothing has the proper function of causing Alzheimer’s.


[2] In fact there is often such selection, it’s just so effective we don’t notice. Anencephalic babies don’t reproduce; that’s intra-species selection for cerebra. But it would be odd for assignments of function to turn on whether such defects actually occur.

Ontological Indeterminism?

Physicalism Without Supervenience, Lei Zhong, Phil Studies (2021) 1529-1544


The causal completeness of the physical doesn’t require that every event have a sufficient physical cause: physicalism is quite consistent with genuinely chancy causation. After all, what physicalism ultimately requires is that nothing have a non-physical cause. Once the physicalist has accounted for all the causation going, indeterminism is neither here nor there.

Zhong transfers this insight from the causal relationship to the ontological dependence of higher-level phenomena on the physical. He argues that that dependence, similarly, doesn’t require ontological determination. In other words, higher-level phenomena can wholly depend on the physical without being determined by it.

If true, that would free physicalism from the onerous burden of supervenience. Which means, among other things, it would no longer be threatened by zombie arguments! Nor would it be dogged by the difficulty of establishing supervenience empirically (which I think Zhong over-eggs a bit—both parsimony and induction help with what he calls the “establishment question”).

Zhong is correct, I think, that ontological dependence doesn’t require determination. Those are distinct relationships. But I think he’s wrong that physicalism is compatible with mere ontological dependence.

Zhong (p.1538) mentions a counter-argument from Witmer: if X isn’t sufficient for Y, then something else over and above X is required to make it true that Y. If X is “the physical”, then the something else is extra-physical, and physicalism is false. Zhong thinks Witmer has ignored the possibility of indeterminism: that nothing else besides X makes Y the case, things just turn out that way.

But: even if there is no third party Z that combines with X to produce Y, you still need something over and above X to make it the case that Y obtains. Namely, Y itself! Y is not part of the physical base (by hypothesis), nor is it merely a metaphysical consequence of the physical base (as wholes are to the arrangement of their parts, or realisations to realisers). In Kripke’s famous metaphor, God still has some work to do, after creating X, to put Y in the world.

Interestingly, Zhong mentions the creator-God metaphor (p.1539):

If God rolls dice to see what effects will follow the causes, it should also be okay for God that the higher-level phenomena arise out of the lower-level bases as a matter of probability.

This seems right; God could indeed roll dice for vertical as well as horizontal relations. But notice the shift in agency. God’s role here is limited to “being OK” with high-level phenomena arising chancily. But the point of the metaphor is that God is the creator; things aren’t there unless he puts them there. If God inserts Y after rolling dice—rather than out of some deliberate purpose—he is still doing the extra work.

Coordinate translations of egocentric content

When Lingens meets Frege, Jens Kipper

Phil Studs 178, 1441-1461 (2021)


It’s the 80s again; this is one of two papers in this issue of Phil Studs on egocentric content. (And there’s another one on reference and incomplete definite descriptions to boot.)  This one suggests an epistemic two-dimensional solution to the difficulty Robert Stalnaker’s semantics faces when egocentric content (“Lingens” cases) co-occurs with errors of identification (“Frege” cases). Kipper's overall position seems sensible enough—I’m strangely fond of two-dimensionalism—but this is another case where my reaction is “what’s all the fuss about”, and absent a pandemic I’d buttonhole a specialist at a conference and ask what’s going on.

My particular puzzlement here is this. The initial problem for Stalnaker was how to fit egocentric content into his model of communication, which involves the participants coming to share beliefs. Egocentric beliefs can’t be communicated that way; if I say “I am Adrian Boutel” you’d better not come to share that belief. Stalnaker’s response is to deny that there really is egocentric content; all content can be explicated in terms of sets of possible worlds, if only you’re allowed to appeal to haecceities and singular contents. There are egocentric belief states, however, which are ways of being related to objective (albeit haecceitistic and singular) contents. Stalnaker then had a dispute with Lewis, to which, long story short, Kipper is responding.


Stalnaker’s abandonment of egocentric content seems overly dramatic, though. If I say to you (in person) “I am Adrian Boutel” the pure egocentric content of my utterance is where the centre of the actual world is. You can’t come to believe that, on pain of criminal liability. But what you can, and do!, come to believe is that “the person in front of me is Adrian Boutel”. That’s also egocentric—specifically, deictic. It amounts to a coordinate translation of my egocentric content, adjusting to the different centre of the world from your perspective.

That is, I think, in the spirit of Stalnaker’s account of communication. No, we don’t come to share belief contents, but our believed egocentric contents are directly inter-translatable.


It also seems a lot simpler than Stalnaker’s approach; not only does he need singular thought, haecceities, and belief-states, but he also gives worlds multiple centres: your centre and my-centre-as-believed-of-me-by-you. (And in general one centre for each participant in the conversation.) Removing all that seems to avoid the issues Kipper goes on to deal with. But what do I know! I’d love to be educated…

God’s foreknowledge depends on our actions

From the fixity of the past to the fixity of the independent—Andrew Law, Phil Stud (2021) 178:1301–1314


Here’s a nice discussion of the question of the fixity of the past, which has relevance beyond a theological context. 

In general, I have no dog in this fight, and not just because I don’t believe in God. God’s foreknowledge is a problem for Libertarian views of free will. Suppose freedom requires that there be no fact of the matter about what we will do before we do it. But if God knows what we will do before we do it, then there is a fact of the matter, and therefore our actions aren’t free. (And if he doesn’t, the argument goes, he’s not omniscient.)

To a compatibilist about free will, like me, this is no problem. If freedom is compatible with our actions being causally determined by the past, then it doesn’t matter if some Laplacian demon, or God, is able to exploit that to predict them. 

The argument from the fixity of the past backs this problem up in a way that resembles Van Inwagen’s consequence argument. If God knew what I would do before I did it, the argument goes, then my doing something else would require me to change a fact about the past, namely what God thought beforehand. And I can’t change the past!

This also widens the reach of the argument slightly. Consider an indeterminist block universe, where there is a fact of the matter about what I will do in the future, but it isn’t determined by the past. If your view of freedom requires only the absence of determinism, then you might not be troubled by God knowing the future; but you could still be troubled by the impossibility of changing God’s past knowledge. (Just as some compatibilists are worried by Van Inwagen.)

Replacing the fixity of the past with the fixity of the independent offers a way out of this. The idea is that what is fixed is not the past as such, but things that don’t depend on your actions. If God’s knowledge is the result of your actions, then it depends on them, and so shouldn’t be regarded as fixed.  

Law gives several reasons for preferring FI to FP, which are I think persuasive on (non-theological) metaphysical grounds. The fixity of the past depends on various apparently contingent assumptions; in a possible world where backward causation is rife, the past is no more fixed than the future. (Specifying “the past” also raises problems with Relativity.) The better view, I’m persuaded, is that the past is fixed, if it is, because it’s independent of our actions. Not the other way around.

What he doesn’t say, unless I missed it, is why we should think God’s knowledge of our actions is dependent on our actions.  (Perhaps that’s assumed in this literature, I’m very much not a specialist here.) The best reason I can think of is that God is not a Laplacian demon. He’s not taking a snapshot of the world and running a full-scale physics simulation to predict the future. He’s eternal!  He can simply observe the future directly, as easily as he can the present. No need for all that modelling work. So his knowledge of my actions depends on them in the same way ordinary after-the-fact knowledge does. And he can know about them whether or not my actions are causally determined or otherwise fixed by the past.


The uses of ABMs

Agent-Based Models of Scientific Interaction—Daniel Frey and Dunja Šešelja

Brit. J. Phil. Sci. 71 (2020), 1411–1437


This is a good illustration of the virtues and vices of toy models.

Kevin Zollman's agent-based models of scientific networks indicate, counterintuitively, that better-connected groups of scientists can do worse at converging on true theories. The explanation appears to be that when initial results favour a false theory, the news spreads quickly around the well-connected network, and creates an inertia which the true theory has to struggle a bit to overcome.

F&S add features to Zollman's model which change that outcome. In particular they add a “rational inertia” term whereby scientists don’t shift their preferred theory until the other one has demonstrated superiority over an extended time period. (They also add “criticism”, whereby reports of successful results in the opposing camp prompt re-examination of one’s own views, and dynamic success, whereby theory-testing experiments improve their own reliability over time; these improve performance but don’t change any signs.) 

With rational inertia added (along with the other features), the better-connected group does better. The interpretation seems to be that rational inertia slows the spread of any initial misleading reports—just as a lower number of connections would.
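
For concreteness, here is a bare-bones sketch of the kind of two-armed-bandit network model involved. It is my own toy reconstruction, with invented parameter values, not Zollman’s or F&S’s actual code.

```python
# Toy Zollman-style model: agents "pull" the bandit of their preferred theory,
# share results with network neighbours, and switch theories only after the
# rival has looked better for `inertia` consecutive rounds.
import random

N_AGENTS, N_ROUNDS = 10, 200
P_TRUE, P_FALSE = 0.55, 0.5     # objective success rates of the rival theories
INERTIA = 5                     # rounds of apparent superiority needed to switch

def run(neighbours, inertia):
    beliefs = [[[1, 2], [1, 2]] for _ in range(N_AGENTS)]   # per theory: [successes, trials]
    preferred = [random.randint(0, 1) for _ in range(N_AGENTS)]
    streak = [0] * N_AGENTS
    for _ in range(N_ROUNDS):
        results = []
        for i in range(N_AGENTS):
            p = P_TRUE if preferred[i] == 0 else P_FALSE
            results.append((preferred[i], random.random() < p))
        for i in range(N_AGENTS):                 # see own and neighbours' results
            for j in neighbours[i] | {i}:
                theory, success = results[j]
                beliefs[i][theory][0] += success
                beliefs[i][theory][1] += 1
        for i in range(N_AGENTS):                 # rational inertia before switching
            better = max((0, 1), key=lambda t: beliefs[i][t][0] / beliefs[i][t][1])
            if better != preferred[i]:
                streak[i] += 1
                if streak[i] > inertia:
                    preferred[i], streak[i] = better, 0
            else:
                streak[i] = 0
    return sum(p == 0 for p in preferred) / N_AGENTS        # share backing the true theory

complete = [set(range(N_AGENTS)) - {i} for i in range(N_AGENTS)]            # well connected
ring = [{(i - 1) % N_AGENTS, (i + 1) % N_AGENTS} for i in range(N_AGENTS)]  # sparse
for name, net in [("complete", complete), ("ring", ring)]:
    print(name, sum(run(net, INERTIA) for _ in range(100)) / 100)
```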

So: ABMs in a microcosm. Zollman demonstrates a surprising emergent phenomenon. F&S show that (and what) tweaks make it go away, and so give an idea of what causes it. Both have (on reflection) intuitive real-world parallels/interpretations.

Does any of this accurately reflect what scientists do? Quantitatively no, it is all bizarre nonsense. There are no “rounds” of “pulls” from “bandits”; a prediction that scientific communities will improve performance if rational inertia lasts 10 rounds but not 5, say, is utterly empty.  Nonetheless these toy models can provide valuable proofs of concept: that fewer or slower lines of communication in a network can give the truth time to get its boots on.

Classes, Why and How

Thomas Schindler (2019) Philosophical Studies 176:407–435.

Thomas (who I know from when he was a JRF at Clare College) advances a type-free theory of classes. This is not my field, and I’m hardly in a position to evaluate the technical success of his attempt—though it seems plausible enough to me! My interest was piqued because (a) Thomas, and (b) his approach is to limit the “range of significance” of predicates or propositional functions, such that attempting to evaluate them at the excluded points (“singularities”) is impermissible, and yields no outcome. If one treats the problem cases—the set of all sets that aren’t members of themselves, for example—as such singularities, then one thereby avoids the paradoxical outcomes. (From one angle, this is Russell’s approach too, but it abandons Russell’s constraint that ranges of significance must be clumped into types.) Thomas is inspired, he says, by some remarks of Gödel, but this approach also resembles my own favourite response to the semantic paradoxes, cassatio.
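
(For reference, the problem case spelt out, in the standard textbook way rather than Thomas’s own notation: let R = {x | x ∉ x}. Then R ∈ R if and only if R ∉ R, a contradiction on either answer. Thomas’s move, as I read it, is that evaluating the propositional function x ∉ x at the point R is impermissible and yields no verdict, so the contradiction never gets started.)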

Cassatio treats the paradoxical sentences as semantically improper.  The Liar, for example, has no truth value, because it fails to express a proposition and thus has no truth conditions. Crucially, cassatio does not ascribe some third, non-classical truth value (such as µ)—an approach which only invites so-called revenge paradoxes. (µ is not T, so “this sentence is not true” generates a paradox even if you try to assign it µ.) Rather, on cassatio, the Liar has no truth value at all, because it doesn’t assert anything. Calling it true and calling it false are both wrong, but non-paradoxical, in much the same way calling the planet Mars “true” or “false" would be.

As I say, I think Thomas’s approach is plausible, though I can’t evaluate its technical success. But from the point of view of a cassatio fan, I have two questions/suggestions:

— Why are the singular points singular? A successful treatment of the paradoxes, it seems to me, needs to explain why things go wrong there, and not merely exclude them arbitrarily for convenience. (Though if we are arbitrarily excluding things, excluding fewer is better—on which count Thomas’s approach, if consistent, is better than Russell’s, for excluding only the paradoxical cases.)

Cassatio excludes the Liar on the basis that it viciously delegates its truth conditions to itself. If McX says “P” and I say “what McX says is wrong”, then there is no problem: I’ve delegated my truth conditions to McX’s utterance. But with the Liar, we follow the reflexive arrow of delegation around in a circle forever, never getting to a genuine assertion. 

I suspect that a similar approach can explain why the Russell set is problematic: the intension S = {x|x∉x}, when applied to S itself, circularly delegates S’s inclusion conditions to itself. So perhaps here cassatio can supplement Thomas’s approach.

—What about the Truthteller? An intuitive way to motivate cassatio is to think about “This sentence is true”. There is no paradox there: it is true if it is true, and false if it is false. But which is it? There is no answer to that question, because the Truthteller doesn’t make any real claim. Cassatio treats the Truthteller as improper for the same reason as the Liar: it doesn’t, despite appearances, ultimately say anything.

The Truthteller’s set analogue is S = {x|x∈x}, applied to S. Is it a member of itself? It is if it is, and it isn’t if it isn’t, so no paradox; but which is it? Again you follow a reflexive arrow of delegation forever without getting to an actual inclusion condition. 

What does Thomas’s approach have to say about that case? I would assume it is ripe for the same treatment, so that both problem cases are excluded. This is slightly more restrictive than excluding only the paradoxical cases, but still much less so than Russell’s.

Anyway, fun!

Decoding the Brain

Ritchie, Kaplan & Klein (2019) 70 BJPS 581-607


The subtitle of this paper is “Neural Representation and the Limits of Multivariate Pattern Analysis in Cognitive Neuroscience”, but I read it anyway. It’s actually a very friendly paper which they could have called “MVPA decodability doesn’t entail explicit representation”.

The background is what they call the “decoder’s dictum”: that MVPA decoding of  neural activity tells you what information the decoded patterns represent. The paper argues that this is false, for two basic reasons. 
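
For readers who haven’t met the method, here is a minimal sketch of what MVPA decoding amounts to, run on simulated data; the sizes, noise model and classifier choice are mine and purely illustrative.

```python
# Decode a stimulus category from simulated multi-voxel activity patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
labels = rng.integers(0, 2, n_trials)              # stimulus category per trial
template = rng.normal(size=n_voxels)               # category-dependent pattern
patterns = 0.3 * np.outer(labels, template) + rng.normal(size=(n_trials, n_voxels))

acc = cross_val_score(LogisticRegression(max_iter=1000), patterns, labels, cv=5)
print("decoding accuracy:", acc.mean())
# Above-chance accuracy shows the information is recoverable from the patterns;
# the paper's point is that this alone doesn't show the brain represents or
# uses it in that form.
```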

One is that MVPA decoding can pick up inessential correlations between the activity and the represented information. Those correlations might be in the stimulus—you can tell whether the subject is looking at up-right or up-left orientated lines, but perhaps you are picking up the output of edge detectors and things will fall apart if you make your stimulus patches square rather than round. Or they might be introduced by biases in the measurement procedure etc.

The second is that recoverability of information doesn’t mean it’s represented in a usable form. Visual input, for example, is fully informationally encoded in early processing (V1)—if it wasn’t, later stages would have to make it up. But it’s not usable in that form; it’s merely latent. Processing it into functionally available form is what later stages do.

The authors note that correlation with behaviour helps nail down usable content, but even that isn’t perfectly reliable. They suggest some further approaches which seem plausible enough to me. These involve connecting the neurological analysis with psychologically derived similarity relationships among the stimuli, or in the other direction with behaviour. This gladdens my pluralist heart; it doesn’t allow you to assign content based on neuroscientific data alone, but that is as it should be.
