Agent probabilities & free will
Helen Beebee, University of Birmingham

Hand-wavy introduction
Some features of the world that we care about don’t exist according to the ‘view from nowhere’: colours, free will, moral responsibility. But:
It would be an error to infer that we are mistaken when we make positive claims about these things.
Such claims are made from our own perspective; that perspective can’t be eliminated without loss of meaning.
The aim: to use (something like) Price’s notion of an ‘agent probability’ (AP) in an attempt to characterise this perspective, so that positive claims about free will come out legitimate whether or not determinism is true.
Main issue (cf. David’s talk): Our actions aren’t independent of prior states. So how should we conceive of agent probabilities?
Answer: As an expression of the agent’s perspective (not as either degrees of belief or fictions).
Consequence Argument: Determinism is incompatible with the ability to do otherwise, and so incompatible with free will.
Ginet’s epistemological argument: If decisions were deterministically caused, it would be conceptually possible for S to know what she was going to decide before deciding. But it is conceptually impossible for someone to deliberate when they already know what the outcome would be. So decisions cannot be deterministically caused.
Coco should go ahead and eat the Mars Bar. But it looks like, for him, Pr(M/C) > Pr(M/~C), so EDT recommends that he desist. BAD.
Need to find a way of screening off the correlation between C and PMS, to get the answer that Coco should eat the chocolate. How to do it?
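The evidential worry, and the screening-off fix, can be made vivid with a toy calculation. The numbers below are mine, not the talk’s: I model PMS as a common cause that raises both the chance of Coco eating the chocolate (C) and the chance of the bad outcome (M), with C itself having no causal influence on M. Conditioning on C then raises Pr(M) — that is EDT’s (naive) verdict — while conditioning on PMS as well screens the correlation off.

```python
from itertools import product

# Hypothetical numbers (mine, not from the talk): PMS is a common cause that
# raises both the chance of eating the chocolate (C) and the chance of the
# bad outcome (M); C itself has no causal influence on M.
P_PMS = 0.5
P_C = {True: 0.9, False: 0.5}   # Pr(C | PMS), Pr(C | ~PMS)
P_M = {True: 0.8, False: 0.1}   # Pr(M | PMS), Pr(M | ~PMS): M depends on PMS alone

# Joint distribution over worlds (pms, c, m).
joint = {}
for pms, c, m in product([True, False], repeat=3):
    p = P_PMS if pms else 1 - P_PMS
    p *= P_C[pms] if c else 1 - P_C[pms]
    p *= P_M[pms] if m else 1 - P_M[pms]
    joint[(pms, c, m)] = p

def pr(pred):
    return sum(p for w, p in joint.items() if pred(*w))

def cond(pred, given):
    return pr(lambda *w: pred(*w) and given(*w)) / pr(given)

# EDT's evidential comparison: eating is evidence of PMS, hence of M.
p_m_c  = cond(lambda pms, c, m: m, lambda pms, c, m: c)       # ≈ 0.55
p_m_nc = cond(lambda pms, c, m: m, lambda pms, c, m: not c)   # ≈ 0.22
assert p_m_c > p_m_nc  # so naive EDT tells Coco to desist

# Screening off: with PMS held fixed, C carries no further news about M.
p1 = cond(lambda pms, c, m: m, lambda pms, c, m: c and pms)
p2 = cond(lambda pms, c, m: m, lambda pms, c, m: (not c) and pms)
assert abs(p1 - p2) < 1e-12   # both 0.8: C and M independent given PMS
```

On these (made-up) numbers Pr(M/C) = 0.55 against Pr(M/~C) ≈ 0.22, yet given PMS the correlation vanishes — which is exactly the independence the agent-probability move is meant to deliver.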
In conceiving of himself as genuinely deliberating about what to do, Coco must regard himself as someone to whom the correlation between PMS and C does not apply.
The very fact that Coco (regards himself as being) free to do C or not, on the basis of deliberation about what to do, absents him from the reference class for whose members eating chocolate increases Pr(PMS) and so Pr(M). 
Hitchcock gives a formal characterisation of how APs are to be assigned.
The ‘disinterested probability distribution’ (DPD) represents Coco’s degrees of belief concerning the actions/outcomes/background factors for his twin.
The ‘fictional probability distribution’ (FPD) is as close to this as possible except that it makes Coco’s possible actions probabilistically independent of background factors.
Coco uses the FPD in deciding what to do.
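One way to picture the DPD-to-FPD move — a toy sketch of mine, not Hitchcock’s own formalism — is to take the DPD as a joint distribution over a background factor B and an action A, and replace it with the product of its marginals. That keeps both marginals while making A probabilistically independent of B; among independent distributions it is the closest to the DPD in KL-divergence, which is one natural reading of ‘as close to this as possible’.

```python
# Toy DPD (my numbers): Pr(B, A) for Coco's twin — PMS makes eating likely.
dpd = {
    ('pms', 'eat'): 0.45, ('pms', 'refrain'): 0.05,
    ('no-pms', 'eat'): 0.25, ('no-pms', 'refrain'): 0.25,
}

def marginal(joint, idx):
    """Marginal distribution over the idx-th coordinate of the joint."""
    out = {}
    for key, p in joint.items():
        out[key[idx]] = out.get(key[idx], 0.0) + p
    return out

p_b, p_a = marginal(dpd, 0), marginal(dpd, 1)

# FPD: product of the DPD's marginals — same marginals, A independent of B.
fpd = {(b, a): p_b[b] * p_a[a] for b in p_b for a in p_a}

# Under the DPD, Pr(eat | pms) = 0.9; under the FPD the correlation is gone:
pr_eat_given_pms = fpd[('pms', 'eat')] / p_b['pms']
assert abs(pr_eat_given_pms - p_a['eat']) < 1e-12   # Pr(eat | pms) = Pr(eat)
```

Under the DPD, eating is strong evidence of PMS; under the FPD it is no evidence at all — which is what lets Coco treat his options as open in deliberation.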
Price thinks Coco’s APs are ‘conditional credences’: there really is some reference class of which Coco is a member (so Coco believes) within which there is no correlation between PMS and C.
Hitchcock appears to disagree: Coco’s APs are not his ‘degrees of belief about [his] own actions, when assessed in a disinterested and non-self-deceptive way’. Those are given by the DPD, not the FPD.
Coco might well know that there is some DPD that really does describe his situation.
E.g. next time he’s thinking about chocolate, he’ll know that most people in his situation (people who want chocolate and are rational) will take the chocolate (because it’s the rational thing to do). But his FPD must differ from this or he won’t be able to conceive himself as deliberating. (Cf. David’s talk.)
So perhaps we should go with Hitchcock and think of APs as ‘fictional’?
‘Deliberation requires the assumption of a freedom of action that may, as a matter of fact, not exist’ (i.e. the FPD really is, or might be, fictional).
So a free decision would be one where the DPD (and not just the FPD) makes the possible actions probabilistically independent of background factors.
(a) That’s a very strong requirement on freedom (pretty much Ginet’s acausalism). 
(b) What justifies my ignoring something I know about myself (i.e. using the FPD and not the DPD)? How can it be rational to ignore a piece of relevant information?
Both Price and Hitchcock take the notion of agent probability to be bound up with the notion of freedom: to conceive of oneself as genuinely deliberating, and so genuinely having a choice between different options, one has to regard the outcome of deliberation as probabilistically independent of prior factors.
Price thinks this ‘regarding’ is a matter of having a straightforwardly true belief (problematic).
Hitchcock thinks the ‘regarding’ is a matter of self-deception (also problematic).
Agree with Hitchcock that APs don’t correspond to a real reference class of which the agent is a member ...
… but disagree about what ‘free’ decision requires. APs are not ‘fictional’ but perspectival.
APs don’t correspond to degrees of belief or statistical correlations, but are a probabilistic expression of the agent’s perspective, i.e. her perspective as a deliberator.
I’m a two-boxer. I know this because I know that in a Newcomb situation I’d deliberate and do what I think is the rational thing, viz (so I believe), take both. Now put me in the Newcomb situation. What do I do?
It seems I can’t take myself to know I’ll take both, and simultaneously regard myself as genuinely deliberating.
(a) I don’t know what I’ll do (but I do know!), or
(b) I do know, in which case I can’t deliberate (but that knowledge is based on the assumption that I will, in fact, deliberate!)
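To see why the two-boxer’s conviction is so stable, it helps to lay out the two styles of reasoning side by side. The payoffs and predictor accuracy below are the standard ones assumed for illustration, not figures from the talk: $1,000,000 in the opaque box iff the predictor foresaw one-boxing, $1,000 always in the transparent box, accuracy 0.99.

```python
# Standard Newcomb payoffs (assumed for illustration, not from the talk).
ACC, MILLION, THOUSAND = 0.99, 1_000_000, 1_000

# EDT treats the choice as evidence about what the predictor foresaw.
ev_one_box = ACC * MILLION                    # ≈ 990,000
ev_two_box = (1 - ACC) * MILLION + THOUSAND   # ≈ 11,000 -> EDT: one-box

# The two-boxer's dominance reasoning: the contents are already fixed, and
# in either fixed state taking both boxes yields exactly $1,000 more.
gain_if_full  = (MILLION + THOUSAND) - MILLION
gain_if_empty = THOUSAND - 0
assert gain_if_full == gain_if_empty == THOUSAND
assert ev_one_box > ev_two_box
```

The two-boxer’s dominance reasoning is untouched by the evidential comparison — which is why she can know, in advance, what her deliberation will conclude, and why the dilemma in (a)/(b) bites.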
Solution: Deliberation requires adopting the ‘deliberative perspective’, and this just IS the deployment of APs. In adopting the deliberative perspective I don’t deceive myself (I really am deliberating!). I legitimately set aside knowledge I really do have – legitimately because, again, that is part and parcel of the deliberative perspective.
APs deliver a sense in which different possible decisions are available to me.
To be able to do otherwise just IS to adopt the agent’s perspective, i.e. to adopt APs.
More than one decision is open to me, and open in a way that is unconstrained, from my perspective as a deliberator, by prior states. 
Unlike many compatibilist attempts to characterise ‘CHDO’ (‘could have done otherwise’), it isn’t completely ad hoc. We need APs anyway (if Pricean EDT is right) for decision-theoretic reasons, whether or not determinism is true. And that demands a story about why it’s OK to use them, given that in general the FPD and the DPD come apart.
It locates free will where it should be: in the deliberation of agents about what to do. (This contrasts with libertarianism, which requires that free agents have the ability to behave like idiots. This is an odd power to want to have.)
The Consequence Argument is invalid.
There is a sense of ‘CHDO’ that is incompatible with determinism. But this is not a sense of ‘CHDO’ that is legitimately applied by agents to their own deliberative situation: it is a sense of ‘CHDO’ that ignores the agent’s perspective.
Ginet’s argument, restated: If decisions were deterministically caused, it would be conceptually possible for S to know what she was going to decide before deciding. But it is conceptually impossible for S to deliberate when she already knows what the outcome would be. So decisions cannot be deterministically caused.
(Actually the argument works just as well against libertarianism, agent-causalism, etc., assuming either that it’s conceptually possible that God knows what I’m going to do or that time travel is conceptually possible.)
Ginet’s argument turns on the claim that it’s conceptually impossible to decide to do something when you already know what your decision will be.
My story rejects this claim, while explaining why one might be inclined to believe it:
You can know what you’re going to decide, but deliberation, i.e. the adoption of the agent’s perspective, involves setting that knowledge aside. (You can’t deploy that knowledge in the process of deliberation.)