
In Defence of a Dogmatist

Brian Weatherson

May 2006

What is Dogmatism?
  • We’ll be primarily concerned here with dogmatism about knowledge, though we’ll also discuss dogmatism about justification
  • For some propositions p and evidence E, you can know p on the basis of E without having independent or antecedent justification for E ⊃ p
What is Dogmatism?
  • I’m using evidence here in a very broad sense, so it includes anything that justifies beliefs, whether or not we are aware of it, or able to appeal to it in conscious reasoning
  • We’ll often be interested in more restricted, hence stronger, dogmatist claims that say this is true for some p, E in a particular area (perception, induction)
What is Dogmatism?
  • I’m not going to say much about independent or antecedent justification
  • But I will insist that when E is all of our empirical evidence, a belief in q is justified independently or antecedently of E iff q is justified, and knowable, a priori
  • So the dogmatist has an expansionary view of actual justification, and a restrictive view of a priori justification
The Sceptical Argument
  • If I know I have hands, then I’m in a position to know that I’m not a handless brain in a perfectly functioning vat (BIPV)
  • I’m not in a position to know that I’m not a handless BIPV
  • So I don’t know I have hands
The Sceptical Argument
  • Not the strongest sceptical argument
  • Premise Two seems just wrong to me
  • I can deduce that it is wrong from the fact that I know lots of stuff that is incompatible with my being a BIPV
  • A stronger sceptical challenge: is this knowledge a priori or a posteriori?
The Sceptical Argument
  • If I know I have hands, then I’m in a position to know that I’m not a BIPV
  • I’m not in a position to know a priori that I’m not a handless BIPV
  • I’m not in a position to know a posteriori that I’m not a handless BIPV
  • So I don’t know I have hands
The Sceptical Argument
  • I’m assuming here that any knowledge which is not a priori is a posteriori
  • Both these technical terms are vague, but I insist this is a penumbral connection between them
  • So this argument is valid, and it is far from obvious which premise is wrong
  • The sceptic has an argument for 2 and 3
The Sceptical Argument
  • To introduce it, we need one more technical term
  • Say a HBIPVE is a handless brain in a perfectly functioning vat with the evidence I actually have
  • This seems to be the character that is meant to worry me in normal sceptical arguments: how do I know I’m not him?
  1. If I know I have hands, then I can know that I’m not a HBIPVE, and if I can know (a priori) that I have E ⊃ I’m not a HBIPVE, I can know (a priori) that I’m not a HBIPVE
  2. It could have turned out that I’m a HBIPV
  3. If it could have turned out that I’m a HBIPV, then it could have turned out that I’m a HBIPVE
  4. If it could have turned out that I’m a HBIPVE, then I can’t know a priori that I’m not a HBIPVE
  5. If I can know a posteriori on the basis of my evidence E that I’m not a HBIPVE, then I can know a priori that I have E ⊃ I’m not a HBIPVE
  So I can’t know that I have hands
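As a compact reading aid (my notation, not the speaker’s), write h for ‘I have hands’, b for ‘I’m a HBIPV’, v for ‘I’m a HBIPVE’, e for ‘I have evidence E’, K, K_ap and K_ep for what I’m in a position to know simpliciter, a priori and a posteriori, and ◊ for ‘it could have turned out that’. The argument just given can then be compressed:

```latex
\begin{align*}
\text{P1:} &\quad (Kh \to K\neg v) \;\land\; \big(K_{ap}(e \supset \neg v) \to K_{ap}\neg v\big) \\
\text{P2:} &\quad \Diamond b \\
\text{P3:} &\quad \Diamond b \to \Diamond v \\
\text{P4:} &\quad \Diamond v \to \neg K_{ap}\neg v \\
\text{P5:} &\quad K_{ep}\neg v \to K_{ap}(e \supset \neg v) \\
\text{C:}  &\quad \neg Kh
\end{align*}
```

Given the penumbral connection that any knowledge of ¬v is a priori or a posteriori: P2 and P3 yield ◊v, P4 then closes the a priori route, and P5 plus P1’s second conjunct closes the a posteriori route, so Kh must fail.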

The Sceptical Argument
  • I think this is a fairly interesting argument because epistemologists divide up so widely on what is wrong with it
  • As we’ll see, versions of this argument to alternative sceptical conclusions are even more dramatic in this respect
  • We could divide the premises even further, but we won’t today
Premise One: Closure
  • Our argument needs a conjunctive closure premise, but anyone who accepts the original closure premise should like this
  • I don’t have anything to add to the debates over closure, so I’ll just note this argument
Premise Two: Possibility
  • This is denied by the so-called ‘semantic’ response to scepticism
  • It has some plausibility in the case of external world scepticism
  • As we’ll see, however, it doesn’t seem to generalise to other sceptical arguments
Premise Three: Evidence
  • This is denied by externalists about perception
  • I can see that I have hands, so I have different evidence to a HBIPVE
  • Williamson denies this premise in all sceptical arguments
  • This denial seems too strong
Premise Four: Empiricism
  • In earlier work, I said that a rationalist response to scepticism is one that says we can know a priori something that could have turned out to be false
  • This is not the same as saying that we can know a priori some contingent things
  • Some necessary truths could have turned out to be false, e.g. water is molecular
Premise Four: Empiricism
  • More importantly, some contingent truths could not have turned out to be false
  • For example, that water is watery
  • Things that could not have turned out false are ‘deeply necessary’, to use Evans’s term
  • I borrow the phrasing ‘could have turned out’ from Stephen Yablo
  • By inclination, I’m a rationalist
On to Dogmatism
  • What is interesting about the dogmatist position, as I see it, is that the dogmatist says that premise five is the false one
  • The dogmatist thinks that the sceptical possibilities are possible
  • And she thinks that sometimes knowledge outruns evidence
  • And, most importantly, she’s an empiricist
On to Dogmatism
  • The primary argument for dogmatism, as I see it, is that for some sceptical arguments, the other premises are true
  • But the sceptical conclusion is crazy
  • Hence dogmatism!
  • This requires rejecting a priori knowledge (and justification) of things that could have turned out false
On to Dogmatism
  • There is a long tradition, especially in Anglophone philosophy, of being very sceptical of the a priori
  • That would include being sceptical of claims to know we’re not HBIPVE a priori
  • I think any good argument against dogmatism should respect this attitude
  • Personally, I reject dogmatism because I simply lack the attitude
On to Dogmatism
  • If we are to argue for dogmatism this way, we need a sceptical argument whose other premises are plausible
  • External world scepticism doesn’t help
  • The premise about evidence is rather implausible, I think
  • The inductive sceptical argument is better
  • Roger White is, I think, the first to explicitly discuss dogmatism about induction
Inductive Scepticism
  • p is some proposition that I can know inductively, e.g. that it will be hotter in Austin than Anchorage next year
  • E is my evidence, and E* is the proposition that my evidence is E
  • hT is the history of the world to date
  • So ~p & hT & E* is a sceptical possibility, where the history of the world and my evidence are the same, but p is false
  1. If I know p, then I can know ~(~p & hT & E*), and if I can know a priori that E* ⊃ ~(~p & hT & E*), then I can know a priori that ~(~p & hT & E*)
  2. It could have turned out that ~p & hT
  3. If it could have turned out that ~p & hT, then it could have turned out that ~p & hT & E*
  4. If it could have turned out that ~p & hT & E*, then I can’t know a priori that ~(~p & hT & E*)
  5. If I can know a posteriori on the basis of my evidence E that ~(~p & hT & E*), then I can know a priori that E* ⊃ ~(~p & hT & E*)
  So I can’t know that p

Inductive Scepticism
  • The first two premises seem very plausible to me, but I don’t have anything new to say about them
  • What makes the inductive sceptic interesting is that premise 3 is plausible
  • Williamson denies it because of E=K
  • But it follows from the (very plausible) principle that evidence supervenes on causal history, so I’ll accept it
Inductive Scepticism
  • So three premises look good
  • But the conclusion is untenable
  • So it’s down to rationalism or dogmatism
  • That’s an interesting conclusion already!
  • If I had to pick, I’d pick rationalism
  • But I don’t think the existing arguments against dogmatism are that good
  • I’m going to discuss three arguments
  • Some of these are primarily arguments against dogmatism about justification
  • But they also seem to tell against dogmatism about knowledge
  • One, the bootstrapping argument, is primarily an argument against perceptual dogmatism
  • Roger White discusses the following kind of objection (slightly modified)
  • The dogmatist says I can’t know, or even justifiably believe, a priori that ~(~p & hT & E*)
  • But a priori I can run a dominance argument that has as its conclusion ~(~p & hT & E*)
  • Either I will get evidence E or I won’t
  • If I do, then I’ll be justified in believing p
  • I’ll then be able to infer ~(~p & hT & E*)
  • If I don’t, then I won’t even have E so I’ll be able to infer ~(~p & hT & E*)
  • So I can know a priori that either way I’ll be justified in believing ~(~p & hT & E*)
  • So I’m justified a priori in believing this
  • The problem with this argument is that it slides between facts about evidence I have, and facts about what I’m justified in believing about my evidence
  • It is possible that I’ll not have evidence E, but not be justified in believing that I don’t have evidence E, let alone knowing this
  • So the second half of the disjunctive argument seems to fail
  • This is primarily an argument against a certain kind of perceptual dogmatism
  • In Pryor’s version, the dogmatist endorses the following principle
  • If it appears to the agent that p, then in the absence of defeaters, she is justified in believing that p, even if she isn’t justified in advance in believing that appearances are reliable
  • This seems to run into a version of Stewart Cohen’s easy knowledge argument
  • Cohen is primarily concerned with attacking reliabilism
  • But the objection, if sound, seems to tell against the dogmatist as well
Imagine my 7 year old son asking me if my color-vision is reliable. I say, “Let’s check it out.” I set up a slide show in which the screen will change colors every few seconds. I observe, “That screen is red and I believe that it is red. Got it right that time. Now it’s blue and, look at that, I believe it’s blue. Two for two …” I trust that no one thinks that whereas I previously did not have any evidence for the reliability of my color vision, I am now actually acquiring evidence for the reliability of my color vision. But if Reliabilism were true, that’s exactly what my situation would be.
  • The argument is directly targeted at the dogmatist about perception
  • But we can imagine a similar argument against the inductive dogmatist
  • I sit down and make a number of inductive inferences
  • I then conclude, from the truth of those conclusions, that I’m a reliable inductor
  • This reasoning is bad
  • Some say dogmatists must endorse it
  • But in fact there are a number of reasons to reject it even given dogmatism
  • Worries about projection
  • Worries about randomness of sample
  • Worries about ‘radiance’
  • Radiance and randomness
  • The bad inference has the form, all Fs so far have been Gs, so generally Fs are Gs
  • But perhaps being an F isn’t a natural kind
  • I shouldn’t, for instance, conclude from the reliability of my visual perception that I have an accurate sense of smell
  • So the argument only works if I’m concluding that a single method is reliable
  • Arguably, I need to know that I’m evaluating a single method
  • If for all I know, I use many perceptual, even visual perceptual, and inductive methods, this is a bad inference
  • Since I can’t tell from the armchair what is a single method, I can’t learn about my own reliability from the armchair
  • The dogmatist only says that we can conclude that things are as they appear in cases where there aren’t defeaters
  • That suggests that the cases where we can infer from appearances to reality won’t be a random sample of all cases
  • But we need a random sample to do the enumerative inductive inference being considered
  • Those two responses are far from telling
  • I think the big worry with the argument is that it makes a radiance assumption
  • The idea of radiance is based on Williamson’s discussion of luminosity
  • A property is luminous iff whenever it is instantiated, we are in a position to know that it is instantiated
  • A property is radiant iff whenever it is instantiated, we can justifiably believe that it is instantiated
  • Williamson argues against luminosity
  • His arguments don’t carry over to arguments against radiance
  • That’s because the arguments centrally appeal to the factivity of knowledge
  • Still, there is good reason to doubt in many cases that properties are radiant
  • In particular, it is possible that we could have evidence, or appearances, or beliefs, without so much as being justified in believing we have them
  • Perhaps ideally our evidence, beliefs or appearances would be radiant, but we can’t assume things are ideal
  • If appearances aren’t radiant, then Cohen’s version of the argument fails
  • When I get a red appearance, I’m justified in believing I’m looking at something red
  • But it doesn’t follow that I’m justified in believing things are as they appear
  • Because I might not be justified in believing that the thing appears red
Radiance & Randomness
  • It might be objected that usually appearances and beliefs are radiant
  • But unless they are always radiant, we won’t know the cases we know about form a random sample
  • Further, unless the property of not being appeared to redly is radiant, for all we know there may be many cases where we have an inaccurate appearance of red
Radiance & Randomness
  • We also get randomness problems if we aren’t always aware of our appearances
  • If sometimes I have a red appearance, but don’t believe I have one, then it might be that the cases where I believe I have a red appearance are a non-random part of the sample
  • So the argument needs a self-awareness assumption
Radiance & Bootstrapping
  • But the big problem isn’t that the few actual failures of radiance threaten the randomness of the sample
  • Rather, it is that the assumptions needed to get the anti-dogmatist argument going are inconsistent
  • These are that appearances are radiant, that we are aware of appearances and that bootstrapping arguments are bad
Radiance & Bootstrapping
  • Assume that appearances are radiant
  • So if I’m appeared to redly, I believe that I’m appeared to redly, and this belief is justified
  • So I can reason as follows: I believe that’s a red appearance. And it is a red appearance. Introspection works again!
Radiance & Bootstrapping
  • Given radiance and self-awareness, I can do this over and over again
  • Eventually, I’ll have a large sample of cases where my introspective beliefs are accurate
  • So I can conclude on the basis of this little reflection that my introspective beliefs are generally accurate
Radiance & Bootstrapping
  • But this is crazy
  • It takes serious psychology to know that my introspective beliefs are accurate
  • A little armchair reasoning like this won’t cut it
  • This reasoning is as bad as the reasoning that Cohen parodies
  • Indeed, it is just that reasoning, with a different target
Radiance & Bootstrapping
  • If we really want to block all bootstrapping arguments, we have to reject not just dogmatism, but also the radiance and self-awareness assumptions
  • Otherwise an introspective bootstrapping argument will be licensed
  • But dogmatism without radiance and self-awareness doesn’t allow bootstrapping, so dogmatism can’t be faulted here
Bayesian Objection
  • Our final argument comes from principles of Bayesian epistemology
  • The dogmatist thinks that when we get E, our credence in E ⊃ p can go up
  • Or at least, she thinks that we can go from not being justified in believing E  p to being justified in believing it
  • But this can’t happen in Bayesian models
Bayesian Objection
  • We’ll work through a simple version of this argument
  • Let A be the proposition it appears to me that there is a hand
  • H is the proposition there is a hand
  • F is the proposition it falsely appears to me that there is a hand, i.e. A & ~H
  • And Pr is our prior probability
  1. Pr(A) < 1 (premise)
  2. Pr(F) > 0 (premise)
  3. Pr(F | A) ∙ Pr(A) = Pr(F & A) (def’n of conditional probability)
  4. Pr(F & A) = Pr(F) (from definition of F)
  5. Pr(F | A) ∙ Pr(A) = Pr(F) (from 3, 4)
  6. Pr(F | A) > Pr(F) (from 1, 2, 5)
  7. Pr(~F | A) = 1 - Pr(F | A) (probability theorem)
  8. Pr(~F) = 1 - Pr(F) (probability theorem)
  9. Pr(~F | A) < Pr(~F) (from 6, 7, 8)
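The derivation can be checked with toy numbers (the values 0.9 and 0.05 are my own illustration, not from the talk; any prior with Pr(A) < 1 and Pr(F) > 0 behaves the same way):

```python
# Toy numerical check of the nine-line derivation above.
# Hypothetical prior values, chosen only to satisfy the two premises.
pr_A = 0.9    # Pr(A): prior probability that it appears there is a hand
pr_F = 0.05   # Pr(F): prior probability of a false hand-appearance (F = A & ~H)

# Since F entails A, Pr(F & A) = Pr(F), so line 5 gives:
pr_F_given_A = pr_F / pr_A

# Line 6: conditioning on A raises the probability of F ...
assert pr_F_given_A > pr_F
# Line 9: ... and thereby lowers the probability of ~F.
assert 1 - pr_F_given_A < 1 - pr_F

print(f"Pr(~F) = {1 - pr_F:.4f}, Pr(~F | A) = {1 - pr_F_given_A:.4f}")
```

The key step is just that F entails A, so learning A can only concentrate probability on F.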
Bayesian Objection
  • All that is mathematically sound
  • But it usually gets a philosophical gloss
  • The last line is read as saying that the probability of ~F goes down when we get evidence A
  • This relies on an undefended claim, namely that we should update credences by conditionalisation
Bayesian Objection
  • If we grant conditionalisation, things look bad for the dogmatist
  • The dogmatist says that we can know, and justifiably believe ~F when our evidence is A, but not before
  • But it seems plausible that evidence that justifies a belief shouldn’t make its credence go down
Bayesian Objection
  • Can the use of conditionalisation here be defended?
  • I think it can’t
  • Conditionalisation usually is the right approach to updating
  • But it isn’t always the right approach in cases where uncertainty is relevant
Bayesian Objection
  • Game plan for the last few slides
  • Distinguish risk from uncertainty
  • Describe a model for representing uncertainty
  • Describe a new approach to updating in that model, one that blocks this argument
  • Give a dogmatist defence of that approach
By ‘uncertain’ knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed. (Keynes 1937: 114-5)
Representing Uncertainty
  • Traditional Bayesian models didn’t have an easy way to represent uncertainty
  • There is a common way this is done
  • Instead of representing credal states by a single probability function, we represent them by sets of probability functions
  • Say S is the agent’s representor iff it is the set of functions representing her
Representing Uncertainty
  • The idea is that the more uncertain p is, the larger the set {x: for some Pr in S, Pr(p) = x}
  • Now we can represent uncertainty in a distinct way from risk
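As a sketch (the propositions and numbers are hypothetical, chosen to echo Keynes’s examples above), a representor can be modelled as a set of probability assignments, with uncertainty about p measured by the spread of the values its members give to p:

```python
# Representor sketch: each element of S is one admissible probability
# function, modelled here as a dict from propositions to probabilities.
def value_set(S, p):
    """The set {x : for some Pr in S, Pr(p) = x}."""
    return {pr[p] for pr in S}

def spread(S, p):
    """Width of the value set; a larger spread means more uncertainty about p."""
    xs = value_set(S, p)
    return max(xs) - min(xs)

# Hypothetical credal state: sharp about the roulette wheel (pure risk),
# spread out about the copper price (Keynesian uncertainty).
S = [
    {"red_on_next_spin": 18 / 37, "copper_up_in_20y": 0.1},
    {"red_on_next_spin": 18 / 37, "copper_up_in_20y": 0.5},
    {"red_on_next_spin": 18 / 37, "copper_up_in_20y": 0.9},
]

print(spread(S, "red_on_next_spin"))           # risky, but no uncertainty
print(round(spread(S, "copper_up_in_20y"), 2)) # genuinely uncertain
```

On this picture the roulette proposition carries risk but no uncertainty (every member agrees on 18/37), while the copper proposition is uncertain because the members disagree widely.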
Updating Uncertainty
  • This kind of model is widely used in the literature
  • It is usually assumed, without much argument, that we should update S on evidence E by conditionalisation
  • That is, the new representor is derived from the old one by conditionalising each element
Updating Uncertainty
  • That is what I want to reject
  • I think the agent might sometimes use her new evidence to realise that she need not have been as uncertain as she was
  • So we’ll get from the old representor to the new one by first cutting some functions from the set, and second conditionalising the remaining members
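Here is one way the two-step rule might be sketched (the worlds, the representor, and the culling test are hypothetical illustrations of the idea, not a model from the talk):

```python
# Update a representor on evidence E in two steps:
#   (1) cull: drop probability functions the evidence lets us discard;
#   (2) conditionalise each surviving function on E.
def conditionalise(pr, E):
    """Standard conditionalisation of one function (dict world -> prob) on E."""
    pE = sum(p for w, p in pr.items() if w in E)
    return {w: (p / pE if w in E else 0.0) for w, p in pr.items()}

def update(S, E, keep):
    """Cull with the test `keep`, then conditionalise the remaining members."""
    return [conditionalise(pr, E) for pr in S if keep(pr, E)]

# Toy example: three worlds; the evidence rules out w3.
S = [
    {"w1": 0.05, "w2": 0.15, "w3": 0.8},
    {"w1": 0.4, "w2": 0.4, "w3": 0.2},
]
E = {"w1", "w2"}

# Hypothetical culling test: discard functions that made E very unlikely.
keep = lambda pr, E: sum(pr[w] for w in E) >= 0.5

new_S = update(S, E, keep)   # only the second function survives
print(new_S)                 # [{'w1': 0.5, 'w2': 0.5, 'w3': 0.0}]
```

Plain conditionalisation would keep both members, leaving the credence in w1 spread over {0.25, 0.5}; culling first narrows it to 0.5, which is how learning can outrun the evidence on this approach.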
Updating Uncertainty
  • The benefit of this approach is that it lets us say something quite attractive about the a priori rational credal state
  • It is somewhat intuitive that a priori we should treat all possibilities symmetrically
  • It is notoriously hard to implement this intuition consistently in a traditional Bayesian theory
  • But we can if we allow for uncertainty
Updating Uncertainty
  • The a priori rational credal state is represented by the set of all analytically coherent probability functions
  • A probability function is analytically coherent iff it assigns probability 0 to anything that could not turn out to be true and conforms to the Principal Principle
Updating Uncertainty
  • If we accept the usual conditionalisation approach to updating, we cannot accept that this is a rational a priori state
  • That’s because updating this state by conditionalisation does not allow for ampliative learning
  • But if updating involves culling the set, then we can learn things that go beyond our evidence
Updating Uncertainty
  • And dogmatists should be very pleased with this idea for representing the a priori
  • Our discussion of dogmatism started with the idea that we shouldn’t allow for a priori justification of substantive beliefs
  • Any initial credal state other than this one seems to adopt some substantive claims, at least about relative probability
Updating Uncertainty
  • If when the agent sees that she appears to have hands, she culls as well as conditionalises her representor, it won’t follow that her credence in ~F will lower
  • So that evidence might ground her belief that ~F
  • In short: the Bayesian objection to dogmatism seems to show up a problem for the Bayesian, not the dogmatist