The Postgridiot’s Guide to… PHILOSOPHY OF SCIENCE
Platonism about Theorising
(Or, Why You Shouldn’t Care Too Much About Being ‘Scientific’)
The question: Is there a ‘right way’ of formulating theories?
If so, what is that way?
W. V. Quine: “It is part of the scientist’s business to generalize or extrapolate from sample data, and so arrive at laws covering more phenomena than have been checked”
Our question has presumed that there is a right way to do science; but what might that way be?
There are, to be sure, general “rules of thumb” for specifying the marks of a good theory. The question is whether these can be sharpened into formal criteria by which we could infallibly generate new knowledge.
The original ideologues of the scientific method – and most especially Leibniz – indeed thought there were such criteria. But these ideologues were all philosophers, and philosophy would be in a poor state if we agreed on anything.
And the problem here lies in deciding how we ought to specify the criteria at all.
For the most part, however, these philosophers were generativists: they subscribed to the view that it is in principle possible to define a theory whereby one could deduce how likely a given conclusion is to be true, given a sufficient evidential basis.
The idea was to construct a deductive logic for science in which all disputes could be resolved. The mechanism was to show how scientific conclusions could be deduced from self-evident truths (clear-and-distinct ideas).
The progress of science soon threw Leibnizian confirmation theory into disrepute, however.
For one thing, it soon became apparent that some of the most fundamental truths of science were really not clear-and-distinct after all (cf. Euclidean vs. non-Euclidean geometry)
But then there were the paradoxes this view ran into; take Goodman’s, for example. Start with an innocent hypothesis – ‘all emeralds are green’ – which, on this view, each observed green emerald confirms.
Now let’s invent a new predicate, ‘grue’. And let the following be a definition for this new predicate:
x is grue if and only if either (x is examined before time t and x is green) or (x is examined after time t and x is blue).
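The definition can be put in symbols (a sketch; ‘Ex’ is illustrative shorthand for ‘is examined’), with parentheses making the disjunctive scope explicit:

```latex
\mathrm{Grue}(x) \;\leftrightarrow\; \bigl(\mathrm{Ex}_{<t}(x) \land \mathrm{Green}(x)\bigr) \;\lor\; \bigl(\mathrm{Ex}_{>t}(x) \land \mathrm{Blue}(x)\bigr)
```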
But now this leads to an unfortunate result: it is a logically valid conclusion that each examination of a green emerald before time t simultaneously confirms both the hypothesis that emeralds are green and the hypothesis that emeralds are grue.
The paradox gains its bite from the fact that we are in no position to distinguish between confirmation of a genuine predicate – green – and a manufactured one – grue.
And this is particularly bad if you think, as Leibniz did, that logic would show you how theories are confirmed – for it was logic that got us into this mess in the first place (or, more properly, the real culprit is the semantics of disjunction)
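The confirmation point can be made concrete with a small sketch (the cutoff date and the observation records are invented for illustration): under naive instance confirmation, every green emerald examined before t is an instance of both predicates at once.

```python
from datetime import date

T = date(2100, 1, 1)  # the arbitrary cutoff time t

def is_green(colour):
    return colour == "green"

def is_grue(colour, examined_on):
    # Grue: examined before t and green, or examined after t and blue
    return (examined_on < T and colour == "green") or \
           (examined_on >= T and colour == "blue")

# Every emerald examined so far (all before t, all green)
observations = [("green", date(2024, 5, 1)), ("green", date(2025, 1, 3))]

# Each observation is an instance of BOTH hypotheses
confirms_green = all(is_green(c) for c, _ in observations)
confirms_grue = all(is_grue(c, d) for c, d in observations)
print(confirms_green, confirms_grue)  # True True
```

Nothing in the data, in other words, discriminates between the two hypotheses; the divergence only shows up for emeralds examined after t.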
It was in order to avoid the shortcomings of confirmation theory that philosophers turned to the hypothetico-deductive model of theorising.
The new concept here is falsifiability by data: hypotheses are corroborated so long as the predictions deduced from them come out true, and rejected as false when a prediction fails
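A minimal sketch of the schema (the `hd_test` helper and the sample data are invented for illustration): deduce predictions from the hypothesis, check them against observations, and reject the hypothesis as soon as one prediction fails.

```python
def hd_test(predictions, observations):
    """Hypothetico-deductive check: a hypothesis is falsified if any of
    its deduced predictions conflicts with an observation; otherwise it
    is merely corroborated."""
    for item, predicted in predictions.items():
        observed = observations.get(item)
        if observed is not None and observed != predicted:
            return "falsified"
    return "corroborated"

# The hypothesis "all emeralds are green" predicts each sample is green.
predictions = {"emerald_1": "green", "emerald_2": "green"}

print(hd_test(predictions, {"emerald_1": "green", "emerald_2": "green"}))  # corroborated
print(hd_test(predictions, {"emerald_1": "green", "emerald_2": "blue"}))   # falsified
```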
The immediate problem is that the hypothetico-deductive model also assumes that confirmation is exhausted by the instantiation of properties, and so it is susceptible to Goodman’s paradox above.
But then there is more troubleshooting to be done: this time to do with the assumption that falsifiability is the only criterion by which we know a theory to be the right one.
Problems emerge when we ask what it is that is supposed to be falsified. At first glance, the answer seems quite obvious: theories. But then we have to ask how they are falsified, and this is where we run into problems.
Notice that the hypothetico-deductive method has to make the following claim: that theories are confirmed in proportion to their data. But this assumes a strong isomorphism between theory and data that we may well have reason to deny.
For instance, what if it were possible that theories were underdetermined by their data?
Notice that underdetermined does not necessarily mean unjustified; rather, it means that there are many possible, non-equivalent theories, each of which nevertheless seems to be supported by the same data.
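The idea can be illustrated with a toy numerical sketch (the functions and sample points are invented): two non-equivalent theories can agree on every observation gathered so far, so the data alone cannot decide between them.

```python
# Two non-equivalent "theories" that agree on every observed data point
# but diverge on unobserved ones -- the data underdetermine the choice.

def theory_a(x):
    return x  # "the quantity grows linearly"

def theory_b(x):
    return x + (x - 1) * (x - 2) * (x - 3)  # a cubic alternative

observed_xs = [1, 2, 3]  # the only inputs examined so far

# Both theories fit the observed data perfectly...
assert all(theory_a(x) == theory_b(x) for x in observed_xs)

# ...yet they disagree about the unexamined case x = 4.
print(theory_a(4), theory_b(4))  # 4 10
```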
Consider, for instance, the debate between proponents of the Minimalist Program and functional grammars like LFG.
For the most part, the data for the two rival theories is the same: both are kinds of generative grammar that are designed to give a general account of syntax on the basis of supposed universal laws.
But whereas MP (simplifying somewhat) seeks to explain those laws on the basis of supposed mental structures, LFG remains officially agnostic (and unofficially hostile) towards this kind of theory
If both the hypothetico-deductive method and confirmation theory have problems qua theories of the right way to go about being ‘scientific’, what’s the other option?
We need only say that the following are indicators (though not, to be sure, fool-proof ones) of a good theory
Thanks to Ricardo for access to his notes for Research Methods in Linguistics, and his upcoming talk “Linguistics as an Immature Science”
Carnap, R. The Logical Syntax of Language (Kegan Paul: 1937)
Look, B. “Gottfried Wilhelm Leibniz” Stanford Encyclopedia of Philosophy http://plato.stanford.edu/entries/leibniz/
Peckhaus, V. “Leibniz’ Influence on 19th Century Logic” Stanford Encyclopedia of Philosophy http://plato.stanford.edu/entries/leibniz-logic-influence/
Psillos, S. and Curd, M. (eds.) The Routledge Companion to the Philosophy of Science (Routledge: 2008)
Quine, W. V. O. From a Logical Point of View (Harvard: 1953)
- Word and Object (MIT: 1960)