Minds and Machines. Summer 2011 Tuesday, 8/2.
608. No supposition seems to me more natural than that there is no process in the brain correlated with associating or with thinking; so that it would be impossible to read off thought-processes from brain-processes. I mean this: if I talk or write there is, I assume, a system of impulses going out from my brain and correlated with my spoken or written thoughts. But why should the system continue further in the direction of the centre? Why should this order not proceed, so to speak, out of chaos? . . .
609. It is thus perfectly possible that certain psychological phenomena cannot be investigated physiologically, because physiologically nothing corresponds to them.
Properties of Connectionist Models:
Implementational vs. Cognitive connectionism.
Reasons to think that Connectionism is incompatible with Propositional Modularity.
1. Consider a connectionist network that’s trained to give “yes”/“no” answers to a set of 16 questions, e.g. “Do dogs have fur?”, “Do fish have fur?”, etc.
2. Consider what happens when we compare the original network with one that has been trained on an additional piece of knowledge.
Intuitively, we want to say that the two networks share many of the same beliefs.
Yet the two nets may have no subset of weights in common! Their commonality is invisible at the level of units and weights. But if there were some discrete states in the connectionist model that played the role of beliefs, we would expect there to be lots of commonalities.
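The point can be made concrete with a toy experiment (a hedged sketch, not from the lecture itself: the one-hot encoding, network sizes, training scheme, and all names below are illustrative assumptions). Train two networks of the same shape, from different random starting weights, on the same 16 answers; both end up “knowing” the same propositions, yet no individual weight need be shared between them:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """A minimal one-hidden-layer network; sizes are illustrative choices."""
    def __init__(self, n_in, n_hid, seed):
        rng = random.Random(seed)
        self.W1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [rng.uniform(-1, 1) for _ in range(n_hid)]
        self.W2 = [rng.uniform(-1, 1) for _ in range(n_hid)]
        self.b2 = rng.uniform(-1, 1)

    def forward(self, x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.W1, self.b1)]
        y = sigmoid(sum(w * hi for w, hi in zip(self.W2, h)) + self.b2)
        return h, y

    def train(self, data, epochs=5000, lr=1.0):
        # Plain stochastic gradient descent on squared error.
        for _ in range(epochs):
            for x, t in data:
                h, y = self.forward(x)
                dy = (y - t) * y * (1 - y)       # output-layer error signal
                for j, hj in enumerate(h):
                    dh = dy * self.W2[j] * hj * (1 - hj)
                    self.W2[j] -= lr * dy * hj
                    for i, xi in enumerate(x):
                        if xi:                    # inputs are one-hot
                            self.W1[j][i] -= lr * dh * xi
                    self.b1[j] -= lr * dh
                self.b2 -= lr * dy

# 16 questions as one-hot input vectors; the yes/no answers simply
# alternate here (which questions get which answers is arbitrary).
questions = [[1 if i == q else 0 for i in range(16)] for q in range(16)]
answers = [q % 2 for q in range(16)]
data = list(zip(questions, answers))

net_a = TinyNet(16, 8, seed=0)
net_b = TinyNet(16, 8, seed=1)
net_a.train(data)
net_b.train(data)

# Both nets give the same 16 yes/no answers...
same_beliefs = all(round(net_a.forward(x)[1]) == round(net_b.forward(x)[1]) == t
                   for x, t in data)
# ...yet no individual weight coincides between them.
shared_weights = any(abs(wa - wb) < 1e-6
                     for ra, rb in zip(net_a.W1, net_b.W1)
                     for wa, wb in zip(ra, rb))
print(same_beliefs, shared_weights)
```

The two nets agree on every answer while their weight matrices match nowhere, which is exactly the commonality that is invisible at the level of units and weights.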
Ramsey, Stich, and Garon suggest that there are no discrete, semantically evaluable, and causally potent states in a connectionist network that could plausibly be identified with beliefs. So if connectionism is the correct model of the mind, we must deny that the mind really contains entities like beliefs.
Objection 1: Connectionist models do not really violate propositional modularity, since the propositions the system has learned are coded in functionally discrete ways, though this may not be obvious.
(Think of the way propositions are stored in a computer, at physically scattered memory addresses. Still, if one knew enough about the system, one could erase any one sentence by tampering with the contents of the relevant memory addresses.)
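The analogy can be sketched in a few lines (a hedged illustration: the “memory,” the addresses, and the stored sentences are all made up). Sentences sit at scattered, non-contiguous addresses, and any one of them can be erased without disturbing the others:

```python
# A toy "memory": sentences stored at scattered, non-contiguous addresses.
memory = {
    0x1A40: "Dogs have fur.",
    0x03F2: "Fish have scales.",
    0x7C18: "Cats have paws.",
}

def erase(mem, address):
    """Remove exactly one stored sentence; everything else is untouched."""
    del mem[address]

erase(memory, 0x03F2)          # tamper only with the relevant address
print(sorted(memory.values()))
```

This is what functional discreteness amounts to: the storage is physically scattered, yet each proposition can be targeted individually.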
Reply: This is a possibility, but at present, there is no reason to take it seriously.
Objection 2: The propositions are encoded in the patterns of activation of the hidden units when a given question is presented to the network.
Reply: Such patterns of activation are not enduring states of the network. So it’s implausible that they play the role of knowledge or beliefs.
Objection 3: Beliefs should not be identified with activation patterns (or any transient states) but with dispositions to produce activation patterns given certain inputs.
Reply: Dispositions are enduring states, but they are not the right sorts of enduring states to be identified with beliefs. In particular, they do not seem to be capable of playing a causal role.
Other responses to Stich et al. are:
1. Question the commitment to propositional modularity (Dennett).
2. Accept that if (cognitive) connectionism is true, then Folk Psychology is false, but argue that this conditional is a reason to reject connectionist models of the mind rather than Folk Psychology.
Fodor would go with 2, but he also argues against connectionist models of (certain parts of) the mind like this (this is the systematicity argument of Fodor and Pylyshyn):
A. Human thought is systematic: anyone who can think that John loves Mary can also think that Mary loves John.
B. Classical (language-of-thought) architectures explain systematicity; connectionist models do not, except insofar as they merely implement classical architectures.
C. Connectionist models are not good models of human thought.