Incompleteness. Suppose L is a logic and H(T,x) is a statement in L expressing that Turing machine T halts on input x; thus H(T,x) is true if and only if T halts on input x. Recall that L is sound and effective. So: if H(T,x) is provable in L, then it is true, so T halts on input x.
There is a Turing machine M such that M halts on input i if and only if H(Ti,i) is provable in L.
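Such a machine M can be sketched as a brute-force proof search: effectiveness of L only requires that checking whether a string is a valid proof be decidable. A minimal Python sketch with a toy proof checker (the names `make_checker`, `is_proof`, and the "proof:" encoding are illustrative, not from the source):

```python
from itertools import count, product

# Toy alphabet over which candidate "proofs" are enumerated.
ALPHABET = "profs:"

def make_checker(axioms):
    """Toy stand-in for L's proof checker: a candidate string is a valid
    proof of `statement` iff it equals 'proof:' + statement and the
    statement is one of the given axioms. Effectiveness of L only asks
    that proof-checking be decidable, as it is here."""
    def is_proof(candidate, statement):
        return statement in axioms and candidate == "proof:" + statement
    return is_proof

def M(statement, is_proof):
    """The machine M: enumerate every string in length order and halt as
    soon as one is a valid proof of `statement`. M halts iff `statement`
    is provable; otherwise the search runs forever."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            candidate = "".join(chars)
            if is_proof(candidate, statement):
                return candidate  # halt: a proof was found
```

Here `M("s", make_checker({"s"}))` halts with the proof "proof:s", while searching for a proof of an unprovable statement would run forever, mirroring the fact that M halts on input i exactly when H(Ti,i) is provable.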
Proof: Suppose Tj halts on input j. Then, by the definition of M, H(Tj,j) is provable in L; but Tj is built to diagonalize against M, looping exactly when H(Tj,j) is provable. Thus Tj loops on input j. Contradiction.
What does “restricted HL” need to derive to obtain the contradiction?
Thus no sufficiently powerful logic L can derive the statement “(X is provable in L) implies X.”
If L is a logic that can derive the statement “(X is provable in L) implies X,” then either L is inconsistent (it can derive a contradiction) or L cannot be represented by a Turing machine (L is not effective).
Then in L ∪ G(L) one can derive H(Tj,j). But one cannot derive G(L ∪ G(L))!
We can advance farther and farther but there is always an infinite distance between us and ultimate understanding.
The process of strengthening Peano arithmetic using Godel’s theorem can be made more formal:
Make HL’ into a logic -- say one can prove formula H(T,x) if, for some logic L, HL’ ⊢ L and H(T,x) can be proved in L.
The essence of consciousness -- awareness of ourselves. The ability to view our own thought processes as objects “from the outside.” To infer HL’ ⊢ HL’.
Note that such sensing devices would be able not only to observe the machine M but also the sensing devices themselves.
For every machine there is a truth which it cannot produce as being true, but which a mind can. This shows that a machine cannot be a complete and adequate model of the mind. It cannot do everything that a mind can do, since however much it can do, there is always something which it cannot do, and a mind can.
The paradoxes of consciousness arise because a conscious being can be aware of itself, as well as of other things, and yet cannot really be construed as being divisible into parts. It means that a conscious being can deal with Goedelian questions in a way in which a machine cannot, because a conscious being can both consider itself and its performance and yet not be other than that which did the performance.
A machine can be made in a manner of speaking to "consider" its own performance, but it cannot take this "into account" without thereby becoming a different machine, namely the old machine with a "new part" added. But it is inherent in our idea of a conscious mind that it can reflect upon itself and criticize its own performances, and no extra part is required to do this: it is already complete, and has no Achilles' heel.
We can even begin to see how there could be room for morality, without its being necessary to abolish or even to circumscribe the province of science. Our argument has set no limits to scientific enquiry: it will still be possible to investigate the working of the brain. It will still be possible to produce mechanical models of the mind.
Only, now we can see that no mechanical model will be completely adequate, nor any explanations in purely mechanist terms. We can produce models and explanations, and they will be illuminating: but, however far they go, there will always remain more to be said. There is no arbitrary bound to scientific enquiry: but no scientific enquiry can ever exhaust the infinite variety of the human mind.
In recent years I have been less zealous to defend myself, and often miss articles altogether. There may be some new decisive objection I have altogether overlooked. But the objections I have come across so far seem far from decisive.
However, Marvin Minsky has reported that Kurt Gödel told him personally that he believed that human beings had an intuitive, not just computational, way of arriving at truth and that therefore his theorem did not limit what can be known to be true by humans.
Why do I believe that consciousness involves noncomputable ingredients? The reason is Gödel's theorem. I sat in on a course when I was a research student at Cambridge, given by a logician who made the point about Gödel's theorem that the very way in which you show the formal unprovability of a certain proposition also exhibits the fact that it's true. I'd vaguely heard about Gödel's theorem — that you can produce statements that you can't prove using any system of rules you've laid down ahead of time.
But what was now being made clear to me was that as long as you believe in the rules you're using in the first place, then you must also believe in the truth of this proposition whose truth lies beyond those rules. This makes it clear that mathematical understanding is something you can't formulate in terms of rules. That's the view which, much later, I strongly put forward in my book The Emperor's New Mind.
In The Emperor's New Mind [Penrose, 1989] and especially in Shadows of the Mind [Penrose, 1994], Roger Penrose argues against the “strong artificial intelligence thesis,” contending that human reasoning cannot be captured by an artificial intellect because humans detect nontermination of programs in cases where digital machines do not. Penrose thus adapts the similar argumentation of Lucas, which was based on Gödel's incompleteness results, to one based instead on the undecidability of the halting problem, as shown by Turing. Penrose's conclusions have been roundly critiqued, for example, in [Avron, 1998; Chalmers, 1995a; LaForte et al., 1998; Putnam, 1995].
1. Collect all current sound human knowledge about non-termination.
2. Reduce said knowledge to a computer program.
3. Create a self-referential version of said program.
4. Derive a contradiction.
The conclusion (by reductio ad absurdum) is that the second step is invalid: A program cannot incorporate everything humans know!
(The reasoning is that humans can know that a self-referential version of this program does not halt, but the computer program cannot know this.)
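The four steps can be sketched in code. Here `predictor` stands in for the step-2 program, with `predictor(p) == True` meaning “p never halts”; all names are illustrative, not from the source:

```python
def diagonalize(predictor):
    """Step 3: build a self-referential program that consults the
    claimed non-termination predictor about itself, then does the
    opposite of whatever the predictor says."""
    def trouble():
        if predictor(trouble):   # predictor claims trouble loops forever...
            return "halted"      # ...so halt, refuting the claim
        while True:              # predictor claims trouble halts...
            pass                 # ...so loop forever, refuting it again
    return trouble

# Step 4, for the one branch we can actually run: a predictor that
# claims every program loops is refuted the moment trouble() returns.
claims_all_loop = lambda prog: True
assert diagonalize(claims_all_loop)() == "halted"
```

Either way the predictor is wrong about `trouble`, which is the contradiction of step 4: no program can correctly settle non-termination for all programs, itself included.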
Ramanujan had several extraordinary characteristics which set him apart from the majority of mathematicians. One was his lack of rigor. Very often he would simply state a result which, he would insist, had just come to him from a vague, intuitive source, far out of the realm of conscious probing. In fact, he often said that the goddess Namagiri inspired him in his dreams. This happened time and again, and what made it all the more mystifying -- perhaps even imbuing it with a certain mystical quality -- was the fact that many of his “intuition theorems” were wrong.
Ramanujan: What problem, tell me?
I read out the question from the Strand Magazine.
Ramanujan: Please take down the solution. (He dictated a continued fraction.)
The first term was the solution I had obtained. Each successive term represented successive solutions for the same type of [problem] as the number of houses in the street would increase indefinitely. I was amazed.
Mahalanobis: Did you get the solution in a flash?
Ramanujan: Immediately I heard the problem, it was clear that that solution was obviously a continued fraction. I then thought, “Which continued fraction?” and the answer came to my mind. It was as simple as this.
Johann Martin Zacharias Dase, who lived from 1824 to 1861 and was employed by various European governments to perform computations, is an outstanding example. He not only could multiply two numbers, each of 100 digits, in his head; he also had an uncanny sense of quantity. That is, he could just “tell”, without counting, how many sheep were in a field, or words in a sentence, and so forth, up to about 30 … . Incidentally, Dase was not an idiot.
In 1973, the Whitehead problem in group theory was shown to be undecidable in standard set theory. In 1977, Kirby, Paris and Harrington proved that a statement in combinatorics, a version of Ramsey's theorem, is undecidable in the axiomatization of arithmetic given by the Peano axioms but can be proven to be true in the larger system of set theory. Kruskal's tree theorem, which has applications in computer science, is also undecidable from the Peano axioms but provable in set theory. Goodstein's theorem is a relatively simple statement about natural numbers that is undecidable in Peano arithmetic.
Gregory Chaitin produced undecidable statements in algorithmic information theory and in fact proved his own incompleteness theorem in that setting.
Suppose we fix a particular consistent axiomatic system for the natural numbers, say Peano's axioms. Then there exists a constant L (which only depends on the particular axiomatic system and the choice of definition of complexity) such that there is no string s for which the statement “K(s) ≥ L” (that the Kolmogorov complexity of s is at least L) can be proven within the system.
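Upper bounds on complexity are easy to certify -- any program that prints s bounds K(s) from above -- and it is lower bounds that Chaitin's theorem puts out of reach. A toy sketch, using zlib compression as a computable stand-in for description length (an assumption of this illustration; zlib gives only a crude upper bound, never the true K(s)):

```python
import zlib

def description_length(s: bytes) -> int:
    """A computable upper bound on the complexity of s: the length of
    its zlib-compressed form. True Kolmogorov complexity K(s) is
    uncomputable, and Chaitin's theorem says no fixed formal system can
    certify K(s) >= L for any s once L is large enough."""
    return len(zlib.compress(s, 9))

regular = b"a" * 1000            # highly patterned: tiny description
varied  = bytes(range(256)) * 4  # 1024 bytes with far less repetition

assert description_length(regular) < description_length(varied)
```

The comparison only witnesses upper bounds; nothing here (or in any algorithm) can prove that the `varied` string genuinely requires a long description.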
There is no formula A such that A(x) is true if and only if x is a standard integer.
In fact, in any formal system there are models of the real numbers that have the same size as the integers.
Gödel's theorem thus shows that there must always exist such unusual, unintended interpretations of the system; as Henkin says, quoted in [Turquette 50]:
Similarly, Polanyi says, though only in connection with the second theorem:
Applied to minds, it would translate to some principled limitation of the reflexive cognitive abilities of the subject: certain truths about oneself must remain unrecognized if the self-image is to remain consistent [Hofstadter 79, p. 696].