
91.304 Foundations of (Theoretical) Computer Science

91.304 Foundations of (Theoretical) Computer Science. Chapter 3 Lecture Notes (Section 3.1: Turing Machines) David Martin dm@cs.uml.edu With some modifications by Prof. Karen Daniels, Fall 2009.



  1. 91.304 Foundations of (Theoretical) Computer Science Chapter 3 Lecture Notes (Section 3.1: Turing Machines) David Martin dm@cs.uml.edu With some modifications by Prof. Karen Daniels, Fall 2009 This work is licensed under the Creative Commons Attribution-ShareAlike License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

  2. “Manners are not taught in lessons,” said Alice. “Lessons teach you to do sums, and things of that sort.” “And you do Addition?” the White Queen asked. “What's one and one and one and one and one and one and one and one and one and one?” “I don't know,” said Alice. “I lost count.” “She can't do Addition,” the Red Queen interrupted. Excerpt: Through the Looking Glass, Lewis Carroll

  3. Turing machine syntax • Definition A Turing Machine is an automaton M = (Q, Σ, Γ, δ, q0, qacc, qrej) where • Q is a finite set of states • Σ is an input alphabet that does not include "⊔", the special blank character • Γ is a tape alphabet satisfying • ⊔ ∈ Γ • Σ ⊆ Γ • δ : Q × Γ → Q × Γ × {L,R} is the transition function • q0 is the initial state • qacc is the single accepting state • qrej is the single rejecting state
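The 7-tuple translates directly into a data structure. The following is a minimal Python sketch, assuming made-up names (TM, BLANK) and using "_" in place of the blank symbol ⊔; it is illustrative, not part of the lecture.

```python
# Minimal sketch of the 7-tuple; the names and the "_" blank are illustrative.
from dataclasses import dataclass
from typing import Dict, Set, Tuple

BLANK = "_"   # stands in for the blank character ⊔

@dataclass
class TM:
    Q: Set[str]                                          # finite set of states
    Sigma: Set[str]                                      # input alphabet (blank not allowed)
    Gamma: Set[str]                                      # tape alphabet: BLANK in Gamma, Sigma a subset of Gamma
    delta: Dict[Tuple[str, str], Tuple[str, str, str]]   # (state, symbol) -> (new state, new symbol, "L" or "R")
    q0: str                                              # initial state
    qacc: str                                            # the single accepting state
    qrej: str                                            # the single rejecting state
```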

  4. Differences in input mechanism • A TM has a "tape head" that points to exactly one cell on its tape, which extends infinitely to the right • At each transition, the TM looks at the current state and the current cell, and decides what new state to move to, what to write on the current cell, and whether to move one cell to the left or one cell to the right • Hence the transition function δ : Q × Γ → Q × Γ × {L,R} • Each tape cell initially contains the blank character ⊔ • Our previous automata (DFAs, NFAs, PDAs) all had a separate read-only input stream • But in a TM, the input is given all at once and just written onto the left end of the tape, overwriting the blanks there a | b | a | b | a | b | ⊔ | ⊔ | ⊔ | ⊔ | ⊔ | ⊔ | ⊔ | … in state q7

  5. Turing machine computation • As with PDAs, we define a set of instantaneous descriptions (IDs) and then show which memory-state snapshots may follow each other, according to the program M. • First, the snapshots: in a TM, ID(M) = Γ* Q Γ*, the set of strings of the form u q v with u, v ∈ Γ* and q ∈ Q • Each element of this set represents the entire tape contents, the current state, and the location of the tape head • In the example below, the ID is ab q7 abab⊔⊔ • So the character to the right of the state name is the "current" character • The tape always has infinitely many blanks on the right; we can write them or omit them as we please a | b | a | b | a | b | ⊔ | ⊔ | ⊔ | ⊔ | ⊔ | ⊔ | ⊔ | … in state q7
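An ID u q v can be modeled by splitting the tape at the head. The Config class and initial_config helper below are hypothetical names building on the TM sketch above; trailing blanks are simply omitted, as on the slide.

```python
# Hypothetical encoding of an ID u q v: the tape is split at the head.
from dataclasses import dataclass

@dataclass
class Config:
    left: str    # u: everything strictly to the left of the head
    state: str   # q: the current state
    right: str   # the current cell followed by the rest of the written tape (v)

def initial_config(M: TM, x: str) -> Config:
    # The input is written on the left end of the otherwise-blank tape and the
    # head starts on the leftmost cell, so the initial ID is  q0 x.
    return Config(left="", state=M.q0, right=x if x else BLANK)

# Example: Config("ab", "q7", "abab") encodes the ID  ab q7 abab  from the slide.
```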

  6. Turing machine computation • Two IDs are related to each other (by ⊢) if one can lead to the other according to the δ function • So we look at all of the things that δ can say, starting with right moves: • Suppose δ(q,b) = (t,c,R) where • R means "right move" • q ∈ Q - {qacc, qrej} and b ∈ Γ (states in green) • t ∈ Q and c ∈ Γ • Then u q b v ⊢ u c t v where u, v ∈ Γ* are undisturbed, the state has changed from q to t, the tape cell has changed from b to c, and the head has moved one character to the right (over the now-changed character)

  7. Turing machine computation • Left moves • Suppose δ(q,b) = (t,c,L) where • q ∈ Q - {qacc, qrej} and b ∈ Γ (states in green) • t ∈ Q and c ∈ Γ • Then u a q b v ⊢ u t a c v where u, v ∈ Γ* and a ∈ Γ are undisturbed, the state has changed from q to t, the tape cell has changed from b to c, and the head has moved one character to the left • This says that one ID can lead to another ID when δ says to move left and there is a character a ∈ Γ to the left. What if there is no such character?

  8. Turing machine computation • Left moves at left edge of tape • Suppose δ(q,b) = (t,c,L) where • q ∈ Q - {qacc, qrej} and b ∈ Γ (states in green) • t ∈ Q and c ∈ Γ • Then q b v ⊢ t c v where v ∈ Γ* is undisturbed, the state has changed from q to t, and the tape cell has changed from b to c • Where does this put the tape head in this case? • Note we have not explicitly covered the case where δ(q,b) = (t,c,L) and q ∈ {qacc, qrej} • Or when we move R instead of L • Conclusion: well, if the current ID is u q b v and q ∈ {qacc, qrej}, then no "next ID" is possible. We say that the TM halts
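Taken together, slides 6 through 8 define the yields relation ⊢. A sketch of it as a single step function, covering right moves, left moves, left moves at the left edge, and halting, is shown below; it builds on the hypothetical TM and Config sketches above.

```python
# Sketch of one application of the yields relation: ID in, next ID out
# (or None if the machine has halted in qacc or qrej).
from typing import Optional

def step(M: TM, c: Config) -> Optional[Config]:
    if c.state in (M.qacc, M.qrej):
        return None                                   # no next ID: the TM halts
    b = c.right[0] if c.right else BLANK              # current cell (blank if off the written part)
    rest = c.right[1:]
    t, new_sym, move = M.delta[(c.state, b)]
    if move == "R":
        # u q b v  ⊢  u c t v : write the new symbol, move the head one cell right
        return Config(c.left + new_sym, t, rest if rest else BLANK)
    if c.left:
        # u a q b v  ⊢  u t a c v : write the new symbol, move the head left onto a
        return Config(c.left[:-1], t, c.left[-1] + new_sym + rest)
    # q b v  ⊢  t c v : left move at the left edge; the head stays on the leftmost cell
    return Config("", t, new_sym + rest)
```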

  9. Some Ways to Describe Turing Machine Computation • Implementation-level description • Instantaneous descriptions (IDs) specifying snapshots of tape and read-write head position as computation progresses. • Formal description (7-tuple) • Detailed state diagram. • We’ll discuss all 4 ways using Turing machine M1 in textbook (p. 138, 139, 145) for language B = { w#w | w ∈ {0,1}* } • We’ll also discuss Turing machine M2 in textbook (p. 143, 144) for language A = { 0^(2^n) | n ≥ 0 }

  10. Implementation-Level Description

  11. Instantaneous Descriptions (Snapshots) Sample Input: 011000#011000
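The zig-zag idea behind M1 (cross off a symbol on the left of #, find the corresponding uncrossed symbol on the right, compare, repeat) can be sketched directly in Python. This is an assumed implementation-level stand-in that works on a list of cells, not a transcription of the actual state diagram; the function name is made up.

```python
# Implementation-level sketch of the zig-zag check for strings of the form w#w.
def zigzag_w_hash_w(s: str) -> bool:
    tape = list(s)
    if tape.count("#") != 1:
        return False                          # no single # -> reject
    sep = tape.index("#")
    i = 0
    while i < sep:
        sym = tape[i]
        tape[i] = "x"                         # cross off on the left of # ...
        j = sep + 1                           # ... then zig right to the first uncrossed cell
        while j < len(tape) and tape[j] == "x":
            j += 1
        if j == len(tape) or tape[j] != sym:
            return False                      # mismatch, or right side exhausted -> reject
        tape[j] = "x"                         # cross off the matching symbol
        i += 1
    # everything left of # is crossed off; accept only if nothing uncrossed remains
    return all(c == "x" for c in tape[sep + 1:])

# zigzag_w_hash_w("011000#011000") -> True ; zigzag_w_hash_w("011000#011001") -> False
```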

  12. Formal Description and Detailed State Diagram

  13. Detailed State Diagram

  14. Implementation-Level Description

  15. Formal Description (7-tuple)

  16. Instantaneous Descriptions (Snapshots) Sample Input: 0000
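M2's sweep-and-halve strategy (cross off every other 0 on each pass; the count must stay even until a single 0 remains) can be sketched the same way. Again this is an illustrative stand-in for the state diagram, not a transcription of it.

```python
# Implementation-level sketch of the test for 0^(2^n): repeatedly halve the
# number of surviving 0s by crossing off every other one.
def power_of_two_zeros(s: str) -> bool:
    if s == "" or any(c != "0" for c in s):
        return False                          # input alphabet is {0}, and 2^n >= 1
    tape = list(s)
    while True:
        zeros = sum(1 for c in tape if c == "0")
        if zeros == 1:
            return True                       # a single 0 survived -> accept
        if zeros % 2 == 1:
            return False                      # odd (and > 1) count cannot be a power of two -> reject
        seen = 0
        for i, c in enumerate(tape):          # sweep left to right, crossing off every other 0
            if c == "0":
                seen += 1
                if seen % 2 == 0:
                    tape[i] = "x"

# power_of_two_zeros("0000") -> True ; power_of_two_zeros("000000") -> False
```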

  17. Detailed State Diagram

  18. Language recognized by TM • Finally, we let ⊢* be the transitive, reflexive closure of ⊢. So if α and β are IDs, the statement α ⊢* β means "the TM can go from α to β in 0 or more steps" • The language recognized by M is L(M) = { x ∈ Σ* | q0 x ⊢* u qacc v for some u, v ∈ Γ* } • Translation? • Note x ∈ Σ*, not Γ*
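With the hypothetical step and initial_config sketches from above, membership in L(M) can be tested by iterating ⊢ from the initial ID q0 x. Because a TM may loop, the sketch adds a step budget; the budget is an artifact of the simulation, not part of the definition of L(M).

```python
# Sketch: simulate M on x and report accept / reject / no verdict.
def run(M: TM, x: str, max_steps: int = 10_000) -> str:
    c = initial_config(M, x)
    for _ in range(max_steps):
        if c.state == M.qacc:
            return "accept"        # q0 x ⊢* u qacc v for some u, v, so x ∈ L(M)
        if c.state == M.qrej:
            return "reject"
        c = step(M, c)
    return "no verdict"            # budget exhausted; M may be looping on x
```

For a decider, the "no verdict" case never arises if the budget is large enough, since a decider halts on every input.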

  19. TM language classes • Definition A language L is Turing-recognizable if there exists a TM M such that L = L(M). • Synonym: L is recursively enumerable, abbreviated "r.e." • Definition The class of all Turing-recognizable languages is Σ1 = { L ⊆ Σ* | L is Turing-recognizable } • The textbook does not assign a name like this; it just says "class of TM-recognizable langs" • Beware: The class Σ1 is not an alphabet like Σ • The naming is unfortunate but better than some of the alternatives

  20. Deciders • We've seen that when you start a TM with an input x, it can do three distinct things: • Accept x • Reject x • Run forever without accepting or rejecting x • We call this "looping", meaning that the TM runs forever. (The "loop" might not be so simple; the point is that it runs forever.) • Some TMs always accept or reject and never loop on any input whatsoever. You could easily write an example of one. A TM with this property is called a decider. • A decider always halts on every input

  21. Decidable languages • Definition A language L is decidable if there exists a decider TM M such that L = L(M) • Synonyms: L is "computable" and "recursive" • It is in general not easy to tell whether a language is decidable or not • Definition The class of all Turing-decidable languages is Σ0 = { L ⊆ Σ* | L is Turing-decidable } • Note Σ0 (decidable) versus Σ1 (recognizable) versus Σ (alphabet)

  22. Decidable versus recognizable • Fact (obvious) Σ0 ⊆ Σ1 • Every decider is automatically a recognizer too • Fact (not at all obvious) Σ0 ≠ Σ1 • This means that there exists some language H ∈ Σ1 - Σ0 • H is a language that can be recognized by some TM, but can't be recognized by any TM that always halts! • Fact (not at all obvious) Σ1 ≠ ALL • This means that there exists some language H2 ∈ ALL - Σ1 • H2 is a language that can't even be recognized by any TM

  23. Ultimately • [Venn diagram of language classes: FIN ⊆ REG ⊆ CFL ⊆ Σ0 ⊆ Σ1 ⊆ ALL] • Each point is a language in this Venn diagram

  24. Reminder • The decidable languages: Σ0 • The recognizable languages: Σ1

  25. Closure properties of Σ0 and Σ1 • Σ1 is closed under ∪, ∩, concatenation, Kleene star, and reversal • Proofs for ∪ and ∩ are similar to the NFA constructions we used, if you use a 2-tape TM • Proof for reversal is also easy with a 2-tape TM • Concatenation and star are somewhat harder • Σ0 is closed under all of these operations and complement as well
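As an illustration of the union case, two recognizers can be simulated in lockstep (the same idea as the slide's 2-tape construction), accepting as soon as either accepts. The function below is a hypothetical sketch built on the earlier run/step sketches, not the textbook's proof.

```python
# Sketch: a recognizer for L(M1) ∪ L(M2) by lockstep simulation.
def union_recognizer(M1: TM, M2: TM, x: str, max_steps: int = 10_000) -> str:
    c1, c2 = initial_config(M1, x), initial_config(M2, x)
    for _ in range(max_steps):
        if c1.state == M1.qacc or c2.state == M2.qacc:
            return "accept"                   # x ∈ L(M1) ∪ L(M2)
        if c1.state == M1.qrej and c2.state == M2.qrej:
            return "reject"                   # both machines explicitly reject
        if c1.state not in (M1.qacc, M1.qrej):
            c1 = step(M1, c1)
        if c2.state not in (M2.qacc, M2.qrej):
            c2 = step(M2, c2)
    return "no verdict"                       # one or both machines may be looping
```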

  26. Σ0 closed under complement Proof Suppose L ∈ Σ0. Then L = L(M1) for some decider M1. We want to show there's a decider M2 such that L(M2) = Lᶜ, whence (!) Lᶜ ∈ Σ0. This is easy: we just set M2 to be M1 with qacc and qrej swapped. This works because M1 never loops (forever) on any input; it always reaches either qacc or qrej, and so M2 does the opposite thing from M1 in terms of accepting or rejecting the string.
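The construction in this proof is short enough to sketch directly: M2 is M1 with its accept and reject states swapped, using the hypothetical TM structure from earlier.

```python
# Sketch of the proof's construction: swap qacc and qrej of a decider M1.
# Because M1 halts on every input, M2 also halts on every input, and it accepts
# exactly the strings M1 rejects, so L(M2) = L(M1)ᶜ.
def complement_decider(M1: TM) -> TM:
    return TM(Q=M1.Q, Sigma=M1.Sigma, Gamma=M1.Gamma, delta=M1.delta,
              q0=M1.q0, qacc=M1.qrej, qrej=M1.qacc)
```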

  27. Proof does not work for Σ1 • Without the guarantee that M1 never loops on any input, this conversion does not produce an M2 such that L(M2) = L(M1)ᶜ. • Because if x ∉ L(M1), we want x ∈ L(M2), but if M1 loops on x, then M2 loops on x as well and never accepts it...

  28. Preview: Σ1 is not closed under complement • The previous discussion just showed that the attempted proof of closure under complement failed • It didn't show that Σ1 is not closed under complement • However it is in fact true that Σ1 is not closed under complement

  29. Preview: a non-recognizable L • This all means that some L exists that is not recognized by any TM • What does it look like? • Is it important? YES, because of Church-Turing Thesis • Intuitive notion of algorithms = Turing machine algorithms • To be defined and discussed in Section 3.3
