
Turing Machines


Motivation

Our main goal in this course is to analyze problems and categorize them according to their complexity.


Motivation

We ask questions such as “how much time does it take to compute something?”


Motivation

But in order to answer them we must first have a computational model to relate to.


Introduction

  • Objectives:

    • To introduce the computational model called “Turing Machine”.

  • Overview:

    • Deterministic Turing machines

    • Multi-tape Turing machines

    • Non-deterministic Turing machines

    • The Church-Turing thesis

    • Complexity classes as bounds on resources required by TMs.


Schematic of a Turing Machine

[Figure: a read/write head that moves left or right over an infinite tape of cells, reading and writing one symbol per cell.]


SIP 128-129

Formal Definition of a TM

A deterministic Turing machine is a 7-tuple consisting of the following objects.


Formal Definition of a TM

1. Q - a finite set of states.


Formal Definition of a TM

2. Σ - the input alphabet: a finite set not containing the blank symbol _ (example input symbols: a, b, c, d).

Formal Definition of a TM

3. Γ - the tape alphabet, where Σ ⊆ Γ and _ ∈ Γ (example tape symbols: a, b, c, d, A, B, C, _).


Formal Definition of a TM

4. :QQ{L,R} - the transition function.

q0

q0

a


Formal Definition of a TM

5. q0 ∈ Q - the start state.


Formal Definition of a TM

6. qacceptQ - the accept state.


Formal Definition of a TM

7. qrejectQ - the reject state. qrejectqaccept.


Formal Definition of a TM Summary

1. Q - the set of states.

2. Σ - the input alphabet.

3. Γ - the tape alphabet.

4. δ: Q×Γ → Q×Γ×{L,R} - the transition function.

5. q0 - the start state.

6. qaccept ∈ Q - the accept state.

7. qreject ∈ Q - the reject state.

(A small simulator sketch of this 7-tuple follows below.)


Computations: The Start Configuration

In the start configuration the machine is in the start state q0, the head is on the leftmost square, and the input is written on the tape starting from the left.


Computations: Example

δ(q0,a) = (q0,b,R): reading a in state q0, the machine writes b, stays in state q0, and moves its head one square to the right.

Note: the head cannot move to the left of the leftmost square!


Computations: Accepting Configuration

If the computation ever enters the accept state qaccept, it halts.

Computations: Rejecting Configuration

If the computation ever enters the reject state qreject, it also halts.

Note: the machine may loop forever and never reach either of these two states!


The Language a TM Accepts

  • A Turing Machine accepts its input, if it reaches an accepting configuration.

  • The set of inputs it accepts is called its language.


Configurations

A configuration consists of: the state, the content of the tape, and the position of the head.

  • How many distinct configurations may a Turing machine that uses N tape cells have?

  • At most |Γ|^N (tape contents) × N (head positions) × |Q| (states); a worked instance follows below.
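A worked instance of this count, written out in LaTeX (the concrete numbers are illustrative, not from the slides):

```latex
\[
\#\text{configurations} \;\le\; |Q| \cdot N \cdot |\Gamma|^{N} \;=\; 2^{O(N)}
\quad\text{for fixed } |Q|, |\Gamma|.
\]
For instance, a machine with $|Q| = 7$ states and $|\Gamma| = 7$ tape symbols
that uses $N = 10$ cells has at most
$7 \cdot 10 \cdot 7^{10} \approx 2 \cdot 10^{10}$ distinct configurations.
```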


Building a TM for a Simple Language

L = { a^n b^n c^n | n ≥ 0 }

Examples:

Member of L: aaabbbccc

Non-member of L: aaabbcccc


The Turing Machine

1. Q = {q0,q1,q2,q3,q4,qaccept,qreject}

2. Σ = {a,b,c}

3. Γ = {a,b,c,_,X,Y,Z}

4. δ will be specified shortly.

5. q0 - the start state.

6. qaccept ∈ Q - the accept state.

7. qreject ∈ Q - the reject state.


aa, R

bb, R

YY, R

ZZ, R

q1

bY,R

q2

aX,R

q0

cZ,L

XX, R

q3

bb, L aa, LYY, L

YY, R

qac

q4

ZZ, L

__, R

YY, R ZZ, R

The Transitions Function

aa, R

bb, R

YY, R

ZZ, R

q1

bY,R

transitions not specified here yield qreject

q2

aX,R

q0

cZ,L

XX, R

__, R

q3

bb, L aa, LYY, L

YY, R

qac

q4

ZZ, L

__, R

YY, R ZZ, R


aa, R YY, R

Demonstration

bb, R ZZ, R

q1

q1

q1

bY,R

aX,R

q2

q2

q2

q0

q0

q0

q0

q0

cZ,L

XX, R

__, R

q3

q3

q3

ZZ, L bb, L YY, L aa, L

YY, R

qac

qac

__, R

q4

q4

q4

YY, R ZZ, R

. . .

X

a

Y

b

Z

c

_

_


Equivalent Models

  • Deterministic Turing machines are extremely powerful.

  • We can simulate many other models by them and vice-versa with only a polynomial loss of efficiency.

  • Next we’ll see an example of such a model.


SIP 136-138

Multi-Tape Turing Machines

The machine has k tapes, each with its own read/write head; the input is written on the first tape.


Multi-Tape Turing Machines

1. Q - the set of states.

2. Σ - the input alphabet.

3. Γ - the tape alphabet.

4. δ: Q×Γ^k → Q×Γ^k×({L,R})^k - the transition function, where k (the number of tapes) is some constant.

5. q0 - the start state.

6. qaccept ∈ Q - the accept state.

7. qreject ∈ Q - the reject state.


Robustness

  • Multi-tape machines are polynomially equivalent to single-tape machines.

  • We can state a much stronger claim concerning the robustness of the Turing machine model:


The Church-Turing Thesis

Intuitive notion of algorithms ≡ Turing machine algorithms


What’s Next?

  • We proceed with a less realistic computational model,

  • which can be simulated by DTMs,

  • however, with an exponential loss of efficiency.


Non-deterministic Turing Machines

1. Q - the set of states.

2. Σ - the input alphabet.

3. Γ - the tape alphabet.

4. δ: Q×Γ → P(Q×Γ×{L,R}) - the transition function, where P denotes the power set: P(A) = { B | B ⊆ A }.

5. q0 - the start state.

6. qaccept ∈ Q - the accept state.

7. qreject ∈ Q - the reject state.


Computations

[Figure: a deterministic computation is a single path of configurations; a non-deterministic computation is a tree of configurations branching over time.]

The machine accepts if some branch reaches an accepting configuration.

Note: the size of the tree is exponential in its height.


Alternative Description

One can think of a non-deterministic machine as always guessing correctly the choice that ultimately leads to acceptance.


Example

  • A non-deterministic TM which checks if two vertices are connected in a graph may simply guess a path between them.

  • Now it only needs to verify that this is a valid path (a sketch of such a check appears below).
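A minimal sketch, in Python, of the deterministic verification step; the graph encoding (adjacency sets) and the function name are assumptions made for the example:

```python
# "Guess and verify": the non-deterministic machine guesses a path, and this
# deterministic check verifies the guess in polynomial time.
def is_valid_path(graph, path, s, t):
    """True iff `path` is a walk in `graph` (adjacency sets) from s to t."""
    if not path or path[0] != s or path[-1] != t:
        return False
    return all(v in graph[u] for u, v in zip(path, path[1:]))

G = {1: {2}, 2: {3}, 3: {1, 4}, 4: set()}
print(is_valid_path(G, [1, 2, 3, 4], 1, 4))   # True: a certificate that 1 and 4 are connected
```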


SIP 138-140

Simulating a Non-deterministic machine by a Deterministic One

  • We’ll describe a deterministic 3-tape Turing machine which simulates a given non-deterministic machine.


Simulating a Non-deterministic machine by a Deterministic One

[Figure: three tapes - the input tape, the simulation tape, and the address tape.]


Addresses

[Figure: the non-deterministic computation tree with its branches numbered. Each node is addressed by the sequence of choices that leads to it, e.g. 111 or 1312; each digit selects one of the possible non-deterministic choices at that level.]


Simulation

1. Write 111…1 on the address tape.

2. Copy the input to the simulation tape.

3. Simulate the NTM, using the choices dictated by the address tape (if valid).

4. If it accepted – accept.

5. Replace the address string with the lexicographically next string. If there is no such string – reject.

6. Go to step 2.

(A Python sketch of this loop follows below.)
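A minimal Python sketch of this enumeration, assuming the non-deterministic δ is given as a dictionary mapping (state, symbol) to a list of choices; the function names and the depth bound are illustrative, and the real 3-tape construction of course never stores the whole tree:

```python
from itertools import product

def run_branch(ndelta, q0, q_accept, w, address):
    """Follow one branch of the computation tree, choice by choice as dictated
    by `address` (a tuple of 1-based choice indices); return True iff this
    branch reaches the accept state within len(address) steps."""
    tape, state, head = dict(enumerate(w)), q0, 0
    for choice in address:
        if state == q_accept:
            return True
        options = ndelta.get((state, tape.get(head, "_")), [])
        if choice > len(options):              # the address is not valid for this branch
            return False
        state, written, move = options[choice - 1]
        tape[head] = written
        head = max(0, head + (1 if move == "R" else -1))
    return state == q_accept

def simulate_ntm(ndelta, q0, q_accept, w, max_depth=20):
    """Deterministic simulation: enumerate address strings by length, and
    lexicographically within each length, as in the 3-tape simulation above."""
    fanout = max((len(v) for v in ndelta.values()), default=1)
    for depth in range(max_depth + 1):
        for address in product(range(1, fanout + 1), repeat=depth):
            if run_branch(ndelta, q0, q_accept, w, address):
                return True
    return False      # no accepting branch found within the depth bound
```

As the slides note, this enumeration visits exponentially many branches, which is exactly the exponential loss of efficiency.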


Complexity Classes

  • Now that we have a formal computational model,

  • we may begin to categorize problems

  • according to the resources required by the TMs that compute them.


Time Complexity

Definition: Let t:nn be a function.

TIME(t(n))={L | L is a language decidable by a O(t(n)) deterministic TM}

NTIME(t(n))={L | L is a language decidable by a O(t(n)) non-deterministic TM}


Polynomial Time

Definition: P = ∪_k TIME(n^k)

Example: { a^n b^n c^n | n ≥ 0 } ∈ P


Which are in P and Which aren’t?

  • The Towers of Hanoi

  • The Halting Problem

  • Minimum Spanning Tree


Non-Deterministic Polynomial Time

Definition: NP = ∪_k NTIME(n^k)

Examples:

  • the TSP problem

  • the ILP problem


Observation

  • Claim:PNP

  • Proof: A deterministic Turing machine is a special case of non-deterministic Turing machines. 


Exponential Time

Definition: EXPTIME = ∪_k TIME(2^(n^k))


Space Complexity

Definition: Let f:nn be a function.

SPACE(f(n))={L | L is a language decidable by an O(f(n)) space deterministic TM}

NSPACE(f(n))={L | L is a language decidable by an O(f(n)) space non-deterministic TM}


3-Tape Machines

[Figure: three tapes - a read-only input tape, a read/write work tape, and a write-only output tape.]

Only the size of the work tape is counted for complexity purposes.


Logarithmic Space

Definition: L = SPACE(log n)


Polynomial Space

Definition: PSPACE = ∪_k SPACE(n^k)

Example: { a^n b^n c^n | n ≥ 0 } ∈ PSPACE


Observation

  • Claim:PPSPACE

  • Proof: A TM which runs in time t(n) can use at most t(n) space. 


Observation

  • Claim: PSPACE ⊆ EXPTIME

  • Proof:

    • A machine which uses polynomial space has at most an exponential number of configurations (remember the configuration count?).

    • Since a deterministic machine that halts may not repeat a configuration,

    • its running time is bounded by the number of possible configurations (the bound is spelled out below).
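Spelling out the bound in LaTeX, using the configuration count from earlier (k indexes the polynomial space bound):

```latex
\[
\#\text{configurations} \;\le\; |Q|\cdot n^{k}\cdot |\Gamma|^{\,n^{k}}
\;=\; 2^{O(n^{k})},
\]
so a halting deterministic machine that uses space $n^{k}$ makes at most
$2^{O(n^{k})}$ steps, giving
$\mathrm{SPACE}(n^{k}) \subseteq \mathrm{TIME}\!\big(2^{O(n^{k})}\big) \subseteq \mathrm{EXPTIME}.$
```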


Conjectured Relations Among Deterministic Classes

[Figure: nested classes P ⊆ PSPACE ⊆ EXPTIME.]


Savitch’s Theorem

Theorem:S(n) ≥ log(n) NSPACE(S(n))  SPACE(S(n)2)


Immerman’s Theorem

Theorem:S(n) ≥ log(n)

NSPACE(s(n))

=co-NSPACE(s(n))


Speed-up and Compression


When is More Better ?

  • More time or space will allow you to compute more

  • This is not always true

    • Constant factor speed-up

    • Non-constructible time/space bounds: Gap theorems.

  • Compression theorems for constructible bounds


Constant Factor Speed-up

A Turing machine alphabet is easily compressed by coding k symbols into one symbol of a larger alphabet:

S^k → S'   (e.g. S' = S^3 for k = 3)

(A small packing sketch follows below.)
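A minimal Python sketch of the packing step, grouping k consecutive tape symbols into one tuple-symbol of the larger alphabet (function names and the padding choice are illustrative):

```python
BLANK = "_"

def compress(tape, k=3):
    """Pack k consecutive symbols into one symbol of the larger alphabet S' = S^k."""
    padded = list(tape) + [BLANK] * (-len(tape) % k)      # pad to a multiple of k
    return [tuple(padded[i:i + k]) for i in range(0, len(padded), k)]

def decompress(packed):
    """Inverse mapping back to the original alphabet."""
    return [s for block in packed for s in block]

# Example: 9 cells become 3 cells over the alphabet S^3.
print(compress(list("aabbbccc_")))   # [('a','a','b'), ('b','b','c'), ('c','c','_')]
```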


Constant Factor Speed-up

This yields an automatic constant factor speed-up in space:

Space( S(n) ) = Space( S(n)/k )

Snags: the input itself is not compressed! This may require additional steps and another worktape. For the single-tape model it shows a space speed-up only for ω(n) bounds.

And what about Time?


The k for 6 solution

Phases: PREPARE, then COMPUTE & UPDATE.

Snags: you must compress the alphabet 6 times more densely than you expected. The input must be preprocessed, so this works nicely only for time t(n) = ω(n) (even ω(n²) in the single-tape model).


The direct solution

[Figure: tape blocks numbered 1172 … 1177.]

Encode two blocks in the finite control of the simulating Turing machine, and externally scan the block adjacent to the block scanned internally.

Now one can always simulate k steps in one step and still preserve the above invariant after every step.


Time speed-up

THEOREM: Time( t(n) ) = Time( t(n)/k ) for fixed k, as long as t(n)/k > (1+ε)·n.

This doesn’t work for the single-tape model; there the input compression already requires time Ω(n²).

So in order that Time( t(n) ) ⊊ Time( G(t(n)) ), it is necessary that G(m) = ω(m). This is, however, not sufficient…


Constructible Bounds

t(n) is time constructible when some TM, on input n (in binary), can initialize a binary counter with value t(n) in time < t(n).

s(n) is space constructible when some TM, on input x of length n, can mark a block of s(n) tape cells without ever exceeding this block.

Against constructible bounds, effective diagonalization is possible.


SPACE COMPRESSION: Downward Diagonalization

If S1(n) > log(n) is space constructible and S2(n) = o(S1(n)), then Space( S2(n) ) ⊊ Space( S1(n) ).

On input i#x:

1. Mark S1(|i#x|) tape cells.

2. Simulate Mi( i#x ) within this block: if the simulation leaves the block, accept; if the simulation cycles, accept (cycle detection by counting is OK since S1(n) > log(n)).

3. If the simulation terminates, do the opposite: if it accepts, reject, and accept otherwise.


SPACE COMPRESSION: Downward Diagonalization

This program runs in space S1(n) by construction. The result can’t be computed by any device in space S2(n): assume Mj does it; then on input j#x, for x sufficiently large, cases 1 and 2 won’t occur, and therefore Mj( j#x ) accepts iff it rejects…

CONTRADICTION!


TIME COMPRESSION: Downward Diagonalization

A similar result for time compression is affected by the overhead required for maintaining the counter ticking down from T1(n) to 0. If we assume that this overhead is logarithmic, the result becomes:

If T1(n) > n is time constructible and T2(n) = o(T1(n)), then Time( T2(n) ) ⊊ Time( T1(n)·log(T1(n)) ).


TIME COMPRESSION: Downward Diagonalization

Improvements:

  • Add an extra tape. Storing the clock on it makes the overhead vanish.

  • With at least two tapes: divide the clock into a head and a tail, and move the head only when the tail underflows. This reduces the overhead to loglog(n); the trick extends, yielding log*(n) overhead (W. Paul).

  • Use a distributed, super-redundant clock; the overhead vanishes (Fürer 1982).


Compression in General

The diagonalization argument is generic; the minimal overhead determining the size of the separation gaps is fully machine dependent.

It extends to the world of nondeterministic computation, but the proofs become rather complex (Seiferas et al. for the TM time measure).

For the RAM world, diagonalization results are similar; constant factor speed-up is difficult.


Constructibility ?

  • Reasonable bounds turn out to be constructible:

    • polynomials,

    • simple exponentials,

    • polylog functions

    • closed under sum & product

    • not closed under difference!


Constructibility ?

  • Many theorems are proven assuming constructibility of bounds

  • Some theorems extend to the general case, using the trick of incremental resources:

    • Savitch Theorem

    • Hopcroft, Paul, Valiant Theorem

    • resulting bounds are weak (terminating computations only)


Diagonalization

Halting Problem is Undecidable

  • Assume a TM M that, given a TM M’ and input x, decides if M’ halts on x.

  • Construct M”, which on input <M’> runs M on (M’, <M’>) and does the opposite: it loops forever if M answers “yes” and halts if M answers “no”.

  • Run M” on its own representation <M”>: M” halts on <M”> iff it does not – contradiction.

(A conceptual sketch of this construction follows below.)
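A conceptual sketch of the construction in Python; `halts` stands for the assumed decider M and is purely hypothetical - no such total function can exist, which is exactly what the argument shows:

```python
def halts(program_source: str, input_str: str) -> bool:
    """Hypothetical decider M: returns True iff the program described by
    `program_source` halts on `input_str`. Assumed to exist for contradiction."""
    raise NotImplementedError("no such decider exists - that is the point")

def m_double_prime(program_source: str) -> None:
    """The diagonal machine M'': does the opposite of what `halts` predicts
    about running `program_source` on its own source."""
    if halts(program_source, program_source):
        while True:        # M says it halts, so we loop forever
            pass
    return                 # M says it loops, so we halt immediately

# Feeding M'' its own source yields the contradiction:
# m_double_prime(<source of m_double_prime>) halts iff it does not halt.
```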


Hierarchy Theorems


Space Hierarchy Theorem: if f(n) is space constructible, then SPACE(o(f(n))) ⊊ SPACE(f(n)).




Diagonalization

Thm: P ⊊ EXPTIME

Proof: We construct a language L ∈ EXPTIME; however, L is not accepted by any TM running in polynomial time.

Let

L = { x | x = <M>#1^c#1^e#(01)*, M doesn’t accept x within c·|x|^e time }


P vs EXPTIME

L = { x | x = <M>#1^c#1^e#(01)*, M doesn’t accept x within c·|x|^e time }

Lemma: L ∈ EXPTIME

Proof: in fact, membership can be decided in roughly |x|·|x|^|x| time, by simulating M on x for c·|x|^e ≤ |x|·|x|^|x| steps.

Lemma: L ∉ P

Proof: assume, by way of contradiction, a TM M that accepts every x ∈ L in time c·|x|^e. Run it on the string <M>#1^c#1^e#: by the definition of L, M accepts this string within c·|x|^e time iff it does not – a contradiction.


Conjectured Relation Between P, NP and co-NP

[Figure: P contained in the intersection of NP and co-NP.]

Def: co-NP = { Σ* \ L | L ∈ NP }


Summary

  • We presented two main computational models: deterministic Turing machines and non-deterministic Turing machines.

  • We simulated NTM by DTM with an exponential loss of efficiency.


Summary

  • The Church-Turing thesis: Deterministic Turing Machines are equivalent to our intuitive notion of algorithms.

  • Keeping it in mind, we’ll usually describe algorithms in pseudo-code rather than as TMs.


Summary

  • Using Turing machines we’ve defined various complexity classes:

    • P – Polynomial time

    • NP – Non-deterministic Polynomial time

    • EXPTIME – Exponential time

    • L – Logarithmic space

    • NL – Non-deterministic Logarithmic space

    • PSPACE – Polynomial Space

  • And discussed the relations between them.


Tiling Games

Tile type: a square divided into 4 coloured triangles. An infinite stock of each type is available. No rotations or reflections are allowed.

Tiling: a covering of a region of the plane such that adjacent tiles have matching colours.

Boundary condition: colours given along (part of) the edge of the region, or some given tile at some given position.


Turing Machine

[Figure: a tape of symbols with a read/write head and a finite control.]

Q: states; S: tape symbols.

Program: P ⊆ (Q × S) × (Q × S × {L,0,R})

(q,s,q’,s’,m) ∈ P denotes the instruction: when reading s in state q, print s’, perform move m, and go to state q’. Non-determinism is allowed!


Computations

Configuration c: a finite string in S*·(Q×S)·S* (the tape contents with the state/scanned-symbol pair inserted at the head position).

Computation step c → c’: obtained by performing an instruction in P.

Computation: a sequence of steps.

Final configuration: no instruction applicable.

Initial configuration: start state & leftmost symbol scanned.

Complete computation: a computation starting in the initial configuration and terminating in a final one.

Accepting / rejecting computations…


Turing Machines and Tilings

Idea: tile a region and let successive colour sequences along rows correspond to successive configurations…

[Figure: the tile types -

  • symbol-passing tiles: carry a tape symbol s unchanged from the bottom edge to the top edge;

  • state-accepting tiles: receive a state q arriving from the side over a symbol s and emit the pair qs on the top edge;

  • instruction/step tiles for (q,s,q’,s’,L), (q,s,q’,s’,0) and (q,s,q’,s’,R): bottom edge qs, top edge s’ (or q’s’ for a 0-move), passing the new state q’ to the left or right neighbour.]

SNAG: pairs of phantom heads appearing out of nowhere…

Solution: right- and left-moving states…

(A sketch generating such tile types from a program follows below.)
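A small Python sketch that generates tile types of this flavour from a quintuple program; the edge-colour scheme used here (tuples (top, right, bottom, left) with a "." marker meaning "no head signal") is one standard choice made for illustration, not necessarily the colours drawn on the slide:

```python
NO_HEAD = "."     # side-edge colour meaning "no head signal crosses here"

def tiles_for(P, S):
    """Tile types (top, right, bottom, left) for program P over tape symbols S."""
    tiles = set()
    for s in S:                                    # symbol-passing tiles: copy s upward
        tiles.add((s, NO_HEAD, s, NO_HEAD))
    for (q, s, q2, s2, m) in P:                    # instruction / step tiles
        if m == "R":                               # head leaves to the right...
            tiles.add((s2, ("R", q2), (q, s), NO_HEAD))
            for t in S:                            # ...and the right neighbour receives it
                tiles.add(((q2, t), NO_HEAD, t, ("R", q2)))
        elif m == "L":                             # head leaves to the left...
            tiles.add((s2, NO_HEAD, (q, s), ("L", q2)))
            for t in S:                            # ...and the left neighbour receives it
                tiles.add(((q2, t), ("L", q2), t, NO_HEAD))
        else:                                      # m == "0": the head stays put
            tiles.add(((q2, s2), NO_HEAD, (q, s), NO_HEAD))
    return tiles
```

Adjacent tiles must agree on the colour of their shared edge, so row t+1 of any tiling is forced to spell a successor configuration of row t.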


Example Turing Machine

Successor machine: adds 1 to a binary integer (here 11 + 1 = 12). _ denotes the empty halt state.

K = {q, r, _}

S = {0, 1, B}

P = { (q,0,q,0,R),
      (q,1,q,1,R),
      (q,B,r,B,L),
      (r,0,_,1,0),
      (r,1,r,0,L),
      (r,B,_,1,0) }

Computation on input 01011 (= 11):

q0 1 0 1 1 B
0 q1 0 1 1 B
0 1 q0 1 1 B
0 1 0 q1 1 B
0 1 0 1 q1 B
0 1 0 1 1 qB
0 1 0 1 r1 B
0 1 0 r1 0 B
0 1 r0 0 0 B
0 1 1 0 0 B

(A small Python run of this program follows below.)
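A self-contained Python sketch that runs this quintuple program and prints the configuration trace above (the execution loop is an illustrative rendering of the P ⊆ (Q×S)×(Q×S×{L,0,R}) format):

```python
# Successor machine from the slide: (state, symbol) -> (new state, new symbol, move).
P = {("q", "0"): ("q", "0", "R"),
     ("q", "1"): ("q", "1", "R"),
     ("q", "B"): ("r", "B", "L"),
     ("r", "0"): ("_", "1", "0"),
     ("r", "1"): ("r", "0", "L"),
     ("r", "B"): ("_", "1", "0")}

def run(program, tape, state="q", head=0):
    tape = list(tape)
    while (state, tape[head]) in program:      # final configuration: no instruction applies
        print(" ".join(tape[:head] + [state + tape[head]] + tape[head + 1:]))
        new_state, new_symbol, move = program[(state, tape[head])]
        state, tape[head] = new_state, new_symbol
        head += {"R": 1, "L": -1, "0": 0}[move]
    return "".join(tape)

print(run(P, "01011B"))   # final tape: 01100B, i.e. 11 + 1 = 12
```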


Reduction to Tilings

[Figure: the ten configurations of the successor computation above, stacked as successive rows of a tiling.]


Implementation in Hardware


Tiling reductions

[Figure: a rectangular region with the initial configuration along the bottom row, the (by construction unique) accepting configuration along the top row, and blank borders; the horizontal direction is space, the vertical direction is time.]

Program → tile types
Input → boundary condition
Space → width of the region
Time → height of the region


Tiling Problems

Square Tiling: tiling a given square with a boundary condition. Complete for NP.

Corridor Tiling: tiling a rectangle with boundary conditions on entrance and exit (the length is undetermined). Complete for PSPACE.

Origin Constrained Tiling: tiling the entire plane with a given tile at the origin. Complete for co-RE, hence undecidable.

Tiling: tiling the entire plane without constraints. Still complete for co-RE (Wang/Berger’s Theorem). Hard to prove!

