
The meaning of it all!

The meaning of it all! The role of finite automata and grammars in compiler design. Compiler-compilers! There are many applications of automata and grammars inside and outside computer science; the main applications within computer science are in the area of compiler design.



Presentation Transcript


  1. The meaning of it all! The role of finite automata and grammars in compiler design

  2. Compiler-Compilers! There are many applications of automata and grammars inside and outside computer science, the main applications in computer science being in the area of compiler design. Suppose we have designed (on paper) a new programming language with nice features. We have worked out its syntax and the way its constructs should work. Now, all we need is a compiler for this language. It’s too complex to write a compiler from scratch! What we can do instead is use theoretical tools like automata and grammars, which recognize/generate strings of symbols of various kinds*, to formally specify the syntax of the new programming language. Such a formal specification (plus other details) can be used by “magical programs” known as compiler-compilers to automatically generate a compiler for the language! Without these theoretical tools we would have to spell out the syntax of the language in, say, plain English, which would not be precise enough; a program would find it hard to “understand” such a description well enough to generate a compiler on its own. * A program can be viewed as a (very long!) string that adheres to certain rules dictated by the programming language.

  3. Admiral Grace Hopper, pioneer of compiler design

  4. Lexical Analysis The lexical analyzer converts the raw stream of characters for(i=0;i<=10;i++) into a stream of tokens: for ( i = 0 ; i <= 10 ; i ++ ). Here for is a keyword, i is an identifier, and 0 and 10 are constants; names and values such as i and 10 are recorded in the symbol table.
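The character-stream-to-token-stream step above can be sketched with a small regular-expression scanner. This is an illustrative simplification, not the slide’s method: the token classes and patterns below cover only this one statement, not C’s full lexical grammar.

```python
import re

# Illustrative token patterns. Order matters: the keyword pattern is tried
# before the identifier pattern, and "<=" before "<" inside OPERATOR.
TOKEN_SPEC = [
    ("KEYWORD",    r"\bfor\b"),
    ("IDENTIFIER", r"[A-Za-z][A-Za-z0-9]*"),
    ("CONSTANT",   r"[0-9]+"),
    ("OPERATOR",   r"\+\+|<=|[=<();]"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Turn a raw character stream into a stream of (kind, lexeme) tokens."""
    return [(m.lastgroup, m.group()) for m in MASTER.finditer(source)]

tokens = tokenize("for(i=0;i<=10;i++)")
# tokens[0] is ("KEYWORD", "for"); tokens[2] is ("IDENTIFIER", "i")
```

A real lexer would also report unrecognized characters and enter identifiers and constants into a symbol table as it goes.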

  5. Parsing ‘Parse’ – to relate. Parsing relates the tokens of a statement to one another by building a parse tree. For the FOR-statement for ( i = 0 ; i <= 10 ; i ++ ), the root Statement node expands to a FOR-statement: for ( exp ; exp ; exp ) statement. The first exp is an assignment (Assign_stmt: id i = const 0), the second a comparison (id i <= const 10), and the third an increment (id i ++).

  6. Finite state automata as lexical analysers [Two transition diagrams, flattened in transcription.] Automaton for recognizing keywords: states 0–18 spell out the keywords WHILE, FOR, IF and ELSE letter by letter, accepting when a character other than a letter/digit follows. Automaton for recognizing identifiers: a letter takes state 0 to state 1, which loops on letters and digits and accepts on any character other than a letter/digit.

  7. Converting a finite state automaton into a computer program Automaton for recognizing identifiers: a letter takes state A to state B; B loops on letters and digits, and any other character takes it to the accepting state C. As code: A: Read next_char; if next_char is a letter goto B, else FAIL( ). B: Read next_char; if next_char is either a letter or a digit goto B, else goto C. FAIL( ) is a function that “puts back” the character just read and starts up the next transition diagram. Note: instead of using “A” and “B” as labels for GOTO statements, one could use them as names of individual functions/procedures that can be invoked.
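The same automaton can be simulated directly in a few lines. This sketch classifies a whole string rather than scanning inside a larger stream, so there is no character to “put back”; that simplification is mine, not the slide’s.

```python
def is_identifier(s):
    # State A: expect a letter; state B: loop on letters/digits.
    state = "A"
    for ch in s:
        if state == "A":
            if ch.isalpha():
                state = "B"
            else:
                return False   # FAIL(): first character must be a letter
        else:                  # state B
            if not ch.isalnum():
                return False   # a delimiter inside the string: not a bare identifier
    return state == "B"        # accept only if we ended up in B with input exhausted
```

Note that keywords such as for also pass this test; in the slide’s scheme, screening keywords out of identifiers is the job of the separate keyword automaton (or a symbol-table lookup).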

  8. Grammars as syntax specification tools Finite state automata are used to describe tokens. Grammars are much more “expressive” than finite state automata and can be used to describe more complicated syntactical structures in a program---for instance, the syntax of a FOR statement in C language. Grammars only describe/generate strings. We need a process which, given an input string (a statement in a program, say), pronounces whether it is derivable from a given grammar or not. Such a process is known as parsing.

  9. Types of Parsing Grammar: S → aAcBe, A → Ab | b, B → d. Input string: a b b c d e. (i) Bottom up: reducing the input string to the start symbol S. We take a “chunk” of the input string and REDUCE it to (replace it with) the symbol on the LHS of a production rule. In other words, the parse tree is constructed by beginning at the leaves and working up towards the root. (ii) Top down: “expanding” the start symbol down to the input string. We EXPAND the start symbol (according to the production rules of the given grammar), and subsequently every non-terminal symbol that occurs in the “expansion”*, till we arrive at the input string. (* Technically, it is called a sentential form.)

  10. Shift-Reduce: a bottom up parsing technique Grammar: S → aAcBe, A → Ab | b, B → d. Input: a b b c d e $. We shift symbols from the input string (from left to right) onto a stack, so that the “chunk” of symbols (matching the RHS of a production) which is to be reduced to the corresponding LHS will eventually appear on top of the stack. (The chunk getting reduced is referred to as the “handle”.) [The stack snapshots in the slide show the stack growing from $ a through $ aA, $ aAc and $ aAcB to $ S as successive handles are reduced.]
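The shifts and reductions for this grammar can be simulated with a naive loop. The greedy try-longest-RHS-first rule used here is my assumption, and it happens to pick the right handle for this grammar and input; real parsers (e.g. LR parsers) decide shift vs. reduce from precomputed tables instead.

```python
# The slide's grammar: S -> aAcBe, A -> Ab | b, B -> d
GRAMMAR = [("S", "aAcBe"), ("A", "Ab"), ("A", "b"), ("B", "d")]

def shift_reduce(tokens):
    stack, trace = [], []
    for tok in tokens:
        stack.append(tok)                      # SHIFT
        reduced = True
        while reduced:                         # REDUCE while a handle is on top
            reduced = False
            # Try longer right-hand sides first, so "Ab" wins over "b"
            for lhs, rhs in sorted(GRAMMAR, key=lambda p: -len(p[1])):
                if "".join(stack[-len(rhs):]) == rhs:
                    del stack[-len(rhs):]      # pop the handle ...
                    stack.append(lhs)          # ... and push the LHS
                    trace.append(f"{rhs} -> {lhs}")
                    reduced = True
                    break
    return "".join(stack), trace

final, trace = shift_reduce("abbcde")
# final == "S": the input was reduced all the way to the start symbol
```

The trace it records is exactly the rightmost derivation run in reverse, which is the next slide’s point.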

  11. What is a handle? A substring of the input string that matches the RHS of a production, and whose replacement (by the corresponding LHS) would eventually lead to a reduction to the start symbol, is called a handle. A Right Most Derivation (RMD) with the grammar S → aAcBe, A → Ab | b, B → d: S ⇒ aAcBe ⇒ aAcde ⇒ aAbcde ⇒ abbcde. In an RMD, non-terminal symbols on the right get expanded before those on the left. When we do this in reverse, though (now reducing symbols, not expanding), pieces of the string on the left get reduced before those on the right. Bottom up parsing can be viewed as “RMD in reverse direction”.

  12. The problem with discovering handles Discovering the handle may not always be easy! There may be more than one substring on top of the stack that matches the RHS of a production. With the grammar S → aAcBe, A → Ab | b, B → d and input a b b c d e, the stack at one point holds a A b: do we reduce the b (by A → b) or the Ab (by A → Ab)? If we pick the b, the stack becomes a A A, and there is no way aAAcde can be reduced to S. (When we make an incorrect choice of handle we get stuck half-way through, before we can arrive at the start symbol.)

  13. The problem with discovering handles In the exercises we did, we decided for ourselves when to shift and when to reduce symbols (using our cleverness!). However, these decisions can (and must) be made automatically by the parser program, in tune with the given grammar. The well-known LR parser does exactly this, though its workings are beyond our present scope.

  14. Top down parsing Formal: construct the parse tree (for the input) by beginning at the root and creating the nodes (of the tree) in preorder. In other words, it’s an attempt to find a Left Most Derivation for the input string. Informal: instead of starting with the input string and reducing it to the start symbol (by replacing “chunks” of it with non-terminal symbols), we begin with the start symbol itself and ask: “How can I expand this in order to arrive (eventually) at the input string?” We ask the same question for every non-terminal symbol occurring in the resulting expansions. We choose an appropriate expansion of a given non-terminal by glancing at the input string, i.e. by taking cues from the symbol being scanned (and also the next few symbols) in the input.

  15. Top down parsing: an example Grammar: S → cAd, A → ab | a. Input: c a d. (i) Start with S. Only one expansion is possible: S → cAd. The c matches the first symbol in the input, so move on! (ii) Now, how to expand A? Try every expansion one by one! Try A → ab: the a matches, (iii) but the b mismatches the d in the input, so try another expansion. (iv) Backtrack and try A → a: the a matches, the final d matches, and we’re done!

  16. Top down parsing: an example A program to do top-down parsing might use a separate procedure for every non-terminal.

function S( ) {
  if input_symbol = ‘c’ then {
    ADVANCE( );
    if A( ) then {
      if input_symbol = ‘d’ then { ADVANCE( ); return TRUE; }
    }
  }
  return FALSE;
}

function A( ) {
  isave = input_pointer;
  if input_symbol = ‘a’ then {        /* Try first expansion: A → ab */
    ADVANCE( );
    if input_symbol = ‘b’ then { ADVANCE( ); return TRUE; }
  }
  input_pointer = isave;              /* Try second expansion: A → a */
  if input_symbol = ‘a’ then { ADVANCE( ); return TRUE; }
  return FALSE;
}
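The two procedures can be rendered as runnable Python, with the input pointer and ADVANCE( ) made explicit. The helper names peek and eat are illustrative additions, not part of the slide’s pseudocode.

```python
def parse(string):
    """Recursive-descent parser for S -> cAd, A -> ab | a."""
    pos = [0]                          # mutable input pointer

    def peek():
        return string[pos[0]] if pos[0] < len(string) else None

    def advance():
        pos[0] += 1

    def A():
        isave = pos[0]                 # remember where A started
        if peek() == "a":              # try first expansion: A -> ab
            advance()
            if peek() == "b":
                advance()
                return True
        pos[0] = isave                 # backtrack; try second expansion: A -> a
        if peek() == "a":
            advance()
            return True
        return False

    def S():
        if peek() == "c":
            advance()
            if A():
                if peek() == "d":
                    advance()
                    return True
        return False

    # accept only if S succeeds AND the whole input was consumed
    return S() and pos[0] == len(string)
```

Because A( ) tries the longer expansion ab first, both cad and cabd are accepted; trying a first would run into the ordering problem shown on the next slide.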

  17. Problems with this approach (i) The order in which the expansions are tried. Grammar: S → cAd, A → a | ab. Input: c a b d. The c matches, and A → a (tried first) matches the a; but then the d in the expansion of S mismatches the b in the input. The d is part of S, and hence a new expansion for S will be tried (in vain!); the parser never goes back to retry A → ab. Hence cabd will be rejected as invalid (but it is actually valid). Remedy: rewrite the grammar so that no non-terminal has more than ONE expansion sharing the same “prefix”; use “left factoring” to realise this.

  18. Problems with this approach (ii) Left recursion. A → Aα: a production rule of this form exhibits (immediate) left recursion. More generally, a grammar has left recursion if, at some point, A “yields” Aα, i.e. if Aα can be derived from A in one or more steps. Why is left recursion dangerous? Because the function A( ) corresponding to the non-terminal A will be forced to invoke itself repeatedly and endlessly. Remedy: eliminate left recursion from the grammar!
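A tiny demonstration of the danger, using the rule E → E + T. The stub procedures match and T are assumed stand-ins; the point is only that the procedure for E recurses before consuming any input.

```python
def match(tok):
    return True        # stand-in: never actually reached

def T():
    return True        # stand-in for the T() procedure

def E():
    # E -> E + T transcribed naively: the very first step is to parse an E,
    # so E() calls itself before consuming a single input symbol.
    return E() and match("+") and T()

try:
    E()
except RecursionError:
    print("E() never terminates: left recursion exhausts the call stack")
```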

  19. Eliminating (immediate) left recursion A → Aα | β generates the derivations A ⇒ Aα ⇒ Aαα ⇒ Aααα ⇒ … ⇒ βαα…α, i.e. a β followed by any number of αs. Replace the pair of rules with: A → βA’ and A’ → αA’ | ε. e.g. the grammar E → E + T | T, T → T * F | F, F → ( E ) | id becomes E → TE’, E’ → +TE’ | ε, T → FT’, T’ → *FT’ | ε, F → ( E ) | id.
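With the left recursion gone, recursive descent over the transformed expression grammar needs no function to call itself before consuming input. This sketch (my rendering, not the slide’s) takes a pre-tokenized list in which "id" is a single token.

```python
def parse_expr(tokens):
    """Recursive descent for E -> TE', E' -> +TE' | eps,
    T -> FT', T' -> *FT' | eps, F -> (E) | id."""
    pos = [0]

    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None

    def eat(tok):
        if peek() == tok:
            pos[0] += 1
            return True
        return False

    def E():
        return T() and E_prime()

    def E_prime():
        if eat("+"):                 # E' -> +TE'
            return T() and E_prime()
        return True                  # E' -> epsilon

    def T():
        return F() and T_prime()

    def T_prime():
        if eat("*"):                 # T' -> *FT'
            return F() and T_prime()
        return True                  # T' -> epsilon

    def F():
        if eat("("):                 # F -> (E)
            return E() and eat(")")
        return eat("id")             # F -> id

    return E() and pos[0] == len(tokens)
```

Each ε-production simply returns TRUE without consuming anything, which is how the “any number of αs” tail of the derivation is realised.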

  20. Left factoring A → αβ | αγ becomes A → αA’ and A’ → β | γ. e.g. S → iCtS | iCtSeS | a, C → b becomes S → iCtSS’ | a, S’ → eS | ε, C → b.
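A recursive-descent sketch for the left-factored grammar, treating the single characters i, b, t, e, a as tokens (an illustrative simplification). Note that S’ greedily consuming an e corresponds to the usual convention of binding an else to the nearest if.

```python
def parse_stmt(s):
    """Recursive descent for S -> iCtSS' | a, S' -> eS | eps, C -> b."""
    pos = [0]

    def peek():
        return s[pos[0]] if pos[0] < len(s) else None

    def eat(tok):
        if peek() == tok:
            pos[0] += 1
            return True
        return False

    def S():
        if eat("i"):                             # S -> iCtSS'
            return C() and eat("t") and S() and S_prime()
        return eat("a")                          # S -> a

    def S_prime():
        if eat("e"):                             # S' -> eS (else-part present)
            return S()
        return True                              # S' -> epsilon

    def C():
        return eat("b")

    return S() and pos[0] == len(s)
```

Before left factoring, the parser could not tell S → iCtS from S → iCtSeS on seeing the i; after factoring, the common prefix is parsed once and the choice is deferred to S’, where one symbol of lookahead suffices.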
