
Theory of Compilation



Presentation Transcript


  1. Lecture 02 – Lexical Analysis Theory of Compilation Eran Yahav

  2. Source text (.txt) … Executable code (.exe). You are here: Compiler = Lexical Analysis → Syntax Analysis (Parsing) → Semantic Analysis → Intermediate Representation (IR) → Code Generation.

  3. Source text (.txt) … Executable code (.exe). You are here… Front end: process text input → characters → Lexical Analysis → tokens → Syntax Analysis → AST → Semantic Analysis → annotated AST. Back end (code generation): Intermediate code generation → IR → Intermediate code optimization → IR → Code generation → symbolic instructions (SI) → Target code optimization → Machine code generation → machine instructions (MI) → write executable output.

  4. From characters to tokens • What is a token? • Roughly – a “word” in the source language • Identifiers • Values • Language keywords • Really - anything that should appear in the input to syntax analysis • Technically • Usually a pair of (kind,value)
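The (kind, value) pair described above can be sketched as a small C struct. The kind names and the fixed-size lexeme buffer are illustrative choices, not taken from the lecture:

```c
#include <string.h>

/* A token is a (kind, value) pair: which pattern matched, plus the
   matched text. The kind list and buffer size are illustrative. */
enum token_kind { TK_ID, TK_INT, TK_EQ, TK_MULT, TK_MINUS, TK_EOF };

struct token {
    enum token_kind kind;  /* which pattern matched */
    char lexeme[64];       /* the matched text, e.g. "x" or "4" */
};

/* Build a token from a kind and the matched text (truncating safely). */
struct token make_token(enum token_kind kind, const char *text) {
    struct token t;
    t.kind = kind;
    strncpy(t.lexeme, text, sizeof t.lexeme - 1);
    t.lexeme[sizeof t.lexeme - 1] = '\0';
    return t;
}
```

For keywords and operators the value field carries no information, which is why the slides show tokens like <EQ> without an attribute.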

  5. Example Tokens

  6. Strings with special handling

  7. Some basic terminology • Lexeme (aka symbol) - a sequence of characters separated from the rest of the program according to a convention (space, semicolon, comma, etc.) • Pattern - a rule specifying a set of strings. Example: “an identifier is a string that starts with a letter and continues with letters and digits” • Token - a pair of (pattern, attributes)

  8. From characters to tokens: the input x = b*b – 4*a*c (.txt) becomes the token stream <ID,”x”> <EQ> <ID,”b”> <MULT> <ID,”b”> <MINUS> <INT,4> <MULT> <ID,”a”> <MULT> <ID,”c”>

  9. Errors in lexical analysis • pi = 3.141.562 → illegal token • pi = 3oranges → illegal token • pi = oranges3 → <ID,”pi”>, <EQ>, <ID,”oranges3”>

  10. Error Handling • Many errors cannot be identified at this stage • Example: “fi (a==f(x))”. Should “fi” be “if”? Or is it a routine name? • We will discover this later in the analysis • At this point, we just create an identifier token • Sometimes the lexeme does not match any pattern • Easiest: delete characters until the beginning of a legitimate lexeme • Alternatives: delete/insert/replace one character, transpose two adjacent characters, etc. • Goal: allow the compilation to continue • Problem: errors that cascade through the rest of the input

  11. How can we define tokens? • Keywords – easy! • if, then, else, for, while, … • Identifiers? • Numerical Values? • Strings? • Characterize unbounded sets of values using a bounded description?

  12. Regular Expressions Some people, when confronted with a problem, think “I know, I'll use regular expressions.” Now they have two problems.

  13. Regular Expressions over Σ • ∅, ε, and every a ∈ Σ are regular expressions • If R1 and R2 are regular expressions, so are R1 | R2 (union), R1R2 (concatenation), and R1* (repetition)

  14. Examples • ab*|cd? = { a, ab, abb, abbb, …, c, cd } • (a|b)* = all words over {a, b}, including ε • (0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9)* = all digit strings, including ε

  15. Escape characters • What is the expression for one or more + symbols? • (+)+ won’t work • (\+)+ will • a backslash \ before an operator turns it into a literal character • \*, \?, \+, …

  16. Shorthands • Use names for expressions • letter = a | b | … | z | A | B | … | Z • letter_ = letter | _ • digit = 0 | 1 | 2 | … | 9 • id = letter_ (letter_ | digit)* • Use hyphen to denote a range • letter = a-z | A-Z • digit = 0-9

  17. Examples • if = if • then = then • relop = < | > | <= | >= | = | <> • digit = 0-9 • digits = digit+

  18. Example • A number is number = ( 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 )+ ( ε | . ( 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 )+ ( ε | E ( 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 )+ ) ) • Using shorthands it can be written as number = digits ( ε | . digits ( ε | E ( ε | + | - ) digits ) )

  19. Ambiguity • if = if • id = letter_ (letter_ | digit)* • “if” is a valid word in the language of identifiers… so what should it be? • How about the identifier “iffy”? • Solution • Always find longest matching token • Break ties using order of definitions… first definition wins (=> list rules for keywords before identifiers)

  20. Creating a lexical analyzer • Input • List of token definitions (pattern name, regex) • String to be analyzed • Output • List of tokens • How do we build an analyzer?

  21. Character classification
#define is_end_of_input(ch) ((ch) == '\0')
#define is_uc_letter(ch) ('A' <= (ch) && (ch) <= 'Z')
#define is_lc_letter(ch) ('a' <= (ch) && (ch) <= 'z')
#define is_letter(ch) (is_uc_letter(ch) || is_lc_letter(ch))
#define is_digit(ch) ('0' <= (ch) && (ch) <= '9')
…

  22. Main reading routine
void get_next_token() {
  int c;
  do {
    c = getchar();
    if (is_letter(c)) { recognize_identifier(c); return; }
    if (is_digit(c))  { recognize_number(c); return; }
    …
  } while (c != EOF);
}

  23. But we have a much better way! • Generate a lexical analyzer automatically from token definitions • Main idea • Use finite-state automata to match regular expressions

  24. Overview • Construct a nondeterministic finite-state automaton (NFA) from regular expression (automatically) • Determinize the NFA into a deterministic finite-state automaton (DFA) • DFA can be directly used to identify tokens

  25. Reminder: Finite-State Automaton • Deterministic automaton • M = (Σ, Q, δ, q0, F) • Σ - alphabet • Q - finite set of states • q0 ∈ Q - initial state • F ⊆ Q - final states • δ : Q × Σ → Q - transition function
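Running a deterministic automaton is one table lookup per input symbol. A minimal sketch for an illustrative machine over Σ = {a, b} that accepts exactly the words ending in b (the machine itself is not from the slides):

```c
/* delta[state][symbol]: symbol index 0 = 'a', 1 = 'b'. */
static const int delta[2][2] = {
    {0, 1},   /* state 0 (initial): 'a' -> 0, 'b' -> 1 */
    {0, 1},   /* state 1 (final):   'a' -> 0, 'b' -> 1 */
};
static const int is_final[2] = {0, 1};   /* F = {1} */

/* Run the DFA on w; reject if a symbol is outside the alphabet. */
int dfa_accepts(const char *w) {
    int q = 0;                            /* start in q0 */
    for (; *w; w++) {
        if (*w != 'a' && *w != 'b')
            return 0;
        q = delta[q][*w == 'b'];
    }
    return is_final[q];
}
```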

  26. Reminder: Finite-State Automaton • Non-deterministic automaton • M = (Σ, Q, δ, q0, F) • Σ - alphabet • Q - finite set of states • q0 ∈ Q - initial state • F ⊆ Q - final states • δ : Q × (Σ ∪ {ε}) → 2^Q - transition function • Possible ε-transitions • For a word w, M can reach a number of states or get stuck. If some state reached is final, M accepts w.
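An NFA can be simulated deterministically by tracking the whole set of reachable states, here as a bit mask. The machine below is an illustrative three-state NFA for (a|b)*a(a|b), i.e. "the second-to-last symbol is a"; it has no ε-moves, which keeps the sketch short:

```c
typedef unsigned int stateset;   /* bit i set <=> NFA state i is reachable */

/* States: 0 = start, 1 = guessed that this a is second-to-last,
   2 = accepting. */
int nfa_accepts(const char *w) {
    stateset s = 1u << 0;                 /* start in {q0} */
    for (; *w; w++) {
        if (*w != 'a' && *w != 'b')
            return 0;                     /* symbol outside the alphabet */
        stateset next = 0;
        if (s & (1u << 0)) {
            next |= 1u << 0;              /* q0 loops on a and b */
            if (*w == 'a')
                next |= 1u << 1;          /* nondeterministic guess */
        }
        if (s & (1u << 1))
            next |= 1u << 2;              /* q1 -> q2 on a or b */
        s = next;
    }
    return (s & (1u << 2)) != 0;          /* accept if a final state was reached */
}
```

With ε-moves one would additionally take the ε-closure of the state set after each step; the set-of-states idea is the same one that determinization (slide 24) bakes into DFA states.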

  27. From regular expressions to NFA • Step 1: assign expression names and obtain pure regular expressions R1…Rm • Step 2: construct an NFA Mi for each regular expression Ri • Step 3: combine all Mi into a single NFA • Ambiguity resolution: prefer longest accepting word

  28. Basic constructs • NFAs for the base cases: R = ∅ (no path to an accepting state), R = ε (accept without reading input), R = a (a single a-transition into the accepting state)

  29. Composition • For R = R1 | R2, a new start state branches with ε-transitions into M1 and M2, whose accepting states lead by ε-transitions to a new accepting state • For R = R1R2, the accepting state of M1 is connected by an ε-transition to the start state of M2

  30. Repetition • For R = R1*, M1 is wrapped with ε-transitions that allow skipping M1 entirely or looping back through it

  31. What now? • Naïve approach: try each automaton separately • Given a word w: • Try M1(w) • Try M2(w) • … • Try Mn(w) • Requires resetting after every attempt

  32. Combine automata: combines a, abb, a*b+, abab. Add a new start state 0 with ε-transitions into the NFA of each pattern: a (states 1-2), abb (states 3-6), a*b+ (states 7-8), abab (states 9-13).

  33. Ambiguity resolution • Recall… • Longest word • Tie-breaker based on order of rules when words have same length • Recipe • Turn NFA to DFA • Run until stuck, remember last accepting state, this is the token to be returned
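The recipe can be sketched for two rules listed in order, rule 0: if = if and rule 1: id = letter (letter | digit)* (the underscore is omitted for brevity). The hand-built DFA below records, in each accepting state, the first rule it satisfies; the driver runs until stuck, remembers the last accepting position, and returns that token, giving longest match with ties broken by rule order:

```c
#include <ctype.h>
#include <stddef.h>

enum { TOK_NONE = -1, TOK_IF = 0, TOK_ID = 1 };

/* States: 0 start, 1 saw "i", 2 saw "if", 3 other identifier.
   Returns -1 when the DFA gets stuck. */
static int next_state(int q, char c) {
    int letter = isalpha((unsigned char)c);
    int digit  = isdigit((unsigned char)c);
    switch (q) {
    case 0:  return c == 'i' ? 1 : (letter ? 3 : -1);
    case 1:  return c == 'f' ? 2 : ((letter || digit) ? 3 : -1);
    case 2:
    case 3:  return (letter || digit) ? 3 : -1;
    default: return -1;
    }
}

/* accepting[q]: highest-priority rule satisfied in state q. */
static const int accepting[4] = { TOK_NONE, TOK_ID, TOK_IF, TOK_ID };

/* Scan one token at s; sets *len to the length of the longest match
   (0 if nothing matches) and returns the winning rule. */
int next_token(const char *s, size_t *len) {
    int q = 0, last_tok = TOK_NONE;
    size_t i = 0, last_len = 0;
    while (s[i] && (q = next_state(q, s[i])) != -1) {
        i++;
        if (accepting[q] != TOK_NONE) {   /* remember last accepting state */
            last_tok = accepting[q];
            last_len = i;
        }
    }
    *len = last_len;
    return last_tok;
}
```

On "iffy" this returns TOK_ID with length 4 (longest match); on "if " it returns TOK_IF with length 2 (rule order breaks the tie), matching the ambiguity-resolution rules from slide 19.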

  34. Corresponding DFA: determinizing the combined NFA gives a DFA whose states are sets of NFA states; the start state is (0 1 3 7 9), and each accepting state is labeled with the pattern it matches, e.g. (6 8) with abb and (5 8 11) with a*b+.

  35. Examples (running the DFA from slide 34) • abaa: gets stuck after aba in state 12, backs up to the last accepting state (5 8 11); the pattern is a*b+, so the token is ab • abba: stops after the second b in state (6 8); the token is abb, because abb comes first in the specification

  36. Summary of Construction • Describe tokens as regular expressions (and decide which attributes are saved for each token) • The regular expressions are turned into a DFA that describes the tokens and specifies which attributes to keep • The lexical analyzer simulates the run of the automaton with the given transition table on any input string

  37. Good News • All of this construction is done automatically by common tools • lex is your friend • Automatically generates a lexical analyzer from a declaration file • Advantages: short declaration file, easily checked, easily modified and maintained • Intuitively: lex builds the DFA table, and the generated analyzer simulates (runs) the DFA on a given input, turning characters into tokens

  38. Lex declarations file
%{
#include "lex.h"
Token_Type Token;
int line_number = 1;
%}
whitespace [ \t]
letter [a-zA-Z]
digit [0-9]
identifier {letter}({letter}|{digit})*
…
%%
{digit}+ { return INTEGER; }
{identifier} { return IDENTIFIER; }
{whitespace} { /* ignore whitespace */ }
\n { line_number++; }
. { return ERROR; }
…
%%
void start_lex(void) {}
void get_next_token(void) { … }

  39. Summary • Lexical analyzer • Turns character stream into token stream • Tokens defined using regular expressions • Regular expressions -> NFA -> DFA construction for identifying tokens • Automated constructions of lexical analyzer using lex

  40. Coming up next time • Syntax analysis

  41. NFA vs. DFA • (a|b)*a(a|b)(a|b)…(a|b), with (a|b) repeated n times • An NFA for this expression has about n + 2 states, but an equivalent DFA must remember the last n + 1 symbols read and therefore needs at least 2^n states: determinization can blow up exponentially
