
CSC 8505 Compiler Construction








  1. CSC 8505 Compiler Construction: Intermediate Representations

  2. The Role of Intermediate Code source code → [lexical analysis] → tokens → [syntax analysis] → [static checking] → [intermediate code generation] → intermediate code → [final code generation] → final code

  3. Why Intermediate Code? • Closer to the target language: simplifies code generation. • Machine-independent: simplifies retargeting of the compiler. • Allows a variety of optimizations to be implemented in a machine-independent way. • Many compilers use several different intermediate representations.

  4. Different Kinds of IRs • Graphical IRs: the program structure is represented as a graph (or tree) structure. Example: parse trees, syntax trees, DAGs. • Linear IRs: the program is represented as a list of instructions for some virtual machine. Example: three-address code. • Hybrid IRs: combines elements of graphical and linear IRs. Example: control flow graphs with 3-address code.

  5. Types of Intermediate Languages • Graphical Representations. • Consider the assignment a := b * -c + b * -c. [Figure: the syntax tree and the DAG for this assignment; in the DAG, the repeated subexpression b * (-c) appears only once.]

  6. Graphical IRs 1: Parse Trees • A parse tree is a tree representation of a derivation during parsing. • Constructing a parse tree: • The root is the start symbol S of the grammar. • Given a parse tree for αXβ, if the next derivation step is αXβ ⇒ αγ1…γnβ, then the parse tree is extended by adding children γ1, …, γn to the node for X.

  7. Graphical IRs 2: Abstract Syntax Trees (AST) A syntax tree shows the structure of a program by abstracting away irrelevant details from a parse tree. • Each node represents a computation to be performed; • The children of the node represent what that computation is performed on.
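As a minimal sketch of the idea (the `Node` class and its names are illustrative, not from the slides), an expression AST can be represented with labeled nodes whose children are the operands:

```python
# Minimal AST sketch for expressions (illustrative, assumed representation).
class Node:
    def __init__(self, label, children=()):
        self.label = label                  # operator, identifier, or constant
        self.children = list(children)      # operands (empty for leaves)

    def __repr__(self):
        if not self.children:
            return self.label
        return f"({self.label} {' '.join(map(repr, self.children))})"

# a + b * c  --  '*' binds tighter, so it is the right child of '+'
ast = Node("+", [Node("a"), Node("*", [Node("b"), Node("c")])])
print(ast)  # (+ a (* b c))
```

Note that precedence and parentheses from the source are already resolved into the tree shape, so the AST needs no bracketing information.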

  8. Abstract Syntax Trees: Example Grammar: E → E + T | T, T → T * F | F, F → ( E ) | id. Input: id + id * id. [Figure: the parse tree and the corresponding syntax tree.]

  9. Syntax Trees: Structure • Expressions: • leaves: identifiers or constants; • internal nodes are labeled with operators; • the children of a node are its operands. • Statements: • a node’s label indicates what kind of statement it is; • the children correspond to the components of the statement.

  10. Graphical IRs 3: Directed Acyclic Graphs (DAGs) A DAG is a contraction of an AST that avoids duplication of nodes. • reduces compiler memory requirements; • exposes redundancies. E.g.: for the expression (x+y)*(x+y), we have: AST: DAG:
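One common way to build a DAG rather than a tree is hash-consing: before creating a node, look it up in a table keyed by its label and children, and reuse an existing node when one matches. This is a hedged sketch under that assumption (function and table names are illustrative):

```python
# Sketch: building a DAG by hash-consing -- structurally identical
# subtrees map to one shared node, exposing redundancy.
def make_dag(label, children=(), _table={}):
    key = (label, tuple(id(c) for c in children))
    if key not in _table:
        _table[key] = (label, children)
    return _table[key]

x, y = make_dag("x"), make_dag("y")
s1 = make_dag("+", (x, y))
s2 = make_dag("+", (x, y))      # reuses the node built for s1
prod = make_dag("*", (s1, s2))
print(s1 is s2)                 # True: (x+y) is represented once
```

Because `(x+y)` is a single shared node, a later pass can compute it once, which is exactly the redundancy-exposing benefit the slide mentions.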

  11. Linear IRs • A linear IR consists of a sequence of instructions that execute in order. • “machine-independent assembly code” • Instructions may contain multiple operations, which (if present) execute in parallel. • They often form a starting point for hybrid representations (e.g., control flow graphs).

  12. Linear IR 1: Three Address Code • Instructions are of the form ‘x = y op z’, where x, y, z are variables, constants, or “temporaries”. • At most one operator is allowed on the RHS, so no “built-up” expressions. Instead, expressions are computed using temporaries (compiler-generated variables). • The specific set of operators represented, and their level of abstraction, can vary widely.

  13. Three Address Code: Example • Source: if ( x + y*z > x*y + z ) a = 0; • Three Address Code: t1 = y*z t2 = x+t1 // x + y*z t3 = x*y t4 = t3+z // x*y + z if (t2 ≤ t4) goto L a = 0 L: • Note the test is inverted (≤ instead of >): the branch skips the assignment when the source condition is false.

  14. Three Address Code • Statements of general form x := y op z. • No built-up arithmetic expressions are allowed. • As a result, x := y + z * w should be represented as: t1 := z * w; t2 := y + t1; x := t2
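The flattening step above can be sketched as a post-order walk that emits an instruction per operator and returns the temporary holding its result (the tuple-based expression format and `gen_tac` name are assumptions for illustration):

```python
# Sketch: flatten a nested expression into three-address code.
# An expression is either a variable name or a tuple (op, left, right).
def gen_tac(expr, code, counter):
    if isinstance(expr, str):
        return expr                      # variable/constant: no code needed
    op, left, right = expr
    l = gen_tac(left, code, counter)     # operands first (post-order)
    r = gen_tac(right, code, counter)
    counter[0] += 1
    t = f"t{counter[0]}"                 # fresh compiler-generated temporary
    code.append(f"{t} := {l} {op} {r}")
    return t

code, counter = [], [0]
result = gen_tac(("+", "y", ("*", "z", "w")), code, counter)
code.append(f"x := {result}")
print("\n".join(code))
# t1 := z * w
# t2 := y + t1
# x := t2
```

The post-order traversal is what makes this a linearization of the tree: each operator's code is emitted only after the code for its operands.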

  15. Three Address Code • Observe that given the syntax tree or the DAG of the graphical representation, we can easily derive three-address code for assignments as above. • In fact, three-address code is a linearization of the tree. • Three-address code is useful: it is close to machine language, simple, and easy to optimize.

  16. Example of 3-address code • From the DAG: t1 := -c; t2 := b * t1; t5 := t2 + t2; a := t5 • From the syntax tree: t1 := -c; t2 := b * t1; t3 := -c; t4 := b * t3; t5 := t2 + t4; a := t5

  17. Types of Three-Address Statements Assignment Statement: x := y op z Assignment Statement: x := op y Copy Statement: x := y Unconditional Jump: goto L Conditional Jump: if x relop y goto L Stack Operations: push/pop

  18. Types of Three-Address Statements Procedure Call: param x1; param x2; …; param xn; call p, n Index Assignments: x := y[i] x[i] := y Address and Pointer Assignments: x := &y x := *y *x := y

  19. An Example Intermediate Instruction Set • Assignment: x = y op z (op binary); x = op y (op unary); x = y • Jumps: if ( x op y ) goto L (L a label); goto L • Pointer and indexed assignments: x = y[ z ]; y[ z ] = x; x = &y; x = *y; *y = x • Procedure call/return: param x, k (x is the kth param); retval x; call p; enter p; leave p; return; retrieve x • Type Conversion: x = cvt_A_to_B y (A, B base types), e.g.: cvt_int_to_float • Miscellaneous: label L

  20. Three Address Code: Representation • Each instruction is represented as a structure called a quadruple (or “quad”): • contains info about the operation and up to 3 operands. • for operands: use a bit to indicate whether the operand is a constant or a Symbol Table pointer. E.g.: x = y + z; if ( x relop y ) goto L

  21. Implementations of 3-address statements • Quadruples: t1 := -c; t2 := b * t1; t3 := -c; t4 := b * t3; t5 := t2 + t4; a := t5 • Temporary names must be entered into the symbol table as they are created.
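The quadruple table for this sequence can be sketched as follows (the slide's original table is an image; field order (op, arg1, arg2, result) follows the usual convention):

```python
# Sketch of a quadruple table for: t1 := -c; t2 := b*t1; t3 := -c;
# t4 := b*t3; t5 := t2+t4; a := t5.
# Each entry: (operator, arg1, arg2, result); missing operands are None.
quads = [
    ("uminus", "c",  None, "t1"),
    ("*",      "b",  "t1", "t2"),
    ("uminus", "c",  None, "t3"),
    ("*",      "b",  "t3", "t4"),
    ("+",      "t2", "t4", "t5"),
    (":=",     "t5", None, "a"),
]
for op, a1, a2, res in quads:
    print(f"{res} = {op}({a1}, {a2})")
```

Because each quad names its result explicitly (t1, t2, …), those temporaries must exist in the symbol table, as the slide notes.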

  22. Implementations of 3-address statements, II • Triples: t1 := -c; t2 := b * t1; t3 := -c; t4 := b * t3; t5 := t2 + t4; a := t5 • Temporary names are not entered into the symbol table; instead, operands refer to the triples that compute them.
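The same code as a triple table might look like this sketch (the slide's table is an image; here an integer operand is a reference to an earlier triple's result, which is what removes the need for named temporaries):

```python
# Sketch: the same sequence as triples. Each entry is (operator, arg1, arg2);
# an integer operand means "the result of triple at that index".
triples = [
    ("uminus", "c", None),   # (0)  -c
    ("*", "b", 0),           # (1)  b * (0)
    ("uminus", "c", None),   # (2)  -c
    ("*", "b", 2),           # (3)  b * (2)
    ("+", 1, 3),             # (4)  (1) + (3)
    (":=", "a", 4),          # (5)  a = (4)
]
print(triples[4])  # ('+', 1, 3)
```

A drawback of plain triples is that reordering statements renumbers results, invalidating the integer references; indirect triples (next slide) address this.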

  23. Other types of 3-address statements • e.g. ternary operations like x[i] := y and x := y[i] • these require two or more triple entries.

  24. Implementations of 3-address statements, III • Indirect Triples: a list of pointers into the triple table; statements can be reordered by rearranging the pointer list, without renumbering the triples themselves.

  25. Linear IRs 2: Stack Machine Code • Sometimes called “One-address code.” • Assumes the presence of an operand stack. • Most operations take (pop) their operands from the stack and push the result on the stack. • Example: code for “x*y + z”
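The example on this slide is an image; a sketch of stack code for x*y + z, together with a tiny interpreter to run it, might look like this (instruction names push/mult/add are illustrative, not from the slides):

```python
# Sketch: one-address (stack) code for x*y + z, run by a tiny interpreter.
# Operands live on an implicit stack, so instructions rarely name them.
def run(code, env):
    stack = []
    for instr in code:
        op = instr[0]
        if op == "push":
            stack.append(env[instr[1]])          # load a named value
        elif op == "mult":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)                  # operands come from the stack
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

code = [("push", "x"), ("push", "y"), ("mult",), ("push", "z"), ("add",)]
print(run(code, {"x": 2, "y": 3, "z": 4}))  # 10
```

Only the pushes mention names; the arithmetic instructions are operand-free, which is the compactness the next slide describes.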

  26. Stack Machine Code: Features • Compact • the stack creates an implicit name space, so many operands don’t have to be named explicitly in instructions. • this shrinks the size of the IR. • Necessitates new operations for manipulating the stack, e.g., “swap top two values”, “duplicate value on top.” • Simple to generate and execute. • Interpreted stack machine codes are easy to port.

  27. Linear IRs 3: Register Transfer Lang. (GNU RTL) • Inspired by (and has syntax resembling) Lisp lists. • Expressions are not “flattened” as in three-address code, but may be nested. • gives them a tree structure. • Incorporates a variety of machine-level information.

  28. RTLs (cont’d) Low-level information associated with an RTL expression includes: • “machine modes” – gives the size of a data object; • information about access to registers and memory; • information relating to instruction scheduling and delay slots; • whether a memory reference is “volatile.”

  29. RTLs: Examples Example operations: • (plus:m x y), (minus:m x y), (compare:m x y), etc., where m is a machine mode. • (cond [test1 value1 test2 value2 …] default) • (set lval x) (assigns x to the place denoted by lval). • (call func argsz), (return) • (parallel [x0 x1 …]) (simultaneous side effects). • (sequence [ins1 ins2 …])

  30. RTL Examples (cont’d) • A call to a function at address a passing n bytes of arguments, where the return value is in a (“hard”) register r: (set (reg:m r) (call (mem:fm a) n)) • here m and fm are machine modes. • A division operation where the result is truncated to a smaller size: (truncate:m1 (div:m2 x (sign_extend:m2 y)))

  31. Hybrid IRs • Combine features of graphical and linear IRs: • linear IR aspects capture a lower-level program representation; • graphical IR aspects make control flow behavior explicit. • Examples: • control flow graphs • static single assignment form (SSA).

  32. Hybrid IRs 1: Control Flow Graphs Example: L1: if x > y goto L0 t1 = x+1 x = t1 L0: y = 0 goto L1 Definition: A control flow graph for a function is a directed graph G = (V, E) such that: • each v ∈ V is a straight-line code sequence (“basic block”); and • there is an edge a → b ∈ E iff control can go directly from a to b.

  33. Basic Blocks • Definition: A basic block B is a sequence of consecutive instructions such that: • control enters B only at its beginning; and • control leaves B only at its end (under normal execution). • This implies that if any instruction in a basic block B is executed, then all instructions in B are executed. • for program analysis purposes, we can therefore treat a basic block as a single entity.

  34. Identifying Basic Blocks • Determine the set of leaders, i.e., the first instruction of each basic block: • the entry point of the function is a leader; • any instruction that is the target of a branch is a leader; • any instruction following a (conditional or unconditional) branch is a leader. • For each leader, its basic block consists of: • the leader itself; • all subsequent instructions up to, but not including, the next leader.
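The three leader rules above can be sketched directly, using the control-flow-graph example from slide 32 as input (the instruction encoding as (text, branch-target) pairs is an assumption for illustration):

```python
# Sketch: computing leaders for a list of TAC instructions.
# Each instruction is (text, target): target is the label it branches to,
# or None; `labels` maps a label to the index of the instruction it marks.
def find_leaders(instrs, labels):
    leaders = {0}                          # rule 1: function entry
    for i, (_, target) in enumerate(instrs):
        if target is not None:
            leaders.add(labels[target])    # rule 2: branch target
            if i + 1 < len(instrs):
                leaders.add(i + 1)         # rule 3: instruction after a branch
    return sorted(leaders)

labels = {"L1": 0, "L0": 3}
instrs = [
    ("if x > y goto L0", "L0"),   # L1:
    ("t1 = x + 1", None),
    ("x = t1", None),
    ("y = 0", None),              # L0:
    ("goto L1", "L1"),
]
print(find_leaders(instrs, labels))  # [0, 1, 3]
```

Each basic block then runs from a leader up to, but not including, the next leader: here blocks are {0}, {1, 2}, and {3, 4}.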

  35. Example int dotprod(int a[], int b[], int N) { int i, prod = 0; for (i = 1; i <= N; i++) { prod += a[i] * b[i]; } return prod; }

  36. Hybrid IRs 2: Static Single Assignment Form • The Static Single Assignment (SSA) form of a program makes information about variable definitions and uses explicit. • This can simplify program analysis. • A program is in SSA form if it satisfies: • each definition has a distinct name; and • each use refers to a single definition. • To make this work, the compiler inserts special operations, called φ-functions, at points where control flow paths join.

  37. SSA Form: φ-Functions • A φ-function behaves as follows: x1 = … x2 = … x3 = φ(x1, x2) This assigns to x3 the value of x1 if control comes from the left, and that of x2 if control comes from the right. • On entry to a basic block, all the φ-functions in the block execute (conceptually) in parallel.
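The renaming half of SSA construction can be sketched for straight-line code, where no control-flow joins occur and hence no φ-functions are needed (the statement encoding and `to_ssa` name are assumptions for illustration):

```python
# Sketch: SSA renaming for straight-line code -- every assignment gets a
# fresh version, and every use refers to the latest version of its variable.
def to_ssa(stmts):
    version = {}                     # variable -> current version number
    out = []
    for lhs, rhs in stmts:           # stmt = (target, [operand names])
        rhs = [f"{v}{version[v]}" if v in version else v for v in rhs]
        version[lhs] = version.get(lhs, 0) + 1   # fresh definition name
        out.append((f"{lhs}{version[lhs]}", rhs))
    return out

print(to_ssa([("x", ["a"]), ("x", ["x", "b"]), ("y", ["x"])]))
# [('x1', ['a']), ('x2', ['x1', 'b']), ('y1', ['x2'])]
```

With branches, the "latest version" of x differs along each incoming path, which is exactly where the compiler must insert x3 = φ(x1, x2) at the join.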

  38. SSA Form: Example [Figure: original code and the corresponding code in SSA form]
