Instruction Selection Phase in Compiler: Tree Tiling Algorithms

The instruction selection phase in compiler design involves finding the appropriate machine instructions to implement a given intermediate representation tree. This process includes defining tree patterns, optimizing tilings, and selecting the best algorithm for instruction selection, such as maximal munch or dynamic programming. Key concepts discussed include tree tiling, optimal vs. optimum tilings, algorithm comparison, grammar rules, and efficiency of tiling algorithms.


Presentation Transcript


  1. Compiler Principle Prof. Dongming LU Apr. 29th, 2024

  2. Content … 8. Basic Blocks and Traces 9. Instruction Selection 10. Liveness Analysis 11. Register Allocation 13. Garbage Collection 14. Object-Oriented Languages 18. Loop Optimizations

  3. 9. Instruction Selection

  4. OVERVIEW What is the task of the instruction selection phase? • Finding the appropriate machine instructions to implement a given intermediate representation tree. Note • A real machine instruction can often perform several primitive operations.

  5. TREE PATTERNS • A machine instruction can be expressed as a fragment of an IR tree, called a tree pattern.

  6. Jouette architecture (for purposes of illustration)

  7. TREE PATTERNS Specification about Jouette architecture • Register r0 always contains zero. • A TEMP node is implemented as a register. • Some instructions correspond to more than one tree pattern.

  8. TREE PATTERNS The fundamental idea of instruction selection using a tree-based intermediate representation is tiling the IR tree. • The tiles: the set of tree patterns corresponding to legal machine instructions. A tree tiled in two ways

  9. TREE PATTERNS • Tile the tree with tiny tiles, each covering only one node. In our example, such a tiling looks like this:
  ADDI  r1 ← r0 + a
  ADD   r1 ← fp + r1
  LOAD  r1 ← M[r1 + 0]
  ADDI  r2 ← r0 + 4
  MUL   r2 ← ri × r2
  ADD   r1 ← r1 + r2
  ADDI  r2 ← r0 + x
  ADD   r2 ← fp + r2
  LOAD  r2 ← M[r2 + 0]
  STORE M[r1 + 0] ← r2

  10. OPTIMAL AND OPTIMUM TILINGS Best tiling • The shortest sequence of instructions; • or, if instructions have different costs, the sequence with the lowest total cost. Optimal & Optimum tiling • Optimum tiling: the one whose tiles sum to the lowest possible cost. • Optimal tiling: the one where no two adjacent tiles can be combined into a single tile of lower cost. Every optimum tiling is also optimal, but not vice versa.

  11. 9.1 ALGORITHMS FOR INSTRUCTION SELECTION

  12. Algorithms for optimal tilings are simpler than algorithms for optimum tilings. • For CISC machines, the difference between optimum and optimal tilings is noticeable. • For RISC machines, there is usually no difference at all between optimum and optimal tilings.

  13. MAXIMAL MUNCH Main idea: Starting at the root of the tree, find the largest tile that fits. • Cover the root node and perhaps several other nodes near the root with this tile, leaving several subtrees. • Now repeat the same algorithm for each subtree. • The maximal munch algorithm generates the instructions in reverse order. • If two tiles of equal size match at the root, then the choice between them is arbitrary. • munchStm for statements and munchExp for expressions.

  14. MAXIMAL MUNCH Code sketch (a reconstructed fragment; the munchStm excerpt begins mid-function, as on the slide):

  static void munchStm(T_stm s) {
    ...
      /* MOVE(TEMP i, e2) */
      T_exp e2 = src;
      munchExp(e2);
      emit("ADD");
    } else
      assert(0); /* destination of MOVE must be MEM or TEMP */
    break;
    case T_JUMP: ...
    case T_CJUMP: ...
    ...
  }

  static Temp_temp munchExp(T_exp e) {
    MEM(BINOP(PLUS, e1, CONST(i)))  => munchExp(e1); emit("LOAD");
    MEM(BINOP(PLUS, CONST(i), e1))  => munchExp(e1); emit("LOAD");
    MEM(CONST(i))                   => emit("LOAD");
    ...
    BINOP(PLUS, e1, CONST(i))       => munchExp(e1); emit("ADDI");
    TEMP(t)                         => { }
  }

  15. DYNAMIC PROGRAMMING Maximal munch vs. dynamic programming • Maximal munch: not necessarily optimum. • Dynamic programming: finds the optimum by building on the optimum solution of each subproblem. • The algorithm works bottom-up; at each node, the match with the minimum total cost is chosen.

  16. DYNAMIC PROGRAMMING For example: Several tiles match the + node: Now, several tiles match the MEM node:

  17. DYNAMIC PROGRAMMING • Once the cost of the root node (and thus the entire tree) is found, the instruction emission phase begins. • The algorithm is as follows: Emission(node n): for each leaf li of the tile selected at node n, perform Emission(li); then emit the instruction matched at node n. • Emission(n) does not recur on the children of node n, but on the leaves of the tile that matched at n.

  18. TREE GRAMMARS Brain-damaged version of Jouette: a registers for addressing, and d registers for "data."

  19. TREE GRAMMARS Use a context-free grammar to describe the tiles: • s (for statements), • a (for expressions calculated into an a register), • d (for expressions calculated into a d register).

  20. TREE GRAMMARS The grammar rules for the LOAD, MOVEA, and MOVED instructions:
  d → MEM(+(a, CONST))
  d → MEM(+(CONST, a))
  d → MEM(CONST)
  d → MEM(a)
  d → a
  a → d
  The grammar is ambiguous: • There are many different parses of the same tree, but a generalization of the dynamic-programming algorithm works quite well. Such grammars are processed by code-generator generators, tools analogous to Yacc and Lex.

  21. FAST MATCHING Maximal munch and the dynamic-programming algorithm examine all the tiles that match at a node. • A tile matches if each nonleaf node of the tile is labeled with the same operator (MEM, CONST, etc.) as the corresponding node of the tree. • To match a tile at node n of the tree, the label at n can be used in a case statement:

  match(n) {
    switch (label(n)) {
    case MEM: ...
    case BINOP: ...
    case CONST: ...
    }
  }

  22. EFFICIENCY OF TILING ALGORITHMS How expensive are maximal munch and dynamic programming? • Maximal munch: time proportional to (K′ + T′)N/K. • Dynamic programming: time proportional to (K′ + T′)N. Since K, K′, and T′ are constants, the running time of both algorithms is linear in N. Here T is the number of different tiles; T′ is the average number of tile patterns that match at each node; K is the number of nodes in the average tile; K′ is the largest number of nodes that must be examined to see which tiles match at a node; N is the number of nodes in the input tree.

  23. The end of Chapter 9(1)
