
  1. Optimizing Compilers, CISC 673, Spring 2011: Overview of Compilers and JikesRVM. John Cavazos, University of Delaware

  2. Compiler Overview • Pipeline: source program → lexical analyzer → syntax analyzer → semantic analyzer → code optimizer → code generator → target program • A symbol table and an error handler are shared by all phases

  3. Compiler Frontend • A series of passes: source program → lexical analyzer → token stream → parser → syntax tree → semantic analyzer → (annotated) syntax tree → intermediate representation generator → IR • Source program – written in a HLL • Lexical analysis – converts the source text into “tokens” (keywords, identifiers, literals, etc.) • Parser – forms a syntax “tree” (statements, expressions, etc.) • Semantic analysis – type checking, etc. • We will not cover the front end in this class! (See CISC 672)
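To make these phases concrete, here is a minimal, hypothetical sketch (not part of the course material) that tokenizes one tiny assignment statement; the comments indicate the syntax tree a parser would build from the resulting token stream and what semantic analysis would then check:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal front-end sketch (hypothetical, for illustration only):
// lexing the statement "x = a + b * 4;" into a token stream.
// A parser would build a syntax tree such as Assign(x, Add(a, Mul(b, 4))),
// and semantic analysis would check that x, a, and b are declared
// and that the operand types are compatible.
public class TinyLexer {
    // One alternative per token class: identifiers, integer literals,
    // and single-character operators / punctuation.
    private static final Pattern TOKEN =
        Pattern.compile("\\s*(?:(?<id>[A-Za-z_]\\w*)|(?<num>\\d+)|(?<op>[=+*;]))");

    public static void main(String[] args) {
        Matcher m = TOKEN.matcher("x = a + b * 4;");
        while (m.find()) {
            if (m.group("id") != null)       System.out.println("IDENT(" + m.group("id") + ")");
            else if (m.group("num") != null) System.out.println("INT_LIT(" + m.group("num") + ")");
            else                             System.out.println("OP(" + m.group("op") + ")");
        }
    }
}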

  4. Compiler Middle End and Back End • Pipeline: IR → code optimizer → IR → code generator → target program • This is where the “interesting” stuff happens … enough to fill an entire grad-level course • Code optimization – “improves” the intermediate code • Consists of machine-independent and machine-dependent optimizations • Code generation – register allocation, instruction scheduling

  5. Traditional Optimizations • Analyze the program • Where is this value used? • Is this value recomputed? • Reduce the total number of operations • Common subexpression elimination • Strength reduction • Maintain values in registers • Elimination of redundant loads
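The pair of methods below is a hand-written before/after sketch of these ideas (illustrative only, not the output of any particular compiler): common subexpression elimination keeps a repeated computation in a temporary, and strength reduction replaces the multiply with a cheaper shift.

// Hand-written before/after sketch of two classic optimizations
// (illustrative only; not the output of any particular compiler).
public class ClassicOpts {

    // Before: "i * 4" is computed twice, once per array index.
    static int before(int[] a, int i) {
        return a[i * 4 + 1] + a[i * 4 + 2];
    }

    // After:
    //  - common subexpression elimination computes "i * 4" once and
    //    keeps the result in a local (a "register"),
    //  - strength reduction replaces the multiply by 4 with a shift.
    static int after(int[] a, int i) {
        int base = i << 2;                  // i * 4, computed once
        return a[base + 1] + a[base + 2];
    }

    public static void main(String[] args) {
        int[] a = {10, 20, 30, 40, 50, 60, 70, 80, 90};
        System.out.println(before(a, 1) + " == " + after(a, 1)); // prints 130 == 130
    }
}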

  6. Developing an Optimization • Formulate the problem based on: • intuition • properties extracted from a program • Implement the algorithm • Evaluate the results on test programs • Use the results to refine the algorithm • Cycle: formulate → implement → evaluate → refine

  7. Building an Optimizing Compiler • Strict requirements • Must be correct for all possible inputs • Must provide a robust solution • Small changes in the input should not produce wild changes in the output • Good optimizing compilers are crafted • Careful selection of transformations • Careful use of algorithms and data structures

  8. Building an Optimizing Compiler • Compilers are engineered objects • Try to minimize running time of compiled code • Try to minimize compile time • Try to limit use of compile-time space • Try to keep engineering efforts reasonable • With all these constraints, results are … • unexpected!

  9. Quick Look at Real Compilers • Consider inline substitution • Replaces a procedure call with the body of the called procedure • Renames variables to handle naming issues • Widely used (and important!) for optimizing object-oriented programs

  10. Characteristics of Function Inlining • Safety: almost always safe • Profitability: avoid overhead of a procedure call • Opportunity: inline leaf procedures • How well do compilers handle inlined code?
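As a concrete, hand-written illustration of these points, the sketch below inlines a leaf method into its caller; the renamed temporary stands in for the callee's parameter. This shows what the transformation does at the source level, not what any particular compiler emits.

// Hand-written sketch of inline substitution (illustrative only).
public class InlineExample {

    // A small "leaf" procedure: it calls nothing else,
    // so it is a cheap and safe inlining candidate.
    static int square(int x) {
        return x * x;
    }

    // Before inlining: every iteration pays call overhead
    // (argument passing, call/return, a new stack frame).
    static int sumOfSquares(int[] a) {
        int sum = 0;
        for (int v : a) {
            sum += square(v);
        }
        return sum;
    }

    // After inlining: the call disappears, and the callee's body
    // (with its parameter renamed to avoid clashes) is substituted
    // directly into the caller, exposing it to further optimization.
    static int sumOfSquaresInlined(int[] a) {
        int sum = 0;
        for (int v : a) {
            int x0 = v;          // renamed copy of the parameter "x"
            sum += x0 * x0;      // inlined body of square()
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        System.out.println(sumOfSquares(a) + " == " + sumOfSquaresInlined(a)); // 30 == 30
    }
}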

  11. Experimental Setup • Five real, good compilers • Each source program is compiled and timed two ways: • source program → compiler → execute & time • source program → inliner → compiler → execute & time

  12. The Study • Cooper, Hall, and Torczon (Software: Practice and Experience, 1991) • Eight programs, five compilers, five processors • Eliminated over 99% of the dynamic calls in five of the programs • Measured the speed of the original versus the transformed code • Expected a uniform speedup, at least from removed call overhead • What really happened?

  13. Change in Execution Time

  14. Happens with Good Compilers! • Input code violated assumptions made by compiler writers • Longer procedures • More names • Different code shapes • Exacerbated problems that are hard to detect! • Imprecise analysis • Algorithms that scale poorly • Tradeoffs between global and local speed • Limitations in the implementations • The compiler writers were surprised (most of them)

  15. JikesRVM • Optimizing compiler pipeline • Jikes front end: translation from bytecode to HIR → HIR → optimization of HIR → optimized HIR → translation from HIR to LIR → LIR → optimization of LIR → optimized LIR • Jikes back end: translation from LIR to MIR → MIR → optimization of MIR → optimized MIR → final assembly → binary code

  16. Levels of IR • HIR (High Level IR) • LIR (Low Level IR) • MIR (Machine Specific IR)

  17. HIR • Operators similar to Java bytecode • Examples: ARRAYLENGTH, NEW, GETFIELD, BOUNDS_CHECK, NULL_CHECK • Symbolic registers instead of an implicit stack • Contains separate operators that make checks for run-time exceptions explicit (e.g., array-bounds checks)
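As a rough illustration (a hand-written approximation, not actual Jikes RVM output: the register numbering, instruction layout, and operators other than those named on the slide are invented), the comments below show how a small method might look in an HIR-like form, with symbolic registers instead of a stack and with the null and bounds checks made explicit:

// A tiny method and, in the comments, a hand-written approximation of
// an HIR-like representation of it. NULL_CHECK, BOUNDS_CHECK, and
// ARRAYLENGTH come from the slide; the rest is illustrative only.
public class HirExample {
    static int firstPlusLast(int[] a) {
        return a[0] + a[a.length - 1];
        // Roughly, in an HIR-like form with symbolic registers t0..t5:
        //   t0 = a
        //   NULL_CHECK t0                 // explicit null-pointer check
        //   t1 = ARRAYLENGTH t0
        //   BOUNDS_CHECK t0, 0            // explicit array-bounds check
        //   t2 = INT_ALOAD t0, 0
        //   t3 = INT_SUB t1, 1
        //   BOUNDS_CHECK t0, t3
        //   t4 = INT_ALOAD t0, t3
        //   t5 = INT_ADD t2, t4
        //   RETURN t5
    }

    public static void main(String[] args) {
        System.out.println(firstPlusLast(new int[]{3, 5, 7})); // 10
    }
}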

  18. LIR • Exposes details of the JikesRVM runtime and object layout • Examples: GET_TIB (fetch the Type Information Block, JikesRVM’s vtable-like structure), INT_LOAD (e.g., for a getfield) • Expands complicated HIR structures such as TABLE_SWITCH
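Similarly, as a hand-written sketch of the HIR-to-LIR step (again not actual Jikes RVM output: the field and dispatch-table offsets, register names, and operators other than GET_TIB and INT_LOAD are invented for illustration), the comments below show how a field read and a virtual call might expand once object layout and the TIB are exposed:

// A field access and a virtual call, with a hand-written sketch (in the
// comments) of an LIR-like expansion. GET_TIB and INT_LOAD are the
// operators named on the slide; the offsets shown are invented.
public class LirExample {
    static class Shape {
        int sides;                        // read via GETFIELD in HIR
        int sides() { return sides; }     // virtual call target
    }

    static int describe(Shape s) {
        int n = s.sides;      // HIR: GETFIELD s, sides
                              // LIR: t0 = INT_LOAD s, <offset of sides>
        return n + s.sides(); // HIR: virtual call s.sides()
                              // LIR: t1 = GET_TIB s              // fetch the TIB ("vtable")
                              //      t2 = INT_LOAD t1, <offset of sides() entry>
                              //      t3 = CALL t2, s
                              //      t4 = INT_ADD t0, t3
                              //      RETURN t4
    }

    public static void main(String[] args) {
        Shape sq = new Shape();
        sq.sides = 4;
        System.out.println(describe(sq)); // 8
    }
}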

  19. MIR • Similar to assembly code • Details of target architecture are introduced • Register Allocation is performed on MIR

  20. Project Discussion

  21. Next Time • Read the following Wikipedia pages (sections): • Graph Theory (Basics) • Basic Blocks • Control Flow Graphs
