
Adaptive Optimization in the Jalapeño JVM


Presentation Transcript


  1. Adaptive Optimization in the Jalapeño JVM
     M. Arnold et al.
     Presented by D. Spoonhower, 5 March 2003

  2. Background
     • Why JIT?
       • Performance
       • Safety
     • JIT vs. Adaptive
       • Cost/benefit analysis
       • Feedback-driven
     • JRVM: Compile-only
       • Compare to HotSpot, et al.
     • JRVM: “99%” Java
       • Modular design, simple implementation
       • Self-optimizing

  3. Jalapeño AOS Overview
     • Sampling-independent, e.g.
       • Hardware monitors
       • Invocation counters
       • Path profiles
     • Multiple optimization levels
       • Baseline – no register allocation
       • OptLevel0 – “on-the-fly”
       • OptLevel1 – flow-based
       • OptLevel2 – SSA (w/ arrays)
     • Profiling-based optimizations
     • Simple heuristics

  4. AOS Architecture
     • Runtime Measurement System
       • Raw data collection
       • “Organizers” that summarize raw data
     • Controller
       • Directs measurement, including intrusive profiling
       • Manages recompilation
     • Recompilation System
       • Plan = optimization + profiling data + instrumentation
     • AOS Database
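To make the division of labor on slide 4 concrete, here is a minimal Java sketch of how listeners, organizers, a controller, and recompilation plans could fit together. All of the names below (Listener, Organizer, RecompilationPlan, Controller) are hypothetical illustrations of the roles the slide lists, not the actual Jikes RVM classes or API.

```java
// Structural sketch only: illustrative names, not the Jikes RVM source.
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Raw-data collector attached to a sampling point (e.g. a yield point). */
interface Listener {
    void record(int methodId);
}

/** Periodically summarizes raw samples into events for the controller. */
interface Organizer extends Runnable {
    void processSamples();
}

/** What the controller hands to the recompilation system. */
final class RecompilationPlan {
    final int methodId;
    final int optLevel;             // target optimization level
    final double[] profileData;     // e.g. edge or call-site profiles
    final boolean insertInstrumentation;

    RecompilationPlan(int methodId, int optLevel,
                      double[] profileData, boolean insertInstrumentation) {
        this.methodId = methodId;
        this.optLevel = optLevel;
        this.profileData = profileData;
        this.insertInstrumentation = insertInstrumentation;
    }
}

/** Central decision maker: consumes organizer events, emits plans. */
final class Controller {
    private final Queue<RecompilationPlan> compileQueue = new ConcurrentLinkedQueue<>();

    /** Called by a hot-method organizer; percentOfTime is its sampled estimate. */
    void considerHotMethod(int methodId, double percentOfTime) {
        // The cost/benefit analysis of slides 6-7 would decide the level here.
        compileQueue.add(new RecompilationPlan(methodId, /* optLevel= */ 1, null, false));
    }

    Queue<RecompilationPlan> pendingPlans() { return compileQueue; }
}
```

One design point the sketch tries to reflect: listeners stay cheap on the running threads, organizers do the summarizing off to the side, and the controller talks to the recompilation system through a queue of plans rather than compiling inline.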

  5. Recompilation: Sampling
     • Invocation count
     • Sample at context switch
       • Yield points in prologues and back edges
       • Low overhead
     • Setup:
       • Method listener
       • Hot method & decay organizers
     • Adaptive threshold
       • Set sample size
       • Set hotness threshold
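The sketch below illustrates, with assumed names and thresholds, the sampling scheme slide 5 describes: a method listener is ticked at yield points when a thread is about to be switched out, a decay organizer ages old samples, and a hot-method organizer reports methods whose share of samples crosses a threshold. This is the shape of the mechanism, not the Jikes RVM code.

```java
// Illustrative sample-based hot-method detection; names are hypothetical.
import java.util.HashMap;
import java.util.Map;

final class MethodListener {
    // methodId -> number of samples taken at this method's yield points
    private final Map<Integer, Double> samples = new HashMap<>();
    private double totalSamples = 0;

    /** Called from a yield point (method prologue or loop back edge)
     *  when the thread is about to be context-switched. */
    void sample(int methodId) {
        samples.merge(methodId, 1.0, Double::sum);
        totalSamples++;
    }

    /** Applied periodically by a decay organizer so that old samples
     *  count for less than recent ones. */
    void decay(double factor) {
        samples.replaceAll((id, count) -> count * factor);
        totalSamples *= factor;
    }

    /** Hot-method organizer: report methods above the hotness threshold. */
    void reportHotMethods(double hotnessThreshold) {
        if (totalSamples == 0) return;   // nothing sampled yet
        for (Map.Entry<Integer, Double> e : samples.entrySet()) {
            double fraction = e.getValue() / totalSamples;
            if (fraction >= hotnessThreshold) {
                System.out.printf("method %d is hot: %.1f%% of samples%n",
                                  e.getKey(), 100 * fraction);
            }
        }
    }
}
```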

  6. Recompilation: Cost/Benefit Analysis
     For each method m currently compiled at optimization level i:
     • Ti = expected future running time of m if it stays at level i
     • Cj = cost of recompiling m at level j, for i ≤ j ≤ N
     • Tj = expected future running time of m if recompiled at level j
     Choose the level j that minimizes Cj + Tj, and recompile only if Cj + Tj < Ti.
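The decision rule on slide 6 translates almost directly into code. The sketch below assumes the estimates Ti, Cj, and Tj are already in hand (slide 7 shows where they come from); the class name and array-based representation are illustrative only.

```java
// Minimal sketch of the level-selection rule; not the paper's data structures.
final class LevelChooser {
    /**
     * @param i  current optimization level of the method
     * @param Ti estimated future running time at level i
     * @param C  C[j] = cost of recompiling at level j (valid for j in [i, N])
     * @param T  T[j] = estimated future running time at level j
     * @return the chosen level, or i if recompilation does not pay off
     */
    static int choose(int i, double Ti, double[] C, double[] T) {
        int best = i;
        double bestTotal = Ti;              // cost of doing nothing
        for (int j = i; j < C.length; j++) {
            double total = C[j] + T[j];     // recompile cost plus remaining time
            if (total < bestTotal) {
                bestTotal = total;
                best = j;
            }
        }
        return best;
    }
}
```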

  7. Recompilation: Estimates
     • Assume the program will execute for twice its current duration, so Tf (total future time) equals the time elapsed so far
     • Sampling predicts Pm, the fraction of time spent in m: Ti = Tf * Pm
     • Offline speedup measurements give Tj = Ti * Si / Sj
     • Cj comes from an offline-measured configuration of compilation costs
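Putting slides 6 and 7 together, here is a worked example with made-up numbers; the speedup table S and cost table C are hypothetical placeholders for the offline measurements the slide mentions, not values from the paper.

```java
// Hypothetical numbers only; the real speedup/cost tables are measured offline.
final class RecompilationEstimates {
    public static void main(String[] args) {
        double elapsed = 10.0;   // seconds the program has run so far
        double Tf = elapsed;     // "runs for twice current duration" => future time == past time
        double Pm = 0.30;        // sampled fraction of time spent in method m
        int i = 0;               // m is currently at the baseline level

        double[] S = {1.0, 4.0, 6.0, 7.0};    // assumed speedup of each level over baseline
        double[] C = {0.0, 0.05, 0.20, 0.80}; // assumed cost (s) to recompile m at each level

        double Ti = Tf * Pm;                  // future time spent in m at its current level
        double[] T = new double[S.length];
        for (int j = 0; j < S.length; j++) {
            T[j] = Ti * S[i] / S[j];          // Tj = Ti * Si / Sj
        }

        // Choose j minimizing Cj + Tj; recompile only if that beats Ti (doing nothing).
        int best = i;
        double bestTotal = Ti;
        for (int j = i; j < C.length; j++) {
            if (C[j] + T[j] < bestTotal) {
                bestTotal = C[j] + T[j];
                best = j;
            }
        }
        System.out.println(best == i
            ? "recompilation not worthwhile"
            : String.format("recompile at level %d: %.2fs expected vs %.2fs at level %d",
                            best, bestTotal, Ti, i));
    }
}
```

With these particular numbers the rule picks level 2 (0.20 s of compilation plus 0.50 s of projected running time beats 3.0 s of projected baseline time); shrink Pm and the same rule declines to recompile.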

  8. Feedback-directed Inlining
     • Approximate dynamic call graph
       • Listener samples in prologue
       • Caller + call site + callee
     • Adaptive threshold
       • Start high and decay
     • Estimate benefit: boost factor
       • Fraction of dynamic calls at site
       • Offline analysis of inlining
     • Correctness?
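As a rough illustration of the call-graph sampling on slide 8, the sketch below accumulates (caller, call site, callee) samples taken in callee prologues and exposes the fraction of a caller's sampled calls that flow through a given edge, which is the kind of quantity an inlining heuristic could use when computing its boost factor. Class and method names are invented for the example.

```java
// Illustrative approximate dynamic call graph; not the Jikes RVM implementation.
import java.util.HashMap;
import java.util.Map;

final class DynamicCallGraph {
    /** (callerId : bytecodeIndex : calleeId) -> sampled call count. */
    private final Map<String, Integer> edgeSamples = new HashMap<>();
    /** callerId -> total sampled calls out of that caller. */
    private final Map<Integer, Integer> callerSamples = new HashMap<>();

    /** Called by the edge listener when a sample lands in a callee prologue. */
    void sample(int callerId, int bytecodeIndex, int calleeId) {
        edgeSamples.merge(callerId + ":" + bytecodeIndex + ":" + calleeId, 1, Integer::sum);
        callerSamples.merge(callerId, 1, Integer::sum);
    }

    /** Fraction of the caller's sampled dynamic calls made at this edge;
     *  an inlining heuristic can scale its benefit estimate by this value. */
    double edgeFraction(int callerId, int bytecodeIndex, int calleeId) {
        int caller = callerSamples.getOrDefault(callerId, 0);
        if (caller == 0) return 0.0;     // no samples for this caller yet
        int edge = edgeSamples.getOrDefault(
                callerId + ":" + bytecodeIndex + ":" + calleeId, 0);
        return (double) edge / caller;
    }
}
```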

  9. Opportunities
     • New measurements?
       • Alias tracking?
     • New optimization?
       • arraycheck or castcheck?
     • New cost analysis?
       • CPU utilization?
     • Fewer configuration parameters?
     • Source available at: http://www.research.ibm.com/jalapeno/ (now called “Jikes RVM”)
