Adaptive Optimization in the Jalapeño JVM

Matthew Arnold

Stephen Fink

David Grove

Michael Hind

Peter F. Sweeney

Presentation by Michael Bond



Talk overview

  • Introduction: Background & Jalapeño JVM

  • Adaptive Optimization System (AOS)

  • Multi-level recompilation

  • Miscellaneous issues

  • Feedback-directed inlining

  • Conclusion



Background

  • Three waves of JVMs:

    • First: Compile method when first encountered; use fixed set of optimizations

    • Second: Determine hot methods dynamically and compile them with more advanced optimizations

    • Third: Feedback-directed optimizations

  • Jalapeño JVM targets third wave, but current implementation is second wave



Jalapeño JVM

  • Written in Java (core services precompiled to native code in boot image)

  • Compiles at four levels: baseline, 0, 1, & 2

    • Why three levels of optimization?

  • Compile-only strategy (no interpretation)

    • Advantages? Disadvantages?

  • Yield points for quasi-preemptive switching

    • Advantages? (Disadvantages later)



Talk progress

  • Introduction: Background & Jalapeño JVM

  • Adaptive Optimization System (AOS)

  • Multi-level recompilation

  • Miscellaneous issues

  • Feedback-directed inlining

  • Conclusion



Adaptive Optimization System



AOS: Design

  • The authors describe the AOS as a “distributed, asynchronous, object-oriented design,” which they argue is well suited to managing large amounts of profiling data.

  • Each successive pipeline stage (from raw data to compilation decisions) performs increasingly complex analysis on a decreasing amount of data. (A rough sketch of this pipeline structure follows.)
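
A minimal Java sketch of the queue-based pipeline idea described above (the class, stage names, and hotness threshold are illustrative assumptions, not the real AOS components): each stage drains a queue, applies a more expensive analysis to fewer items, and feeds the next stage, so heavier analysis never runs on the application's critical path.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    class PipelineSketch {
        final BlockingQueue<Integer> rawSamples = new LinkedBlockingQueue<>();  // many cheap events
        final BlockingQueue<Integer> hotMethods = new LinkedBlockingQueue<>();  // fewer, summarized events

        // Organizer stage: cheap summarization of a large event stream.
        final Thread organizer = new Thread(() -> {
            int[] counts = new int[1 << 16];             // per-method sample counts (illustrative size)
            try {
                while (true) {
                    int methodId = rawSamples.take();
                    if (++counts[methodId] == 10) {      // illustrative hotness threshold
                        hotMethods.put(methodId);
                    }
                }
            } catch (InterruptedException e) { /* shut down */ }
        });

        // Controller stage: runs the (more expensive) cost-benefit model on far fewer methods.
        final Thread controller = new Thread(() -> {
            try {
                while (true) {
                    int hot = hotMethods.take();
                    // ...evaluate the recompilation model for 'hot' and hand a plan to a compilation thread...
                }
            } catch (InterruptedException e) { /* shut down */ }
        });
    }

In Jalapeño the final stage is a separate compilation thread, so even recompilation itself happens off the application threads.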



Talk progress

  • Introduction: Background & Jalapeño JVM

  • Adaptive Optimization System (AOS)

  • Multi-level recompilation

  • Other issues

  • Feedback-directed inlining

  • Conclusion



Multi-level recompilation



Multi-level recompilation: Sampling

  • Sampling occurs on thread switch (see the sketch after this list)

  • Thread switch triggered by clock interrupt

  • Thread switch can occur only at yield points

  • Yield points are method invocations and loop back edges

  • Discussion: Is this approach biased?
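
A very rough Java sketch of this sampling path (the flag, counter array, and method ids are illustrative assumptions, not Jalapeño's actual thread-switch code): the clock interrupt only requests a sample, and the sample is actually taken at whichever yield point is reached next.

    class SamplingSketch {
        static final int MAX_METHODS = 1 << 16;              // illustrative bound on method ids
        static volatile boolean sampleRequested = false;     // set by the timer, consumed at a yield point
        static final int[] samples = new int[MAX_METHODS];   // per-method sample counts

        // Called from the periodic clock interrupt: do not sample here, only request one.
        static void clockTick() {
            sampleRequested = true;
        }

        // Called at every yield point (method prologue or loop back edge) in compiled code.
        static void yieldPoint(int currentMethodId) {
            if (sampleRequested) {
                sampleRequested = false;
                samples[currentMethodId]++;   // the sample is charged to the method at the yield point
                // ...then perform the actual thread switch...
            }
        }
    }

Because the sample is charged to whichever method reaches a yield point after the tick, long stretches of code with no calls or back edges are never charged directly, which is the bias the next slide illustrates.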



Multi-level recompilation: Biased sampling

[Diagram contrasting a short method containing method calls with a long method whose code has no method calls or back edges, illustrating how sampling only at yield points biases which methods get sampled]



Multi-level recompilation: Cost-benefit analysis

  • Suppose method m is currently compiled at level i; estimate:

    • Ti, the expected time the program will spend executing m if m is not recompiled

    • Cj, the cost of recompiling m at optimization level j, for i ≤ j ≤ N

    • Tj, the expected time the program will spend executing m if it is recompiled at level j

    • If, for the best j, Cj + Tj < Ti, recompile m at level j (see the sketch below)
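
A minimal Java sketch of this decision rule (names and signatures are illustrative, not Jalapeño's controller code): given the three estimates, pick the level j that minimizes Cj + Tj, and recompile only if that beats doing nothing.

    class RecompileDecisionSketch {
        // ti: expected future time in m at its current level i
        // cj[j], tj[j]: recompilation cost and expected future execution time at level j
        static int chooseRecompileLevel(double ti, double[] cj, double[] tj, int i) {
            int bestLevel = -1;               // -1 means "leave m at its current level"
            double best = ti;                 // the cost of doing nothing
            for (int j = i; j < cj.length; j++) {
                double total = cj[j] + tj[j];
                if (total < best) {           // recompile only if Cj + Tj < Ti for some j
                    best = total;
                    bestLevel = j;
                }
            }
            return bestLevel;
        }
    }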



Multi-level recompilation: Cost-benefit analysis (continued)

  • Estimate Ti :

    Ti = Tf * Pm

  • Tf is the future running time of the program

  • We estimate that the program will run for as long as it has run so far

    • Reasonable assumption?



Multi-level recompilation: Cost-benefit analysis (continued)

  • Pm is the percentage of Tf spent in m

    Pm estimated from sampling

  • Sample frequencies decay over time (sketched below).

  • Why is this a good idea?

  • Could it be a disadvantage in certain cases?
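
A rough Java sketch of deriving Pm, and from it Ti, from decayed sample counts (the decay factor, the flat counter array, and the periodic decay call are illustrative assumptions, not the paper's actual organizer):

    class SampleEstimator {
        static final double DECAY = 0.95;       // illustrative decay factor
        final double[] samples;                 // samples[m] = decayed sample count for method m

        SampleEstimator(int numMethods) { samples = new double[numMethods]; }

        void recordSample(int m) { samples[m] += 1.0; }

        // Applied periodically so that old samples count for less than recent ones.
        void decayAll() {
            for (int m = 0; m < samples.length; m++) samples[m] *= DECAY;
        }

        // Ti = Tf * Pm, where Tf is assumed to equal the time the program has already run.
        double estimateTi(int m, double secondsRunSoFar) {
            double total = 0.0;
            for (double s : samples) total += s;
            double pm = (total == 0.0) ? 0.0 : samples[m] / total;   // Pm from sample fractions
            return secondsRunSoFar * pm;
        }
    }

The decay step is what makes Pm track recent behavior rather than the entire run so far.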



Multi-level recompilation: Cost-benefit analysis (continued)

  • Statically-measured speedups Si and Sj are used to determine Tj:

    Tj = Ti * Si / Sj

  • Statically-measured speedups?!

  • Is there any way to do better?



Multi-level recompilation: Cost-benefit analysis (continued)

  • Cj (the cost of recompiling m) is estimated using a linear model of compilation cost for each optimization level:

    Cj = aj * size(m), where aj is a constant for level j (see the sketch below)

  • Is it reasonable to assume a linear model?

  • OK to use statically-determined aj?
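
Continuing the same kind of sketch, the remaining two estimates follow directly from the formulas on the last two slides (the speedup and cost constants below are made-up placeholders, not the measured values from the paper):

    class StaticEstimatesSketch {
        // S[j]: statically measured speedup of level j over baseline (placeholder numbers).
        static final double[] SPEEDUP = { 1.00, 3.00, 4.00, 4.50 };
        // a[j]: compilation cost per unit of method size at level j (placeholder numbers).
        static final double[] RATE    = { 0.00, 1.00, 2.50, 6.00 };

        // Tj = Ti * Si / Sj
        static double estimateTj(double ti, int i, int j) {
            return ti * SPEEDUP[i] / SPEEDUP[j];
        }

        // Cj = aj * size(m): compile cost grows linearly with method size
        static double estimateCj(int j, int methodSizeInBytecodes) {
            return RATE[j] * methodSizeInBytecodes;
        }
    }

Together with the Pm-based estimate of Ti, these are exactly the inputs the decision rule sketched earlier needs.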



Multi-level recompilation: Results



Multi-level recompilation: Results (continued)



Multi-level recompilation: Discussion

  • Adaptive multi-level compilation does better than a JIT at any single level in the short term.

  • But in the long run, performance is slightly worse than JIT compilation.

  • The primary target is server applications, which tend to run for a long time.



Multi-level recompilation: Discussion (continued)

  • So what’s so great about Jalapeño’s AOS?

  • Current AOS implementation gives good results for both short and long term – JIT compiler can’t do both cases well because optimization level is fixed.

  • The AOS can be extended to support feedback-directed optimizations such as

    • fragment creation (as in Dynamo)

    • determining if an optimization was effective



Talk progress

  • Introduction: Background & Jalapeño JVM

  • Adaptive Optimization System (AOS)

  • Multi-level recompilation

  • Miscellaneous issues

  • Feedback-directed inlining

  • Conclusion



Miscellaneous issues: Multiprocessing

  • Authors say that if a processor is idle, recompilation can be done almost for free.

  • Why almost for free?

  • Are there situations when you could get free recompilation on a uniprocessor?



[Humor slide: the AOS Controller and a hot method, with the captions “You’re so hot!” and “Adaptively optimize me all night long!”]



Miscellaneous issues: Models vs. heuristics

  • Authors moving toward “analytic model of program behavior” and elimination of ad-hoc tuning parameters.

  • Tuning parameters proved difficult because of “unforeseen differences in application behavior.”

  • Is it believable that ad-hoc parameters can be eliminated and replaced with models?



Miscellaneous issues: More intrusive optimizations

  • The future of Jalapeño is more intrusive optimizations, such as compiler-inserted instrumentation for profiling

  • Advantages and disadvantages compared with current system?

  • Advantages:

    • Performance gains in the long term

    • Adjusts to phased behavior

  • Disadvantages:

    • Unlike with sampling, you can’t profile all the time

    • Harder to adaptively throttle overhead



Miscellaneous issues: Stack frame rewriting

  • In the future, Jalapeño will support rewriting of a baseline stack frame with an optimized stack frame

  • Authors say that rewriting an optimized stack frame with another optimized stack frame is more difficult.

    • Why?



Talk progress

  • Introduction: Background & Jalapeño JVM

  • Adaptive Optimization System (AOS)

  • Multi-level recompilation

  • Miscellaneous issues

  • Feedback-directed inlining

  • Conclusion



Feedback-directed inlining: More cost-benefit analysis

  • Boost factor estimated:

    • Boost factor b is a function of

      • The fraction f of dynamic calls attributed to the call edge in the sampling-approximated call graph

      • Estimate s of the benefit (i.e., speedup) from eliminating virtually all calls from the program

    • Presumably something like b = f * s (sketched below).
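
Taking the slide's guess at face value, a Java sketch of the boost computation; how the AOS actually folds b into the inlining decision is not spelled out on the slide, so the second method is purely an illustrative assumption.

    class InliningBoostSketch {
        // b = f * s (the slide's guess): f is the fraction of sampled dynamic calls on this call
        // edge, s is the estimated speedup from eliminating (virtually) all calls from the program.
        static double boostFactor(double f, double s) {
            return f * s;
        }

        // Purely hypothetical use of b: inflate the expected benefit of the recompilation that
        // would inline this edge, so hotter call edges justify more expensive recompilations.
        static double boostedTj(double tj, double f, double s) {
            return tj / (1.0 + boostFactor(f, s));
        }
    }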



Feedback-directed inlining: Results

[Results chart from the paper, with two data points annotated “Why?” for discussion]



Talk progress

  • Introduction: Background & Jalapeño JVM

  • Adaptive Optimization System (AOS)

  • Multi-level recompilation

  • Other issues

  • Feedback-directed inlining

  • Conclusion



Conclusion

  • AOS designed to support feedback-directed optimizations (third wave)

  • Current AOS implementation only supports selective optimizations (second wave)

    • Improves short-term performance without hurting long-term performance

    • Uses a mix of a cost-benefit model and ad-hoc methods

  • Future work will use more intrusive performance monitoring (e.g., instrumentation for path profiling, checking that an optimization improved performance)

