
Design and Evaluation of Dynamic Optimizations for a Java Just-In-Time Compiler

Ramkrishna Vadali







  1. Design and Evaluation of Dynamic Optimizations for a Java Just-In-Time Compiler. Ramkrishna Vadali

  2. Introduction • Java Virtual Machines (JVMs) and Just-In-Time (JIT) compilers. • The paper describes the design and implementation of a dynamic optimization framework in a production-level Java JIT compiler, together with two techniques for profile-directed optimization: method inlining and code specialization. • Two-level execution model for the JVM. • Limitations of the two-level execution model.
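The two-level model pairs an interpreter with a single-level compiler: a method runs interpreted until an invocation counter crosses a fixed threshold, then is compiled once at a fixed optimization level. A minimal sketch of that idea (class and method names are illustrative, not from the paper; the threshold of 500 matches the MMI threshold quoted later in the deck):

```java
// Sketch of a two-level execution model: every method starts in the
// interpreter and is promoted to compiled code once its invocation
// count crosses a fixed threshold. Illustrative only.
import java.util.HashMap;
import java.util.Map;

class TwoLevelVM {
    static final int INVOCATION_THRESHOLD = 500;
    final Map<String, Integer> counts = new HashMap<>();
    final Map<String, Boolean> compiled = new HashMap<>();

    /** Returns "interpreted" or "compiled" for this invocation. */
    String invoke(String method) {
        if (compiled.getOrDefault(method, false)) return "compiled";
        int c = counts.merge(method, 1, Integer::sum);
        if (c >= INVOCATION_THRESHOLD) {
            compiled.put(method, true); // one-shot compile, no recompilation
        }
        return "interpreted";
    }
}
```

The limitation the slide alludes to is visible in the sketch: with one threshold and one optimization level, there is no way to trade compilation cost against code quality per method, which is what motivates the multi-level framework described next.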

  3. Background • Three major dynamic compilation systems are available with some form of automatic, profile-driven adaptive optimization: • Intel Open Runtime Platform • Jikes RVM • HotSpot Server

  4. Criteria for evaluating the characteristics of each system • Whether the system takes a compile-only approach or uses an interpreter to allow a mixed execution environment with interpreted and compiled code. • How the system monitors the application program to promote methods from a lower optimization level to a higher one. • What profile information the system collects online for the higher optimization levels to exploit, and how it does so.

  5. Dynamic Optimization Framework • Goals of dynamic optimization system

  6. Discussion • This figure summarizes the four execution modes.

  7. This figure shows both compilation overhead and performance impact with three different policies.

  8. Evaluation Of Dynamic Optimization Framework • Evaluation Methodology

  9. The following parameters are used in the experiments: • The threshold in the mixed mode interpreter to initiate the first dynamic compilation (with level-1 optimization) was set to 500. • The timer interval for the sampling profiler for detecting hot methods was 3 milliseconds. The controller examined the list of hot methods every 200 sampling ticks for recompilation decisions. The decay parameter was set to 0.3. • No profile-directed optimization was employed in these measurements, and thus no instrumentation profiling code was generated and installed. • The priority of the sampling profiler thread and the compilation thread was set above that of the application threads.
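The sampling parameters above can be read as a simple control loop: each timer tick credits the currently executing method, every 200 ticks the controller promotes methods whose credit exceeds a hotness threshold, and all credits are then decayed. A hedged sketch of that loop (the examine interval and decay constant come from the slide; how the decay is applied, and the threshold value, are assumptions for illustration):

```java
// Sketch of the sampling profiler plus recompilation controller:
// every tick records the executing method; every EXAMINE_INTERVAL
// ticks the controller queues hot methods for recompilation and
// decays all counters. Decay application is an illustrative guess.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class RecompilationController {
    static final int EXAMINE_INTERVAL = 200; // sampling ticks per decision
    static final double DECAY = 0.3;         // decay parameter from the slide
    final Map<String, Double> hotness = new HashMap<>();
    final List<String> recompileRequests = new ArrayList<>();
    int ticks = 0;

    void sampleTick(String executingMethod, double hotThreshold) {
        hotness.merge(executingMethod, 1.0, Double::sum);
        if (++ticks % EXAMINE_INTERVAL == 0) {
            for (Map.Entry<String, Double> e : hotness.entrySet()) {
                if (e.getValue() >= hotThreshold) {
                    recompileRequests.add(e.getKey());
                }
            }
            // Decay so that only persistently hot methods stay promoted.
            hotness.replaceAll((m, v) -> v * DECAY);
        }
    }
}
```

Sampling instead of counting every invocation is what keeps the monitoring overhead low enough to run continuously, which the evaluation sections below depend on.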

  10. The following compilation schemes were compared: (1) No-MMI configurations (compile-only approach): — Level-1 optimization after level-0 compilation (L0-L1), — Level-2 optimization after level-0 compilation (L0-L2), — Level-3 optimization after level-0 compilation (L0-L3), — Level-1 to 3 optimizations for adaptive recompilation after level-0 compilation (L0-all). (2) MMI configurations: — Level-1 optimization with MMI (MMI-L1), — Level-2 optimization with MMI (MMI-L2), — Level-3 optimization with MMI (MMI-L3), — Level-1 to 3 optimizations for adaptive recompilation with MMI (MMI-all).

  11. Application Startup Time Performance

  12. Steady State Performance • The figure shows the performance and compilation overhead in the steady-state program runs.

  13. Profile-Directed Optimization • Dynamic instrumentation profiler: The figure shows a graphical example of when instrumentation code is generated and installed in the target compiled code.
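The key property of the dynamic instrumentation profiler is that the instrumentation is transient: it is patched into compiled code, collects a bounded number of samples, and then removes itself so steady-state execution pays no overhead. A minimal sketch of that lifecycle (the class and its fields are illustrative; the real system patches machine code rather than flipping a flag):

```java
// Sketch of transient instrumentation: a profiling hook installed in
// compiled code records a bounded number of samples, then uninstalls
// itself so the hot path runs unperturbed afterward. Illustrative only.
import java.util.ArrayList;
import java.util.List;

class InstrumentingProfiler {
    final int maxSamples;
    final List<Object> samples = new ArrayList<>();
    boolean installed = true; // stands in for the code patch being present

    InstrumentingProfiler(int maxSamples) {
        this.maxSamples = maxSamples;
    }

    /** Called from the instrumented entry of the compiled method. */
    void record(Object value) {
        if (!installed) return; // patch already removed: zero-cost path
        samples.add(value);
        if (samples.size() >= maxSamples) {
            installed = false; // self-uninstall: restore the original code
        }
    }
}
```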

  14. Profile-Directed Method Inlining • Upon completion of collecting the call-site information, the decision on requesting method inlining proceeds through the following steps: • Partial call graph construction • Exact call path identification • Method inlining request • Hotness count adjustment • Recompilation request
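The first and third of the steps above can be sketched concretely: the sampled call sites form a partial call graph, and edges that dominate a caller's outgoing calls become inlining requests. This is a minimal sketch under stated assumptions (the hot-edge ratio test and all names are illustrative; the paper's exact selection heuristic, path identification, and hotness adjustment are not reproduced here):

```java
// Sketch of the inlining decision: call-site samples build a partial
// call graph; edges whose frequency dominates the caller's outgoing
// calls become inlining requests. Heuristic is illustrative.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class InliningDecision {
    // caller -> (callee -> observed call count from the profile)
    final Map<String, Map<String, Integer>> callGraph = new HashMap<>();

    void recordCall(String caller, String callee) {
        callGraph.computeIfAbsent(caller, k -> new HashMap<>())
                 .merge(callee, 1, Integer::sum);
    }

    /** Returns the callees requested for inlining into the caller. */
    List<String> inliningRequests(String caller, double hotEdgeRatio) {
        Map<String, Integer> edges =
            callGraph.getOrDefault(caller, new HashMap<>());
        int total = edges.values().stream().mapToInt(Integer::intValue).sum();
        List<String> requests = new ArrayList<>();
        for (Map.Entry<String, Integer> e : edges.entrySet()) {
            if (total > 0 && (double) e.getValue() / total >= hotEdgeRatio) {
                requests.add(e.getKey());
            }
        }
        return requests;
    }
}
```

After the requests are issued, the caller's hotness count is adjusted and a recompilation request is queued, so the next compile of the caller can perform the inlining.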

  15. Dynamic Code Specialization • Enabled only at the highest optimization level, based on the profile data collected from the previous version of the compiled code. • Impact analysis.

  16. Factors currently considered in the impact analysis include the following: • A constant value of a primitive. • An exact object type. • The length of an array object. • The type of an object such as null, non-null, normal object, or array object. • Equality of two parameter objects. • The thread locality of objects.
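One of the factors above, the length of an array object, makes a compact worked example: if profiling shows a dominant length, the compiler can emit a specialized body guarded by a runtime check, falling back to the general code otherwise. The method names and the unrolled body are illustrative, not taken from the paper:

```java
// Sketch of code specialization on one impact-analysis factor: the
// length of an array parameter. A guard selects the specialized body
// when the profiled length matches; otherwise the general code runs.
class SpecializedSum {
    static final int SPECIALIZED_LEN = 4; // dominant length per the profile

    static int sum(int[] a) {
        if (a.length == SPECIALIZED_LEN) {
            // Specialized body: the known length lets the compiler fully
            // unroll the loop and eliminate bounds checks.
            return a[0] + a[1] + a[2] + a[3];
        }
        return sumGeneral(a); // fallback for all other lengths
    }

    static int sumGeneral(int[] a) {
        int s = 0;
        for (int x : a) s += x;
        return s;
    }
}
```

The impact analysis exists precisely to decide whether a guard like this pays off: specialization is only worthwhile when the guarded value actually unlocks optimizations in the body.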

  17. Results • This section shows the impact on both performance and compilation overhead when applying these profile-directed optimizations, both separately and in combination. All the measurement conditions are the same as those described earlier. Other conditions specific to the measurements here are as follows. • For the instrumentation-based profiling of hot methods, a maximum of 10,000 values were collected for each of the target parameters, global variables, or return addresses. The maximum number of data variations recorded was 8. • The number of code duplications allowed for specialization was set to one at a time, regardless of the target method code size. • The maximum number of level-3 compilations for the same method was set to three. This applies to both profile-directed inlining and code specialization with different profile information.
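The two profiling bounds quoted above (at most 10,000 samples per target, at most 8 distinct values recorded) can be sketched as a bounded value profile; distinct values beyond the cap are lumped into an overflow count. The class structure is an illustrative assumption, only the two constants come from the slide:

```java
// Sketch of bounded value profiling: at most MAX_SAMPLES observations
// per target, and at most MAX_VARIATIONS distinct values tracked;
// further distinct values fall into an "other" bucket. Illustrative.
import java.util.LinkedHashMap;
import java.util.Map;

class ValueProfile {
    static final int MAX_SAMPLES = 10_000; // from the slide
    static final int MAX_VARIATIONS = 8;   // from the slide
    final Map<Object, Integer> counts = new LinkedHashMap<>();
    int otherCount = 0;
    int samples = 0;

    void record(Object value) {
        if (samples >= MAX_SAMPLES) return; // sampling budget exhausted
        samples++;
        if (counts.containsKey(value) || counts.size() < MAX_VARIATIONS) {
            counts.merge(value, 1, Integer::sum);
        } else {
            otherCount++; // too many variations: value is not specializable
        }
    }
}
```

Bounding both dimensions keeps the profiler's memory and time costs fixed: a target whose "other" bucket dominates is simply too polymorphic to specialize on.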

  18. Comparisons of both performance and compilation overhead

  19. Future Work • The plan is to further refine the system to improve the cost and benefit of the profile-directed optimizations. For example, the system so far considers only the relative strengths and distributions of the call edges when driving profile-directed inlining. • In the long term, however, the essential problem for dynamic optimization systems is deciding whether the optimization effort can be offset by the performance benefit for a given program. • The problem here is that the same set of optimizations is applied equally to all methods selected for compilation at a given optimization level, regardless of the type and characteristics of each method. It would be better for total cost and benefit management if we could not only selectively apply optimizations to performance-critical methods, but also selectively assemble or customize a set of optimizations depending on the characteristics of the target methods, so that only those optimizations known to be effective for the given methods are applied.

  20. Related Work • Dynamic Optimization Systems • Instrumentation • Profile-Directed Method Inlining

  21. Conclusions • The paper presents the design and implementation of a dynamic optimization framework consisting of a mixed mode interpreter, a dynamic compiler with three optimization levels, a sampling profiler, a recompilation controller, and an instrumenting profiler. The experimental results show that the system can effectively initiate each level of optimization, and can achieve high performance and low compilation overhead in both program startup and steady-state measurements in comparison to other configurations, including those with the compile-only approach. Owing to its zero compilation cost, the MMI allowed an efficient recompilation system by setting appropriate trade-off levels for each transition between optimizations.

  22. Questions
