
International Symposium on Computer Architecture, June 2008. Beijing, China.

Alex Shye, Berkin Ozisikyilmaz, Arindam Mallik, Gokhan Memik, Peter A. Dinda, Robert P. Dick, and Alok N. Choudhary Northwestern University, EECS. Learning and Leveraging the Relationship between Architecture-Level Measurements and Individual User Satisfaction.


Presentation Transcript


  1. Alex Shye, Berkin Ozisikyilmaz, Arindam Mallik, Gokhan Memik, Peter A. Dinda, Robert P. Dick, and Alok N. Choudhary Northwestern University, EECS Learning and Leveraging the Relationship between Architecture-Level Measurements and Individual User Satisfaction International Symposium on Computer Architecture, June 2008. Beijing, China.

  2. Overall Summary Findings/Contributions • User satisfaction is correlated to CPU performance • User satisfaction is non-linear, application-dependent, and user-dependent • We can use hardware performance counters to learn and leverage user satisfaction to optimize power consumption while maintaining satisfaction • Claim: Any optimization ultimately exists to satisfy the end user • Claim: Current architectures largely ignore the individual user

  3. Why care about the user? • User-centric applications • Optimization opportunity • Architectural trade-offs exposed to the user • User variation = optimization potential

  4. Performance vs. User Satisfaction • What is the relationship between your favorite metric (IPS, throughput, etc.) and user satisfaction?

  5. Current Architectures • The performance level is chosen with no knowledge of the resulting user satisfaction

  6. Our Goal • Learn the relationship between user satisfaction and hardware performance • Leverage that knowledge to optimize the performance level

  7. Measuring Performance • Hardware performance counters are supported on all modern processors • Low overhead • Non-intrusive • WinPAPI interface; 100Hz • For each HPC: • Maximum • Minimum • Standard deviation • Range • Average
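
For concreteness, here is a minimal Python sketch (not the study's WinPAPI code) of how each counter's sample stream can be reduced to the five statistics listed above; the sample values are made up.

```python
import statistics

def summarize_counter(samples):
    """Reduce periodic readings of one hardware performance counter
    (e.g., 100 Hz samples) to the five summary statistics used as features."""
    return {
        "maximum": max(samples),
        "minimum": min(samples),
        "stdev": statistics.pstdev(samples),
        "range": max(samples) - min(samples),
        "average": statistics.fmean(samples),
    }

# Hypothetical readings of one counter over a sampling window
print(summarize_counter([1200, 1350, 1100, 1500, 1280]))
```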

  8. User Study Setup • IBM ThinkPad T43p • Pentium M with Intel SpeedStep • Supports 6 frequencies (800 MHz to 2.2 GHz) • Two user studies: • 20 users each • First to learn about user satisfaction • Second to show we can leverage user satisfaction • Three multimedia/interactive applications: • Java game: a first-person-shooter tank game • Shockwave: a 3D Shockwave animation • Video: DVD-quality MPEG video

  9. First User Study • Goal: • Learn relationship between HPCs and user satisfaction • How: • Randomly change performance/frequency • Collect HPCs • Ask the user for their satisfaction rating!
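
The shape of one trial in this study might look like the following sketch; the three callables are hypothetical hooks standing in for the real platform and rating prompt, and the frequency list is assumed.

```python
import random

FREQUENCIES_MHZ = [800, 1000, 1300, 1600, 1900, 2200]  # assumed SpeedStep levels

def run_trial(set_cpu_frequency, collect_hpc_stats, ask_satisfaction):
    """One trial: pick a random frequency, gather counter statistics while the
    user interacts with the application, then ask for a satisfaction rating."""
    freq = random.choice(FREQUENCIES_MHZ)
    set_cpu_frequency(freq)
    hpc_stats = collect_hpc_stats()      # e.g., {counter: summarize_counter(samples)}
    rating = ask_satisfaction()          # the user's rating on the study's scale
    return {"frequency_mhz": freq, "hpcs": hpc_stats, "satisfaction": rating}
```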

  10. Correlation to HPCs • Compare each set of HPC values with user satisfaction ratings • Collected 360 satisfaction ratings (20 users, 6 frequencies, 3 applications) • 45 metrics per satisfaction rating • Pearson’s product-moment correlation coefficient (r) • -1: negative linear correlation, 1: positive linear correlation • Strong correlation: 21 of the 45 metrics have an r value over 0.7
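
As an illustration, Pearson's r between one HPC-derived metric and the satisfaction ratings can be computed as below (made-up numbers, not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient between two samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    std_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (std_x * std_y)

# Illustrative only: one metric's values vs. user satisfaction ratings
metric_values = [3.1, 2.4, 4.0, 1.2, 3.6]
satisfaction = [4, 3, 5, 1, 4]
print(pearson_r(metric_values, satisfaction))  # near +/-1 means strong linear correlation
```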

  11. Correlation to the Individual User • Combine all user data • Fit into a neural network • Inputs: HPCs and user ID • Output: user satisfaction • Observe the relative importance of each input • The user ID is more than twice as important as the second-most important factor • User satisfaction is highly user-specific!
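
One way to reproduce this kind of analysis is sketched below, using scikit-learn, random placeholder data, and permutation importance as a stand-in for the slide's relative importance factor; none of this is necessarily the authors' network or importance measure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.random((360, 46))                  # 45 HPC metrics + 1 user-ID column (assumed encoding)
y = rng.integers(1, 6, 360).astype(float)  # placeholder satisfaction ratings

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)

# Permutation importance as a proxy for how much each input drives the prediction
imp = permutation_importance(net, X, y, n_repeats=10, random_state=0)
print("user-ID column importance:", imp.importances_mean[-1])
```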

  12. Performance vs. User Satisfaction • User satisfaction is often non-linear • User satisfaction is application-specific • Most importantly, user satisfaction is user-specific

  13. Leveraging User Satisfaction • Observations: • User satisfaction is non-linear • User satisfaction is application-dependent • User satisfaction is user-dependent • All three represent optimization potential! • Based on these observations, we construct Individualized DVFS (iDVFS) • Dynamic voltage and frequency scaling (DVFS) is effective for reducing power consumption • Common DVFS schemes (e.g., Windows XP DVFS, Linux ondemand governor) are based on CPU utilization

  14. Individualized DVFS (iDVFS) • Learning/modeling stage: build a correlation network (a user-aware performance prediction model) from hardware counter stats and user satisfaction feedback • Runtime power management stage: feed hardware counter stats into the model for predictive, user-aware dynamic frequency scaling

  15. iDVFS – Learning/Modeling • Train per-user and per-application • Small training set! • Two modifications to neural network training: • Limit inputs to the two highest-correlation HPCs (BTAC_M-average and TOT_CYC-average) • Repeat training and keep the most accurate NN
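
A sketch of both modifications, assuming a scikit-learn regressor and placeholder per-user training data (the real system trains its own neural network on BTAC_M-average and TOT_CYC-average):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Tiny per-user, per-application training set: one sample per frequency level,
# with only the two highest-correlation inputs. Values here are placeholders.
X = np.random.default_rng(1).random((6, 2))   # [BTAC_M-average, TOT_CYC-average]
y = np.array([5.0, 5.0, 4.0, 4.0, 3.0, 2.0])  # illustrative satisfaction ratings

def train_best(X, y, n_restarts=10):
    """Repeat training from different initializations and keep the most accurate net."""
    best_net, best_err = None, float("inf")
    for seed in range(n_restarts):
        net = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000,
                           random_state=seed).fit(X, y)
        err = float(np.mean((net.predict(X) - y) ** 2))
        if err < best_err:
            best_net, best_err = net, err
    return best_net

model = train_best(X, y)
```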

  16. iDVFS – Control Algorithm • ρ: user satisfaction tradeoff threshold • αf: per-frequency threshold • M: maximum user satisfaction • Greedy approach • Make a prediction every 500 ms • If the predicted satisfaction is within αf·ρ of M twice in a row, decrease the frequency • If not, increase the frequency and decrease αf to prevent ping-ponging between frequencies
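
A sketch of this greedy loop as described on the slide; the platform hooks, the frequency list, and the αf update factor are assumptions, not the paper's exact implementation.

```python
import time

FREQUENCIES_MHZ = [800, 1000, 1300, 1600, 1900, 2200]   # assumed available steps

def idvfs_loop(predict_satisfaction, set_frequency, M, rho, interval_s=0.5):
    """Greedy iDVFS control: step the frequency down while the predicted
    satisfaction stays close to the maximum M, step up (and tighten alpha_f)
    when it does not."""
    alpha = {f: 1.0 for f in FREQUENCIES_MHZ}    # per-frequency threshold alpha_f
    idx = len(FREQUENCIES_MHZ) - 1               # start at the highest frequency
    ok_in_a_row = 0
    while True:
        freq = FREQUENCIES_MHZ[idx]
        set_frequency(freq)
        time.sleep(interval_s)                   # make a prediction every 500 ms
        predicted = predict_satisfaction()       # neural-network output for current HPCs
        if predicted >= M - alpha[freq] * rho:   # within alpha_f * rho of the maximum
            ok_in_a_row += 1
            if ok_in_a_row >= 2 and idx > 0:     # satisfied twice in a row: step down
                idx -= 1
                ok_in_a_row = 0
        else:                                    # not satisfied: step back up and
            alpha[freq] *= 0.5                   # shrink alpha_f (assumed factor) to
            if idx < len(FREQUENCIES_MHZ) - 1:   # avoid ping-ponging between frequencies
                idx += 1
            ok_in_a_row = 0
```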

  17. Second User Study • Goal: • Evaluate iDVFS with real users • How: • Users use each application with iDVFS and with Windows XP DVFS, in random order • Afterwards, users are asked to rate each one • Frequency logs are kept throughout the experiments • Logs are replayed through a National Instruments DAQ to measure system power

  18. Example Trace: Shockwave • iDVFS can scale frequency effectively based on user satisfaction • In this case, we slightly decrease power compared to Windows DVFS

  19. Example: Video • iDVFS significantly reduces power consumption • Here, CPU utilization is not equal to user satisfaction

  20. Results – Video • No change in user satisfaction, significant power savings

  21. Results – Java • Same user satisfaction, same power savings • Red: users who gave high ratings to lower frequencies • Dashed black: cases where the neural network model performed poorly

  22. Results – Shockwave • Lowered user satisfaction, improved power consumption • Blue: users who gave constant ratings during training

  23. Energy-Satisfaction Product • Slight increase in ESP • Under ESP, the benefits of energy reduction outweigh the loss in user satisfaction

  24. Conclusion • We explore user satisfaction relative to actual hardware performance • Show correlation between HPCs and user satisfaction for interactive applications • Show that user satisfaction is generally non-linear, application-specific, and user-specific • Demonstrate an example of leveraging user satisfaction to reduce power consumption by over 25%

  25. Thank you • Questions? • For more information, please visit: • http://www.empathicsystems.org
