
Implications of Emerging Hardware


Presentation Transcript


1. Implications of Emerging Hardware
   Tom Wenisch (University of Michigan), Nikos Hardavellas (Northwestern University), Sangyeun Cho (University of Pittsburgh), Kirk Pruhs (University of Pittsburgh), Phillip Gibbons (Intel Labs), Stavros Harizopoulos (HP Labs), Spiros Papadimitriou (Google Research), Ashwin Kumar Kayyoor (University of Maryland), Xiaorui Wang (University of Tennessee)

2. Emerging Memory Technologies
   • Observation
     • Existing buffer pool and storage management are optimized for slow disks and volatile DRAM
     • Emerging memories change the power/energy/reliability trade-offs, e.g., access latency, energy per access, access granularity, wearout, non-volatility
   • Action: rethink data management to exploit these devices (a cost-model sketch follows this slide)
     • e.g., new index structures (access granularity)
     • e.g., new recovery mechanisms (non-volatility)
     • e.g., jointly optimize for energy & performance (energy/access)
     • e.g., new query processing strategies (access latency)
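
As one concrete reading of "jointly optimize for energy & performance," the minimal sketch below scores two memory tiers with a blended latency/energy cost and picks a placement. The `MemoryTier`/`access_cost` names, the device parameters, and the weighting are illustrative assumptions, not figures or an API from the talk.

```python
from dataclasses import dataclass

@dataclass
class MemoryTier:
    name: str
    read_latency_ns: float
    write_latency_ns: float
    energy_per_access_nj: float

def access_cost(tier: MemoryTier, reads: int, writes: int,
                energy_weight: float = 0.5) -> float:
    """Blend time and energy into one score; energy_weight is in [0, 1]."""
    time_cost = reads * tier.read_latency_ns + writes * tier.write_latency_ns
    energy_cost = (reads + writes) * tier.energy_per_access_nj
    return (1 - energy_weight) * time_cost + energy_weight * energy_cost

# Illustrative device parameters (assumed, not measured).
dram = MemoryTier("DRAM", 60, 60, 1.0)
nvm  = MemoryTier("NVM", 120, 300, 0.3)   # slower, especially on writes, but cheaper per access

# Pick the cheaper tier for a read-mostly index.
workload = {"reads": 1_000_000, "writes": 10_000}
best = min((dram, nvm), key=lambda t: access_cost(t, **workload))
print(f"Place the read-mostly index in: {best.name}")
```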

3. Data Placement
   • Observation: “Not all memory addresses are created equal”
     • Memory/storage is a major and growing piece of the power breakdown
     • Devices and power modes create heterogeneity
     • Power management is exposed to software
     • However, current practice is oblivious to hardware power knobs
     • Where data is placed within and across nodes impacts efficiency
   • Action: proactively place data to optimize for energy (see the placement sketch below)
     • e.g., consider moving computation to the data
     • e.g., cluster data with similar locality to enable power down
     • e.g., consider trading compute for storage (compression)
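
The sketch below illustrates the "cluster data with similar locality to enable power down" bullet: a greedy, hottest-first first-fit placement that packs frequently accessed items onto as few memory ranks as possible so the remaining ranks stay cold. The rank count, capacities, table names, and access counts are made-up example values.

```python
def place_by_heat(items, rank_capacity, num_ranks):
    """items: list of (item_id, access_count, size_gb).
    Greedy first-fit, hottest first, so hot data clusters on a few ranks
    and the cold remainder becomes a power-down candidate."""
    ranks = [{"used": 0, "items": []} for _ in range(num_ranks)]
    for item_id, accesses, size in sorted(items, key=lambda x: -x[1]):
        for rank in ranks:                      # first-fit: fill earlier (hotter) ranks first
            if rank["used"] + size <= rank_capacity:
                rank["items"].append((item_id, accesses))
                rank["used"] += size
                break
    return ranks

# Made-up tables with access counts and sizes (GB).
items = [("orders", 9000, 4), ("lineitem", 7000, 8),
         ("archive_2009", 12, 8), ("old_logs", 3, 6)]

for i, rank in enumerate(place_by_heat(items, rank_capacity=16, num_ranks=2)):
    heat = sum(acc for _, acc in rank["items"])
    state = "keep active" if heat > 100 else "power-down candidate"
    print(f"rank {i}: {[name for name, _ in rank['items']]} -> {state}")
```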

4. Specialization
   • Observation
     • Hardware specialization provides the greatest leverage for efficiency
     • Hardware is moving towards specialization: in the chip (dark silicon), in the system (e.g., GPUs), in the cluster (e.g., wimpy nodes)
     • More examples: mobile CPUs, embedded cores, GPUs, SIMD engines, vector units, reconfigurable hardware, wimpy nodes, etc.
   • Action: software should influence how hardware specializes
     • Identify important specializations (in particular for energy)
   • Action: software should embrace specialized hardware
     • Devise techniques to map/migrate/schedule tasks at the correct grain (a toy scheduler sketch follows)
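
A toy version of the map/schedule idea, assuming a hypothetical machine with a big core, a wimpy core, and a GPU: pick the lowest-energy target that still meets a deadline. The speed/power figures and the GPU model (it only accelerates the parallel portion of a task) are simplifying assumptions made up for this sketch.

```python
# Assumed per-target characteristics: (relative speed, watts). Numbers are illustrative.
TARGETS = {
    "big_core":   (1.0, 30.0),
    "wimpy_core": (0.3, 4.0),
    "gpu":        (4.0, 80.0),
}

def best_target(work_units: float, parallel_fraction: float, deadline: float) -> str:
    """Lowest-energy target that still meets the deadline (falls back to the fastest)."""
    candidates, fastest = [], None
    for name, (speed, watts) in TARGETS.items():
        if name == "gpu":
            # Simplifying assumption: the GPU only accelerates the parallel portion.
            speed = speed * parallel_fraction + 0.5 * (1 - parallel_fraction)
        runtime = work_units / speed
        energy = runtime * watts
        if fastest is None or runtime < fastest[0]:
            fastest = (runtime, name)
        if runtime <= deadline:
            candidates.append((energy, name))
    return min(candidates)[1] if candidates else fastest[1]

print(best_target(work_units=100, parallel_fraction=0.95, deadline=150))  # data-parallel scan -> gpu
print(best_target(work_units=30,  parallel_fraction=0.05, deadline=150))  # small serial task -> wimpy_core
```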

5. QoS Resiliency
   • Observation
     • Data processing tasks have variable QoS demands, with respect to latency/throughput and with respect to data quality
     • Hardware knobs can trade QoS for energy efficiency
     • However, these knobs are susceptible to QoS and efficiency cliffs
   • Action: carefully trade QoS for energy
     • Need to design interfaces to express QoS objectives
     • Optimizers must tune hardware knobs to the lowest power that meets QoS (a feedback-control sketch follows)
     • However, algorithms must provide robust QoS in the face of unexpectedly and rapidly changing demand
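
A minimal sketch of "lowest power that meets QoS": a feedback loop over a hypothetical DVFS-like knob that steps power down while latency stays comfortably under target and jumps back up as soon as demand spikes. The frequency levels and the latency model are stand-in assumptions, not a real hardware interface.

```python
FREQ_LEVELS = [1.0, 1.5, 2.0, 2.5, 3.0]   # GHz, illustrative DVFS-like knob settings

def observed_latency_ms(load_qps: float, freq_ghz: float) -> float:
    """Stand-in latency model: latency grows with load and shrinks with frequency."""
    return 2.0 + load_qps / (400.0 * freq_ghz)

def control_step(level: int, load_qps: float, target_ms: float) -> int:
    lat = observed_latency_ms(load_qps, FREQ_LEVELS[level])
    if lat > target_ms:                          # QoS miss: step power up immediately
        return min(level + 1, len(FREQ_LEVELS) - 1)
    if lat < 0.8 * target_ms and level > 0:      # comfortably under target: relax toward low power
        return level - 1
    return level                                 # near the cliff: hold, to avoid oscillation

level = len(FREQ_LEVELS) - 1
for load in [200, 200, 200, 1500, 1500, 300, 300, 300]:   # sudden demand spike, then calm
    level = control_step(level, load, target_ms=4.0)
    print(f"load={load:4d} qps -> {FREQ_LEVELS[level]} GHz")
```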

6. Energy-Constrained Data Management
   • Observation
     • Proliferation of high-capability, energy-constrained devices (smart phones)
     • Local computation and communication both cost energy
   • Action: find energy-minimal client + cloud solutions
     • Partition data storage and processing between “client” and “cloud” (a back-of-envelope sketch follows)
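
A back-of-envelope sketch of the client/cloud partitioning decision for a compute-heavy task over data held on the device: run it locally, or upload the input and download only the result. All energy coefficients are placeholder assumptions for a hypothetical phone, chosen only to illustrate the trade-off.

```python
CPU_J_PER_MB   = 12.0   # assumed energy for a compute-heavy operator on the phone
RADIO_J_PER_MB = 5.0    # assumed energy to move 1 MB over cellular, either direction
RADIO_TAIL_J   = 2.0    # assumed fixed cost of waking up the radio

def run_on_client(data_mb: float) -> float:
    return data_mb * CPU_J_PER_MB

def offload_to_cloud(data_mb: float, result_fraction: float) -> float:
    """Upload the input, compute in the cloud (free to the phone), download the result."""
    return (RADIO_TAIL_J + data_mb * RADIO_J_PER_MB
            + data_mb * result_fraction * RADIO_J_PER_MB)

def plan(data_mb: float, result_fraction: float) -> str:
    local, remote = run_on_client(data_mb), offload_to_cloud(data_mb, result_fraction)
    return "offload to cloud" if remote < local else "run on client"

print(plan(data_mb=10.0, result_fraction=0.05))   # large input, small result -> offload
print(plan(data_mb=0.2,  result_fraction=0.05))   # tiny input: radio tail dominates -> local
```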
