Presentation Transcript

Benchmarking for [Physical] Synthesis

Igor Markov and Prabhakar Kudva

The Univ. of Michigan / IBM

In This Talk …
  • Benchmarking vs benchmarks
  • Benchmarking exposes new research questions
  • Why industry should care about benchmarking
  • What is (and is not) being done to improve benchmarking infrastructure
  • Not in this talk, but in a focus group
    • Incentives for verifying published work
    • How to accelerate a culture change
Benchmarking
  • Design benchmarks
    • Data model / representation; Instances
  • Objectives (QOR metrics) and constraints
    • Algorithms, methodologies; Implementations
  • Solvers: ditto
  • Empirical and theoretical analyses, e.g.,
    • Hard vs easy benchmarks (regardless of size)
    • Correlation between different objectives
    • Upper / lower bounds for QOR, statistical behavior, etc.
  • Dualism between benchmarks and solvers
  • For more details, see http://gigascale.org/bookshelf
Industrial Benchmarking
  • Growing size & complexity of VLSI chips
    • Design objectives
      • Area / power / yield / etc.
    • Design constraints
      • Timing / FP + fixed-die partitions / fixed IPs / routability / pin access / signal integrity …
  • Can the same algorithm excel in all contexts?
  • The sophistication of layout and logic motivates open benchmarking for Synthesis and P&R
Design Types
  • ASICs
    • Lots of fixed I/Os, few macros, millions of standard cells
    • Design densities: 40-80% (IBM)
    • Flat and hierarchical designs
  • SoCs
    • Many more macro blocks, cores
    • Datapaths + control logic
    • Can have very low design densities: < 20%
  • Microprocessor (µP) Random Logic Macros (RLM)
    • Hierarchical partitions are LS+P&R instances (5-30K)
    • High placement densities: 80%-98% (low whitespace)
    • Many fixed I/Os, relatively few standard cells
    • Note: “Partitioning with Terminals,” DAC ’99, ISPD ’99, ASP-DAC ’00
Why Invest in Benchmarking
  • Academia
    • Benchmarks can identify / capture new research problems
    • Empirical validation of novel research
    • Open-source tools/BMs can be analyzed and tweaked
  • Industry
    • Evaluation and transfer of academic research
    • Support for executive decisions (which tools are relatively weak & must be improved)
    • Open-source tools/BMs can be analyzed and tweaked
  • When is an EDA problem (not) solved?
    • Are there good solver implementations?
    • Can they “solve” existing benchmarks?
Participation / Leadership Necessary
  • Activity 1: Benchmarking platform / flows
  • Activity 2: Establishing common evaluators
    • Static timing analysis
    • Congestion / yield prediction
    • Power estimation
  • Activity 3: Standard-cell libraries
  • Activity 4: Large designs with bells & whistles
  • Activity 5: Automation of benchmarking
Activity 1: Benchmarking Platform
  • Benchmarking “platform”: a reasonable subset of
    • data model
    • specific data representations (e.g., file formats)
    • access mechanisms (e.g., APIs)
    • reference implementation (e.g., a design database)
    • design examples in compatible formats (a data-model sketch follows this list)
  • Base platforms available (next slide)
  • More participation necessary
    • regular discussions
    • additional tasks / features outlined
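
To make the data-model item above concrete, here is a minimal sketch of the kind of in-memory netlist model such a platform might standardize on. All class and field names are hypothetical illustrations, not drawn from any existing platform (e.g., OpenAccess):

```python
# Minimal sketch of a netlist data model a benchmarking platform might
# standardize. All names here are hypothetical, not from any real platform.
from dataclasses import dataclass, field

@dataclass
class Cell:
    name: str
    width: float            # footprint in layout units
    height: float
    movable: bool = True    # fixed macros / pads are not movable

@dataclass
class Net:
    name: str
    # pins as (cell_name, x_offset, y_offset) relative to the cell origin
    pins: list = field(default_factory=list)

@dataclass
class Design:
    cells: dict = field(default_factory=dict)   # cell name -> Cell
    nets: list = field(default_factory=list)

    def add_cell(self, cell):
        self.cells[cell.name] = cell

    def add_net(self, net):
        self.nets.append(net)
```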
Common Methodology Platform

Common Model

(Open Access?)

Synthesis

(SIS, MVSIS…)

Blif  Bookshelf format

Placement

(Capo, Dragon, Feng Shui, mPl,…)

Blue Flow exists, Common model hooks: To be Done

Placement Utilities

http://vlsicad.eecs.umich.edu/BK/PlaceUtils/

  • Accept input in the GSRC Bookshelf format
  • Format converters (a sketch of reading a Bookshelf .aux file follows this list)
    • LEF/DEF → Bookshelf
    • Bookshelf → Kraftwerk (DAC ’98 BP, Eisenmann & Johannes)
    • BLIF (SIS) → Bookshelf
  • Evaluators, checkers, postprocessors and plotters
    • Contributions in these categories are welcome
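
As one concrete point of reference for these converters, below is a minimal sketch of reading a GSRC Bookshelf .aux file, which lists a benchmark's component files on a single line. The directory layout and benchmark name in the usage comment are hypothetical:

```python
# Minimal sketch: read a GSRC Bookshelf .aux file, which lists the
# benchmark's component files on one line, e.g.
#   RowBasedPlacement : design.nodes design.nets design.wts design.pl design.scl
from pathlib import Path

def read_aux(aux_path):
    """Map each component file's extension (nodes, nets, pl, ...) to its path."""
    text = Path(aux_path).read_text()
    _, _, files = text.partition(":")          # drop the "RowBasedPlacement :" tag
    base = Path(aux_path).parent
    return {Path(f).suffix.lstrip("."): base / f for f in files.split()}

# Hypothetical usage:
#   parts = read_aux("ibm01/ibm01.aux")
#   nodes_file, nets_file = parts["nodes"], parts["nets"]
```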
Placement Utilities (cont’d)
  • Wirelength Calculator (HPWL)
    • Independent evaluation of placement results (an HPWL sketch follows this list)
  • Placement Plotter
    • Saves gnuplot scripts (→ .eps, .gif, …)
    • Multiple views (cells only, cells+nets, rows,…)
  • Probabilistic Congestion Maps (Lou et al.)
    • Gnuplot scripts
    • Matlab scripts
      • better graphics, including 3-d fly-by views
    • .xpm files (→ .gif, .jpg, .eps, …)
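
The HPWL metric itself is simple enough to state in a few lines. A self-contained sketch (the net representation is an assumption, not the calculator's actual interface):

```python
# Minimal sketch of half-perimeter wirelength (HPWL): for each net, the
# half-perimeter of the bounding box of its pin locations, summed over nets.

def hpwl(nets):
    """nets: iterable of nets, each a list of (x, y) pin coordinates."""
    total = 0.0
    for pins in nets:
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# One 3-pin net whose bounding box is 4 wide and 2 tall -> HPWL = 6
assert hpwl([[(0, 0), (4, 1), (2, 2)]]) == 6.0
```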
Placement Utilities (cont’d)
  • Legality checker (see the sketch at the end of this slide)
  • Simple legalizer
  • Layout Generator
    • Given a netlist, creates a row structure
    • Tunable % whitespace, aspect ratio, etc.
  • All available as binaries / Perl scripts at

http://vlsicad.eecs.umich.edu/BK/PlaceUtils/

  • Most source code is shipped with Capo
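
To illustrate what a row-based legality checker verifies, here is a minimal sketch of the core checks: each cell must lie in a row of matching height, fit within the row's extent, and not overlap its neighbors. The data structures are illustrative; the actual checker also verifies site alignment and other details:

```python
# Minimal sketch of row-based legality checks: every cell lies in some row,
# fits within the row's extent, and cells in a row do not overlap.
# (Illustrative only; the real checker also verifies site alignment, etc.)

def is_legal(cells, rows):
    """cells: [(x, y, width, height)]; rows: [(y, height, x_min, x_max)]."""
    spans_by_row = {}
    for x, y, w, h in cells:
        row = next((r for r in rows if r[0] == y and r[1] == h), None)
        if row is None or x < row[2] or x + w > row[3]:
            return False                       # off-row or outside row bounds
        spans_by_row.setdefault(y, []).append((x, x + w))
    for spans in spans_by_row.values():
        spans.sort()                           # check neighbors for overlap
        for (_, end1), (start2, _) in zip(spans, spans[1:]):
            if start2 < end1:
                return False
    return True
```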
Activity 2: Creating Evaluators
  • Contribute measures/analysis tools for:
    • Timing Analysis
    • Congestion/Yield
    • Power
    • Area
    • Noise, …
Benchmarking Needs for Timing Opt.
  • A common, reusable STA methodology (a toy sketch follows this list)
    • High-quality, open-source infrastructure
    • False paths; realistic gate/delay models
  • Metrics validated against phys. synthesis
    • The simpler the better, but they must be good predictors
    • Buffer insertion profoundly impacts layout
      • The use of linear wirelength in timing-driven layout assumes buffer insertion (min-cut vs quadratic)
    • Apparently, synthesis is affected too
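
As a sketch of the STA core such a reusable methodology would standardize, the following propagates worst-case arrival times through a timing DAG in topological order. Delay modeling, slews, false-path handling, and the graph encoding are all simplifying assumptions:

```python
# Minimal sketch of the core of static timing analysis: propagate worst-case
# arrival times through a timing DAG in topological (Kahn) order.
# Real STA adds slews, required times / slack, and false-path pruning.
from collections import defaultdict, deque

def arrival_times(edges, primary_inputs):
    """edges: [(src, dst, delay)]; returns worst-case arrival time per node."""
    fanout, indeg = defaultdict(list), defaultdict(int)
    for u, v, d in edges:
        fanout[u].append((v, d))
        indeg[v] += 1
    arrival = {pi: 0.0 for pi in primary_inputs}
    ready = deque(primary_inputs)
    while ready:
        u = ready.popleft()
        for v, d in fanout[u]:
            arrival[v] = max(arrival.get(v, float("-inf")), arrival[u] + d)
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return arrival
```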
Vertical Benchmarks
  • “Tool flow”
    • Two or more EDA tools, chained sequentially (potentially part of a complete design cycle)
    • Sample contexts: physical synthesis, place & route, retiming followed by sequential verification
  • Vertical benchmarks
    • Multiple, redundant snapshots of a tool flow: sufficient info for detailed analysis of tool performance (see the sketch below)
  • Herman Schmit @ CMU maintains the corresponding slot in the VLSI CAD Bookshelf
    • See http://gigascale.org/bookshelf
    • Include flat gate-level netlists
    • Library information (< 250 nm)
    • Realistic timing & fixed-die constraints
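
A minimal sketch of the snapshotting idea behind vertical benchmarks: run a chained flow and keep a redundant copy of the design state after every stage, so each tool's contribution can be analyzed in isolation. The stage names and callable interface are assumptions for illustration:

```python
# Minimal sketch of a vertical benchmark's structure: a chained tool flow
# with a redundant snapshot of the design state after every stage.
import copy

def run_flow(design, stages):
    """stages: [(name, callable)] applied in order; returns (result, snapshots)."""
    snapshots = [("input", copy.deepcopy(design))]
    for name, stage in stages:
        design = stage(design)
        snapshots.append((name, copy.deepcopy(design)))  # keep every stage's output
    return design, snapshots

# Hypothetical physical-synthesis flow; synth/place/route are stand-in callables:
#   final, snaps = run_flow(netlist, [("synth", synth), ("place", place), ("route", route)])
```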
Infrastructure Needs
  • Need common evaluators of delay / power
    • To avoid inconsistent / outdated results
  • Relevant initiatives from Si2
    • OLA (Open Library Architecture)
    • OpenAccess
    • For more info, see http://www.si2.org
  • Still: no reliable public STA tool
  • Sought: OA-based utilities for timing/layout
Activity 3: Standard-cell Libraries
  • Libraries carry technology information
    • The impact of wire delays increases in recent technology generations
    • Cell characteristics must be compatible
  • Some benchmarks in the Bookshelf use 0.25 µm and 0.35 µm libraries
    • Geometry info is there, plus timing in some cases
  • Cadence test library?
  • Artisan libraries?
  • Use commercial tools to create libraries
    • Prolific, Cadabra,…
Activity 4: Need New Benchmarks to Confirm / Defeat Tool Tuning
  • Data on tuning from the ISPD ’03 paper “Benchmarking for Placement,” Adya et al.
  • Observe that
    • Capo does well on Cadence-Capo, grid-like circuits
    • Dragon does well on IBM-Place (IBM-Dragon)
    • FengShui does well on MCNC benchmarks
    • mPL does well on PEKO
  • This is hardly a coincidence
  • Motivation for more / better benchmarks
  • P.S. Most differences above have been explained, and all placers above have been improved
Activity 4: Large Benchmark Creation
  • www.opencores.org has large designs
    • May be a good starting point: use vendor tools to create BLIF files (and post the results)
    • Note: there may be different ways to convert
  • A group of design houses (IBM, Intel, LSI, HP) is planning a release of new large gate-level benchmarks for layout
    • Probably no logic information
Activity 5: Benchmarking Automation
  • Rigorous benchmarking is laborious, and the risk of errors is high
  • How do we keep things simple / accessible?
  • Encapsulate software management in an ASP
    • Web uploads for binaries and source in tar.gz with Makefiles
    • Web uploads for benchmarks
    • GUI for N×M simulations; tables created automatically (see the sketch after this list)
    • GUI for composing tool-flows; flows can be saved/reused
    • Distributed back-end includes job scheduling
    • Email notification of job completion
    • All files created are available on the Web (permissions & policies)
    • Anyone can re-run / study your experiment or interface with it
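
A minimal sketch of the N×M idea: run every registered tool on every registered benchmark and tabulate outcomes automatically. The subprocess-based invocation and pass/fail result are simplifications; a real deployment would add the scheduling, QoR extraction, and notification features listed above:

```python
# Minimal sketch of an N x M benchmarking matrix: run every tool on every
# benchmark and tabulate outcomes. Real deployments add job scheduling,
# QoR extraction from tool output, and email notification.
import itertools
import subprocess

def run_matrix(tools, benchmarks):
    """tools: {name: [cmd, arg, ...]}; benchmarks: {name: input path}."""
    table = {}
    for (tool, cmd), (bench, path) in itertools.product(
            tools.items(), benchmarks.items()):
        result = subprocess.run(cmd + [path], capture_output=True, text=True)
        table[(tool, bench)] = "ok" if result.returncode == 0 else "fail"
    return table
```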
Follow-on Action Plan
  • Looking for volunteers to beta-test Bookshelf.exe
    • Particularly, in the context of synthesis & verification
    • Contact: Igor imarkov@eecs.umich.edu
  • Create a joint benchmarking group from industry and academia
    • Contact: Prabhakar kudva@us.ibm.com
    • Regular discussions
  • Development based on common infrastructure