This presentation discusses the challenges and methodologies of tool benchmarking in semiconductor research. It highlights the importance of reliable metrics and benchmarks for assessing productivity, predicting design time, and monitoring flow performance. It critiques current practices that yield inconclusive and non-repeatable results, and advocates better experimental design built on tightly controlled equivalence classes. By surveying tool-comparison methods and their inherent problems, it aims to make benchmarking of circuit-design tools more effective.
Tool Benchmarking: Where Are We?
Justin E. Harlow III
Semiconductor Research Corporation
April 9, 2001
Metrics and Benchmarks: A Proposed Taxonomy
• Methodology Benchmarking
  • Assessment of productivity
  • Prediction of design time
  • Monitoring of throughput
• Flow Calibration and Tuning (see the sketch below)
  • Monitor active tool and flow performance
  • Correlate performance with adjustable parameters
  • Estimate settings for future runs
• Tool Benchmarking
  • Measure tool performance against a standard
  • Compare performance of tools against each other
  • Measure progress in algorithm development
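The flow-calibration bullets above are easiest to see with a small example: fit observed runtime against an adjustable knob, then invert the fit to estimate a setting for the next run. This is only a sketch under assumed data; the "effort" knob, the runtimes, and the linear model are hypothetical illustrations, not taken from the talk.

```python
# Hedged sketch: correlate observed runtime with an adjustable tool
# parameter (a hypothetical "effort" knob) and estimate the setting
# expected to meet a runtime budget on a future run.
import numpy as np

# Hypothetical calibration data from past flow runs.
effort = np.array([1, 2, 3, 4, 5], dtype=float)        # knob setting
runtime = np.array([120, 210, 330, 480, 610], dtype=float)  # seconds

# Fit a simple linear model: runtime ≈ a*effort + b.
a, b = np.polyfit(effort, runtime, 1)

# Estimate the largest effort setting expected to stay under budget.
budget = 400.0
estimated_effort = (budget - b) / a
print(f"model: runtime ≈ {a:.1f}*effort {b:+.1f}")
print(f"max effort for a {budget:.0f}s budget ≈ {estimated_effort:.2f}")
```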
How It’s Typically Done...
(diagram: “My Tool” and “Your Tool” are each run on “The Job” and the results compared)
Predictive Value? Kind of...
• It takes more time to detect more faults
• But sometimes it doesn’t...
Bigger Benchmarks Take Longer
• Sometimes... (see the sketch below)
• S526: 451 detects, 1740 sec
• S641: 404 detects, 2 sec
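The two circuits quoted above already show the problem: the circuit with more detected faults took 1740 seconds while the other finished in 2. One way to quantify how weakly such a metric predicts runtime is to compute its correlation across a suite. In the sketch below, only the S526 and S641 figures come from the slide; the remaining rows are hypothetical placeholders.

```python
# Hedged sketch: check how well "number of faults detected" predicts
# runtime across a benchmark suite. Only the S526 and S641 figures are
# from the slide; the other rows are hypothetical.
from statistics import correlation  # Python 3.10+

detects = [451, 404, 300, 520, 380]   # faults detected per circuit
runtime = [1740, 2, 45, 18, 230]      # seconds per circuit

r = correlation(detects, runtime)     # Pearson correlation
print(f"Pearson r between detects and runtime: {r:.2f}")
# With data like this the correlation is weak: detect count alone has
# little predictive value for runtime -- the "sometimes it doesn't" problem.
```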
What’s Wrong with the Way We Do It Today?
• Results are not predictive
• Results are often not repeatable
• Benchmark sets have unknown properties
• Comparisons are inconclusive
A Better Way? Design of Experiments
• Critical properties of an equivalence class:
  • “sufficient” uniformity
  • “sufficient” size to allow for a t-test or similar (see the sketch below)
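A minimal sketch of the t-test idea above, assuming hypothetical runtimes: two tools are run on every circuit in one tightly controlled equivalence class, and the difference in mean runtime is tested for significance. The data and the 5% threshold are illustrative assumptions.

```python
# Hedged sketch: compare two tools over one equivalence class of
# benchmark circuits using a two-sample t-test (SciPy).
# All runtime numbers below are hypothetical.
from scipy import stats

# Runtimes (seconds) of each tool on the same equivalence class.
tool_a = [31.2, 29.8, 33.5, 30.1, 32.4, 31.0, 29.5, 30.8]
tool_b = [35.6, 34.9, 37.2, 36.1, 34.4, 36.8, 35.0, 35.7]

t_stat, p_value = stats.ttest_ind(tool_a, tool_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# With a uniform-enough class and enough samples, a small p-value
# supports the claim that the observed difference is not just noise.
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```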
Example: Tool Comparison
• Scalable circuits with known complexity properties (see the sketch below)
• Observed differences are statistically significant
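One plausible way to use scalable circuits with known complexity properties is to estimate each tool's runtime scaling exponent and compare the exponents across tools. The sketch below is an illustrative assumption with made-up data, not the methodology of the thesis cited on the next slide.

```python
# Hedged sketch: estimate a tool's runtime scaling exponent from a
# family of scalable circuits. Sizes and runtimes are hypothetical.
import numpy as np

sizes = np.array([100, 200, 400, 800, 1600], dtype=float)      # e.g., gate counts
runtimes = np.array([0.8, 3.1, 12.5, 51.0, 205.0])             # seconds

# Fit log(runtime) = k*log(size) + c, i.e. runtime ~ size**k.
k, c = np.polyfit(np.log(sizes), np.log(runtimes), 1)
print(f"estimated scaling exponent k ≈ {k:.2f}")  # roughly quadratic here

# Comparing exponents (with confidence intervals) across tools turns
# "tool A scales better than tool B" into a testable, quantitative claim.
```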
Canonical Reference on DoE Tool Benchmark Methodology
• D. Ghosh, “Generation of Tightly Controlled Equivalence Classes for Experimental Design of Heuristics for Graph-Based NP-hard Problems,” PhD thesis, Electrical and Computer Engineering, North Carolina State University, Raleigh, NC, May 2000. Also available at http://www.cbl.ncsu.edu/publications/#2000-Thesis-PhD-Ghosh.
Tool Benchmark Sets
• ISCAS ’85 and ’89 benchmarks, MCNC workshops, etc.
• ISPD98 Circuit Partitioning Benchmarks
• ITC Benchmarks
• Texas Formal Verification Benchmarks
• NCSU Collaborative Benchmarking Lab
“Large Design Examples”
• CMU DSP Vertical Benchmark project
• The Manchester STEED Project
• The Hamburg VHDL Archive
• Wolfgang Mueller’s VHDL collection
• Sun Microsystems Community Source program
• OpenCores.org
• Free Model Foundry
• ...
Summary
• There are a lot of different activities that we loosely call “benchmarking”
• At the tool level, we don’t do a very good job
• Better methods are emerging, but:
  • Good experimental design is a LOT of work
  • You have to deeply understand which properties matter and design the experimental data around them
• Most of the design examples out there are not of much use for tool benchmarking
To Find Out More...
• Advanced Benchmark Web Site: http://www.eda.org/benchmrk (nope, there’s no “a” in there)
• Talk to Steve “8.3” Grout