
Cross-Cutting Seminar: Verification in the 10^N thread regime
Ganesh Gopalakrishnan


Presentation Transcript


  1. Cross-Cutting Seminar: Verification in the 10^N thread regime. Ganesh Gopalakrishnan. http://www.cs.utah.edu/formal_verification

  2. Correctness Concerns Will Loom Everywhere… Debug concurrent systems, providing rigorous guarantees.

  3. How is system correctness established?
     • Certify a system to be correct without attempting to create all real-world conditions
       • Airplanes: we can't simulate all stress and turbulence
       • Hence mathematically model and analyze
       • Over-engineer
     • Software / hardware, the old way:
       • Attempt (in vain) what was prescribed
       • Time out and "ship it" based on ad hoc / monetary criteria
     • The new way:
       • Do real engineering, i.e., really attempt to mathematically analyze and predict (coverage metrics, formal methods)
       • Over-engineering is also an option: redundant cores, fault-tolerance algorithms, …

  4. Verification of Large-scale Concurrent Systems
     • Must conquer the exponentials in the state space of a concurrent program / system:
       • Data space is exponential
       • Symmetry space is exponential
       • Interleaving space is exponential

  5. Conquering the exponentials (practice)
     • Data space
       • Find data that does not affect control
       • Often possible to guess data that does not influence control
       • Static analysis can help
     • Symmetry space
       • Find symmetry-reduction arguments
       • Often possible to guess the instance to model (example: three dining philosophers)
       • Symbolic analysis can help, e.g., symbolic MPI ranks
     • Interleaving space
       • Employ action independence
       • Perhaps THE most non-intuitive of the spaces: designers find it difficult to guess which interleavings to ignore
       • Hence the need for automation in this space: partial order reduction methods (see the sketch after this list)
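To make "action independence" concrete, here is a minimal pthreads sketch (my illustration, not from the talk). The writes to the distinct variables x and y commute, so a partial-order-reduction tool such as Inspect need explore only one of their orders; the two writes to shared are dependent, so both of their orders must be explored.

    #include <pthread.h>
    #include <stdio.h>

    int x = 0, y = 0;   /* distinct locations: accesses commute */
    int shared = 0;     /* contended location: accesses conflict */

    void *t0(void *arg) {
        x = 1;          /* independent of t1's "y = 1" */
        shared = 10;    /* dependent on t1's write to shared */
        return NULL;
    }

    void *t1(void *arg) {
        y = 1;
        shared = 20;    /* final value depends on write order */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, t0, NULL);
        pthread_create(&b, NULL, t1, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Of the 6 interleavings of these 4 statements, only the
           relative order of the two writes to shared changes the
           outcome, so POR collapses 6 schedules to 2 classes. */
        printf("shared = %d\n", shared);
        return 0;
    }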

  6. Challenge: exponential interleavings
     [Figure: five processes P0–P4; in total there are more than 10 billion interleavings, yet only the 2 orderings of the dependent actions A and B are relevant.]
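The count on the slide is easy to reproduce with a back-of-the-envelope check (my addition, assuming the usual model of processes executing straight-line steps). For n processes of k steps each, the number of interleavings is the multinomial coefficient, and already 5 processes of 4 steps each exceed 10 billion:

    \[
    \#\text{interleavings} = \frac{(nk)!}{(k!)^{n}},
    \qquad
    \frac{(5 \cdot 4)!}{(4!)^{5}} = \frac{20!}{24^{5}} \approx 3.1 \times 10^{11} > 10^{10}.
    \]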

  7. Focal Areas for Correctness Research

  8. Focal Areas
     • Hardware
       • Pre-silicon
       • Post-silicon
     • Firmware
       • Microcode on chip
       • Microcode in subsystems (bus bridges, I/O, …)
     • Software
       • User apps
       • APIs and libraries
       • Compilation
       • Runtime support (OS, work-stealing algorithms, …)

  9. Where Post-Silicon Verification Fits in the Hardware Verification Flow
     [Figure: flow from spec to product. Pre-manufacture: specification validation, then design verification. Post-manufacture: testing for fabrication faults, then post-silicon verification, which asks: does functionality match designed behavior?]

  10. Focal Areas: Hardware
      • Pre-silicon
        • Logical bugs must be caught through:
          • Systematic testing
          • Formal analysis (the i7 core was formally verified in lieu of testing)
      • Post-silicon
        • Fresh logical bugs are triggered by high-frequency and integrated operation
        • Must detect them through:
          • Systematic testing
          • Limited-observability testing
          • Built-in support for post-silicon debugging, e.g. "backspace queues", staggered clock-phase cores

  11. Focal Areas: Hardware
      • Pre-silicon
        • When it works, formal verification rocks!
        • Gold-star result: the i7 core execution engine was formally verified in lieu of testing!
          Forthcoming CAV 2009 paper: Roope Kaivola, Rajnish Ghughal, Naren Narasimhan, Amber Telfer, Jesse Whittemore, Sudhindra Pandav, Anna Slobodova, Christopher Taylor, Vladimir Frolov, Erik Reeber and Armaghan Naik, "Replacing testing with formal verification in Intel Core i7 processor execution engine validation" (Utah alums shown in red on the original slide)

  12. Post-Silicon Challenge: Limited Observability!
      [Figure: two event orderings, "x a c b y d" versus "a x c d y b …", illustrating how little of the chip's internal behavior is observable.]

  13. Focal Areas: Firmware
      • Microcode on chip
        • Huge amounts of late-binding microcode on chip
        • Path analysis of microcode is a HUGE problem
        • Industry is investing heavily in formal methods
        • Must encrypt
        • Must have sufficient "planned wiggle room"
      • Microcode in subsystems (bus bridges, I/O, …)
        • Crucial for overall system integrity

  14. Focal Areas: Software
      • User apps
      • APIs and libraries
      • Compilation
      • Runtime support (OS, work-stealing algorithms, …)
      • Dynamic verification offers considerable hope:
        • MODIST (dynamic verification of distributed systems)
        • Backtrackable VMs
        • Testing using FPGA hardware (Simics)
        • CHESS project of Microsoft Research (for threads)
        • Local projects: ISP (for MPI), Inspect (for threading)

  15. Workflow of a dynamic verifier
      [Figure: the executable (Proc1, Proc2, …, Procn) runs on a program, FPGA emulation, or VM platform; an interposition layer hijacks the scheduler at runtime and plays out the relevant interleavings.]
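To suggest what "hijacking the scheduler" can look like, here is a minimal interposition sketch (my illustration, not actual ISP or Inspect code): LD_PRELOAD-style wrappers intercept synchronization calls and yield to an external scheduler, letting the verifier dictate which interleaving is played out on each run. The ask_scheduler helper is a hypothetical placeholder.

    /* Build as a shared library and load with LD_PRELOAD. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <pthread.h>

    static int (*real_lock)(pthread_mutex_t *);

    /* Placeholder: block the calling thread until the verifier's
       scheduler picks it to run next. In a real tool this is an
       IPC round-trip; the scheduler explores one relevant
       interleaving per run and backtracks across runs. */
    static void ask_scheduler(const char *op, void *addr) {
        (void)op; (void)addr;
    }

    int pthread_mutex_lock(pthread_mutex_t *m) {
        if (!real_lock)
            real_lock = (int (*)(pthread_mutex_t *))
                dlsym(RTLD_NEXT, "pthread_mutex_lock");
        ask_scheduler("lock", m);  /* scheduler decides who moves */
        return real_lock(m);       /* then perform the real call */
    }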

  16. Focal Areas: Software
      • Compilation: so many focal areas just here
        • Correct compilation respecting API / library semantics
        • Correctness with respect to weak memory orderings (see the litmus test below)
        • Interaction of compiled code with an intelligent runtime
      • Failure handling
        • What must be done when a core says "bye bye"?
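The classic store-buffering litmus test shows why weak memory orderings are a correctness concern. A sketch using C11 relaxed atomics (my example, assuming a C11 toolchain): both threads can read 0, an outcome impossible under sequential consistency but permitted by relaxed atomics and by real store buffers.

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    atomic_int X, Y;   /* both start at 0 */
    int r0, r1;        /* per-thread observations */

    void *t0(void *arg) {
        atomic_store_explicit(&X, 1, memory_order_relaxed);
        r0 = atomic_load_explicit(&Y, memory_order_relaxed);
        return NULL;
    }

    void *t1(void *arg) {
        atomic_store_explicit(&Y, 1, memory_order_relaxed);
        r1 = atomic_load_explicit(&X, memory_order_relaxed);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, t0, NULL);
        pthread_create(&b, NULL, t1, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Sequential consistency forbids r0 == 0 && r1 == 0;
           relaxed orderings allow it. */
        printf("r0=%d r1=%d\n", r0, r1);
        return 0;
    }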

  17. Correctness Myths in the Multi-core Era
      • Myth: simpler cores -> easier to verify
        • Reality: circuits will be highly energy-optimized, requiring circuit-level verification and verification of power-down / power-up protocols
      • Myth: streaming models have no "rat's nest" control
        • Reality: proper use of streaming models will re-introduce reactive / control complexity
      • Myth: a single programming paradigm simplifies things
        • Reality: performance will force multi-paradigm programming (Roadrunner uses MPI and the IBM Cell); mixed programming paradigms make verification tricky
      • Myth: more cores will allow parallel verification
        • Reality: superior abstraction methods often outperform any brute force

  18. How about the performance space?

  19. Performance / reliability bugs
      • Not meeting energy budgets
      • "Feature interactions" that occur only at scale
      • Inability to gauge the efficacy of parallelization
        • For each context, decide whether parallelization pays off (one simple lens is sketched below)
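One standard first-cut lens for "does parallelization pay off" (my addition, not from the slides) is Amdahl's law: with a parallelizable fraction p run on n cores,

    \[
    S(n) = \frac{1}{(1-p) + p/n},
    \qquad
    p = 0.9,\; n = 64 \;\Rightarrow\; S \approx \frac{1}{0.1 + 0.9/64} \approx 8.8,
    \]

far below the 64x one might hope for, which is why per-context measurement matters.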

  20. Concluding Remarks (1)
      • Correctness is an enabler of performance!
        • Safety through timidity never worked: people will seek performance eventually, so provide them strong safety nets
        • Incrementally re-verify performance optimizations
      • Plan for correctness
        • Correctness cannot be left to chance; it is not an afterthought
        • Formal correctness WILL sell and pay for itself

  21. Concluding Remarks (2)
      • Standardization is very important! How to standardize? Two options:
        • One minimal API with a few high-level functions
          • Pros: easier to understand and verify
          • Cons: ensuring portable performance could be difficult, so people will learn to bypass and hack around it
        • One very broad API that exposes a LOT of low-level detail
          • Cons: steeper learning curve
          • Pros: has a much better chance to succeed (example: MPI)
      • Formal methods can help mitigate the burden of learning / use
        • Example: our ISP push-button verifier of MPI programs (see the sketch below)
          • Recently handled many large apps
          • Recently worked out most examples from Pacheco's book
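For a flavor of what ISP-style dynamic verification must handle, here is a small MPI program (my illustration, not one of ISP's benchmarks): the wildcard receive MPI_ANY_SOURCE can match either sender first, so a verifier must replay the program and force each match in turn to cover both relevant interleavings.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, val;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 1 || rank == 2) {
            val = rank;   /* each sender sends its own rank */
            MPI_Send(&val, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            MPI_Status st;
            /* Wildcard receive: may match rank 1 or rank 2 first;
               the outcome printed below differs per match. */
            MPI_Recv(&val, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &st);
            printf("first message came from rank %d\n", st.MPI_SOURCE);
            MPI_Recv(&val, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &st);  /* drain the other send */
        }

        MPI_Finalize();
        return 0;
    }

Run with at least three processes (e.g., mpirun -np 3); a tool like ISP would systematically force both matches of the first receive rather than relying on whichever the runtime happens to deliver.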
