
Exploiting Transition Locality in the Disk Based Murphi Verifier




  1. Exploiting Transition Locality in the Disk Based Murphi Verifier Giuseppe Della Penna, Benedetto Intrigila Università di L’Aquila Enrico Tronci, Marisa Venturini Zilli Università di Roma “La Sapienza” FMCAD 2002 Nov 6-8, 2002 Portland, Oregon, USA

  2. Results (Zoom 1) We present a disk based algorithm to delay State Explosion when using Explicit State Space Exploration to verify protocols or software-like systems.

  3. Results (Zoom 2) • We present a disk based verification algorithm that exploits transition locality to decrease disk read accesses, thus reducing the time overhead due to disk usage. • We present an implementation of our algorithm within the Murphi verifier. • Our experimental results show that even using 1/10 of the RAM needed to complete verification, our disk based algorithm is (on average) only 3 times slower than RAM Murphi with enough RAM to complete the verification task at hand. • Using our disk based Murphi we were able to complete verification of a protocol with about 10^9 reachable states. This would require more than 5 gigabytes of RAM using RAM Murphi.
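The memory figure in the last bullet can be checked with a quick back-of-the-envelope computation (ours, not from the slides; the ~5 bytes per stored state is an assumed lower bound on the per-state cost of the hash table):

```python
# Rough arithmetic (our illustration): with ~10^9 reachable states and an
# assumed cost of at least ~5 bytes of RAM per stored state, completing
# the verification in RAM needs on the order of 5 gigabytes.
states = 10**9
bytes_per_state = 5                       # assumed per-state lower bound
ram_gigabytes = states * bytes_per_state / 10**9
print(ram_gigabytes)                      # -> 5.0
```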

  4. Locality A transition from state s to state s’ is a K-transition iff level(s’) – level(s) = K; it is k-local iff |level(s’) – level(s)| <= k. [Figure: a small BFS graph over levels 0–4, with each transition labeled by its level difference (e.g. 1, 0, -1, -2, -4)]
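The definitions above can be turned into a tiny executable check (a Python sketch of ours; `level` is a hypothetical BFS level map, not from the slides):

```python
# Hypothetical helpers (ours, not from the slides): classify a transition
# by its locality, given the BFS level of each state.

def transition_k(level, s, s2):
    """(s, s2) is a K-transition iff level(s2) - level(s) = K."""
    return level[s2] - level[s]

def is_k_local(level, s, s2, k):
    """(s, s2) is k-local iff |level(s2) - level(s)| <= k."""
    return abs(transition_k(level, s, s2)) <= k

# Toy state graph: levels as assigned by a BFS from start state "a".
level = {"a": 0, "b": 1, "c": 2, "d": 2}
print(transition_k(level, "a", "c"))    # -> 2 (a 2-transition)
print(is_k_local(level, "d", "b", 1))   # -> True (a back edge, 1-local)
```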

  5. Locality Our experimental results show that: for all protocol-like systems, for most states, most transitions (typically more than 75%) are 1-local.

  6. Exploiting Locality We can exploit locality to reduce disk read accesses in a disk based BFS Explicit State Space Exploration.

  7. Idea We only use some of the disk table blocks to remove old state signatures from M and to remove old states from Q_unck.

/* Global Variables */
hash table M;        /* in RAM: recently visited states (old + new); Insert() adds (s, h) here */
file D;              /* on disk: disk table with the old (visited) states */
FIFO queue Q_ck;     /* checked states: the new (+ old) BFS front */
FIFO queue Q_unck;   /* unchecked new + old states: candidates to be on the next BFS level */
int disk_cloud_size; /* number of blocks to be read from file D */

Search() {
  M = empty; D = empty; Q_ck = empty; Q_unck = empty;
  for each startstate s { Insert(s); }
  do { /* search loop */
    while (Q_ck is not empty) {
      s = dequeue(Q_ck);
      for all s' in successors(s) { Insert(s'); }
    }
    Checktable(); /* use D to filter out the old states */
  } while (Q_ck is not empty);
}

  8. Insert()

Insert(state s) {
  h = hash(s); /* compute signature of state s */
  if (h is not in M) {
    insert h in M;
    enqueue((s, h), Q_unck);
    if (M is full) Checktable();
  }
}
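Insert()'s flow can be sketched in runnable form (a minimal Python illustration of ours, with names following the slides; the real Checktable() performs the full old/new check of the next slides, while here a counter only records that the flush would fire):

```python
from collections import deque

M = set()            # in-RAM table of state signatures
Q_unck = deque()     # unchecked states: candidates for the next BFS level
M_CAPACITY = 4       # tiny capacity so the flush triggers in this demo
checktable_calls = 0 # stands in for calling Checktable() (slides 9-10)

def insert(s):
    global checktable_calls
    h = hash(s)                      # signature of state s
    if h not in M:                   # duplicate signatures are dropped
        M.add(h)
        Q_unck.append((s, h))
        if len(M) >= M_CAPACITY:     # M full: old/new check against disk
            checktable_calls += 1

for s in ["a", "b", "a", "c", "d"]:  # "a" arrives twice, enqueued once
    insert(s)
print(len(Q_unck), checktable_calls) # -> 4 1
```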

  9. Checktable()

Checktable() /* old/new check for main memory table */ {
  deleted_in_cloud = 0;      /* num states deleted from M in the disk cloud */
  deleted_not_in_cloud = 0;  /* num states deleted from M on disk and not in the disk cloud */
  DiskCloud = GetDiskCloud(); /* randomly choose indexes of the disk blocks to read (the disk cloud) */
  if (there exists a disk block not selected in DiskCloud)
    something_not_in_cloud = true;  /* there exists a state on disk that is not in the disk cloud */
  else
    something_not_in_cloud = false;
  Calibration_Required = QueryCalibration();
  for each Block in D {
    if (Block is in DiskCloud or Calibration_Required) {
      for all state signatures h in Block {
        if (h is in M) {
          remove h from M;
          if (Block is in DiskCloud) { deleted_in_cloud++; }
          else { deleted_not_in_cloud++; } /* Block is not in DiskCloud */
        }
      }
    }
  }

  10. Checktable() (continued)

  /* M now has only new states … almost, because of the random sampling of D */
  /* remove old states from the state queue and add new states to disk */
  while (Q_unck is not empty) {
    (s, h) = dequeue(Q_unck);
    if (h is in M) { append h to D; remove h from M; enqueue(Q_ck, s); }
  }
  remove all entries from M; /* clean up the hash table */
  /* adjust the disk cloud size as needed (i.e. the number of disk table blocks used to remove old states) */
  if (Calibration_Required) {
    if ((there exists a state on disk that is not in the disk cloud) &&
        (there exists a state in M that is in the disk cloud or is on disk))
      { Calibrate(deleted_in_cloud, deleted_not_in_cloud); }
    if (disk access rate has been too long above a given critical limit)
      { reset disk cloud size to its initial value with given probability P; }
  }
} /* Checktable() */
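The slides invoke Calibrate(deleted_in_cloud, deleted_not_in_cloud) without showing its body. Purely as an illustration, here is one plausible heuristic of ours (NOT the authors' code): grow the disk cloud when many old states were found outside it, shrink it when the cloud alone caught almost everything:

```python
# Hypothetical calibration heuristic (our assumption, not from the
# slides): adjust the number of disk blocks read per Checktable() from
# the ratio of old states found outside vs. inside the disk cloud.

def calibrate(disk_cloud_size, deleted_in_cloud, deleted_not_in_cloud,
              min_size=1, max_size=64):
    total = deleted_in_cloud + deleted_not_in_cloud
    if total == 0:
        return disk_cloud_size        # nothing observed: leave it alone
    miss_rate = deleted_not_in_cloud / total
    if miss_rate > 0.25:              # cloud misses too many old states
        return min(max_size, disk_cloud_size * 2)
    if miss_rate < 0.05:              # cloud is larger than needed
        return max(min_size, disk_cloud_size - 1)
    return disk_cloud_size

print(calibrate(8, 30, 70))   # high miss rate -> grow to 16
print(calibrate(8, 99, 1))    # low miss rate -> shrink to 7
```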

  11. GetDiskCloud()

GetDiskCloud() {
  randomly select disk_cloud_size blocks from disk according to a given probability;
  return the indexes of the selected blocks;
}
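Putting slides 7–11 together, the whole scheme can be exercised as a self-contained miniature (our Python sketch, not the authors' C implementation; calibration and the "M is full" flush are omitted, and plain state values stand in for hashed signatures):

```python
import random
from collections import deque

BLOCK_SIZE = 4        # signatures per disk block (demo-sized)
DISK_CLOUD_SIZE = 2   # blocks read from "disk" per Checktable()

def get_disk_cloud(D):
    """Randomly select disk_cloud_size block indexes (slide 11)."""
    return set(random.sample(range(len(D)), min(DISK_CLOUD_SIZE, len(D))))

def checktable(M, D, Q_unck, Q_ck):
    """Old/new check (slides 9-10), without the calibration logic."""
    for i in get_disk_cloud(D):
        for h in D[i]:               # read only the disk cloud ...
            M.discard(h)             # ... and drop its old signatures from M
    # M now holds only new states -- "almost", due to the random sampling.
    while Q_unck:
        s, h = Q_unck.popleft()
        if h in M:                   # treat as new: append to disk, expand later
            if not D or len(D[-1]) >= BLOCK_SIZE:
                D.append([])         # open a fresh disk block
            D[-1].append(h)
            M.discard(h)
            Q_ck.append(s)
    M.clear()                        # clean up the hash table

def search(start_states, successors):
    """BFS driver (slide 7); returns the number of state visits."""
    M, D = set(), []
    Q_ck, Q_unck = deque(), deque()

    def insert(s):                   # slide 8, with s itself as signature
        if s not in M:
            M.add(s)
            Q_unck.append((s, s))

    for s in start_states:
        insert(s)
    checktable(M, D, Q_unck, Q_ck)
    visited = 0
    while Q_ck:
        while Q_ck:
            s = Q_ck.popleft()
            visited += 1             # "visit" (expand) s
            for s2 in successors(s):
                insert(s2)
        checktable(M, D, Q_unck, Q_ck)
    return visited

# Toy system: states 0..9, each state n with successors n+1 and n+2.
succ = lambda n: [m for m in (n + 1, n + 2) if m < 10]
print(search([0], succ))             # >= 10: every reachable state is visited
```

Because Checktable() reads only a random cloud of disk blocks, an old state occasionally survives the filter and is re-expanded; this duplicates some work but never loses states, which is why the visit count is at least, and possibly more than, the number of reachable states.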

  12. Standard BF visit

  13. CBF n_peterson (-b)

  14. CBF eadash (-b)

  15. CBF sci (-b)

  16. CBF Kerberos (-b)

  17. CBF with bit compression (-b)

  18. CBF n_peterson (-b –c)

  19. CBF eadash (-b –c)

  20. CBF sci (-b –c)

  21. CBF Kerberos (-b –c)

  22. CBF bit comp., hash compaction

  23. CBF ns (-b –c) NumInitators = 2, NumResponders = 1, NumIntruders = 2, NetworkSize = 2, MaxKnowledge = 10

  24. Conclusions • Protocols exhibit transition locality. • A Cache based Breadth First (CBF) search can exploit locality within the Murphi verifier. • CBF is compatible with all Murphi optimizations. • W.r.t. a hash table based approach, CBF typically allows verification of systems more than 40% larger, with a time overhead of about 100%. • Future work: a NOW (network of workstations) as well as a HD (hard disk) could be used to implement our cache.
