Theoretical Program Checking


Presentation Transcript


  1. Greg Bronevetsky Theoretical Program Checking

  2. Background • The field of Program Checking is about 13 years old. • Pioneered by Manuel Blum, Hal Wasserman, Sampath Kannan and others. • A branch of Theoretical Computer Science that deals with • probabilistic verification of whether a particular implementation solves a given problem. • probabilistic fixing of program errors (if possible).

  3. Simple Checker • A simple checker C for problem f: • Accepts a pair <x,y> where x is the input and y is the output produced by a given program. • If y = f(x), return ACCEPT with probability ≥ P_C; else, return REJECT with probability ≥ P_C, where P_C is a constant close to 1. • If the original program runs in time O(T(n)), the checker must run in asymptotically less time, o(T(n)).

  4. Simple Checker Example • For example, a simple checker for sorting • Verifies that the sorted list of elements is a permutation of the original one. • Ensures that the sorted elements appear in non-decreasing order. • Checking is certain, so P_C = 1. • Runtime of checker = O(n) vs. O(n log n) for the original program.
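
A minimal sketch of this sorting checker in Python; the function name check_sort and the use of collections.Counter for the permutation test are illustrative choices of mine, not from the slides:

```python
from collections import Counter

def check_sort(original, claimed_sorted):
    """Simple checker for sorting, running in O(n) expected time.

    ACCEPT (True) iff claimed_sorted is a permutation of `original`
    and its elements appear in non-decreasing order.
    """
    # Permutation test: both lists must contain the same multiset of elements.
    if Counter(original) != Counter(claimed_sorted):
        return False
    # Order test: every adjacent pair must be non-decreasing.
    return all(a <= b for a, b in zip(claimed_sorted, claimed_sorted[1:]))

# Example: checking an (untrusted) sort.
print(check_sort([3, 1, 2, 1], [1, 1, 2, 3]))  # True
print(check_sort([3, 1, 2, 1], [1, 2, 3, 3]))  # False: not a permutation
```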

  5. Complex Checker • A complex checker is just like a simple checker, except that • It is given P, a program that computes the problem f with a low probability of error p. • Time Bound: If the original program runs in time O(T(n)), the complex checker must run in time o(T(n)), counting each call to P as one step.

  6. Self-Corrector • A self-corrector for problem f: • Accepts x, an input to f, along with a program P that computes f with a low probability of error p. • Outputs the correct value of f(x) with probability ≥ P_C, where P_C is a constant close to 1. • If the original program runs in time O(T(n)), the self-corrector must also run in O(T(n)) time, counting each call to P as one step (i.e., it must remain in the same time class as the original program).

  7. Uses for Checkers • Checkers and Self-Correctors are intended to protect systems against software bugs and failures. • Because of their speed, simple checkers can be run on every input, raising alarms about bad computations. • When an alarm is raised, a self-corrector can try to fix the output. • Because checkers and self-correctors are written independently of the original program, their errors should not be correlated with the program's errors. (ex: SCCM)

  8. Sample Checkers: Equations • A simple checker for programs that compute equations: • A program that claims to solve a given equation can be checked by taking its solutions and plugging them back into the equation. • In fact, we can do this any time the program purports to produce objects that satisfy given constraints: just verify the constraints.
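
As a concrete illustration of plugging solutions back in, here is a hedged sketch of a checker for a quadratic-equation solver; the function name and the floating-point tolerance are assumptions made for this example:

```python
def check_quadratic_roots(a, b, c, roots, tol=1e-9):
    """Checker for a program that claims to solve a*x^2 + b*x + c = 0.

    Plugs each claimed root back into the equation and verifies the
    constraint holds (up to a floating-point tolerance).
    """
    return all(abs(a * r * r + b * r + c) <= tol for r in roots)

# Example: an untrusted solver claims the roots of x^2 - 3x + 2 = 0 are 1 and 2.
print(check_quadratic_roots(1, -3, 2, [1.0, 2.0]))  # True
print(check_quadratic_roots(1, -3, 2, [1.0, 2.5]))  # False
```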

  9. Simple Checkers: Integrals • Given a program to compute a definite integral, we can check it by approximating the area under the curve with a small number of rectangles. • Given a formula and a program to compute its integral, we can verify the result by differentiating the output (differentiation is usually easier than integration). • Alternatively, we can pick a random range and compare the area under the curve computed from the purported integral against the original formula evaluated with a large number of rectangles.
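
A sketch of the last idea, assuming we are given the original formula f and the purported antiderivative F as Python callables; the random subrange, rectangle count, and tolerance are illustrative choices of mine:

```python
import random

def check_integral(f, F, lo, hi, rects=1000, tol=1e-3):
    """Check a purported antiderivative F of f on a random subrange [a, b].

    Compares F(b) - F(a) against a midpoint-rectangle approximation of the
    area under f between a and b.
    """
    a = random.uniform(lo, hi)
    b = random.uniform(a, hi)
    width = (b - a) / rects
    approx = sum(f(a + (i + 0.5) * width) for i in range(rects)) * width
    return abs((F(b) - F(a)) - approx) <= tol

# Example: is F(x) = x^3 / 3 an antiderivative of f(x) = x^2 on [0, 2]?
print(check_integral(lambda x: x * x, lambda x: x ** 3 / 3, 0.0, 2.0))  # True
```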

  10. Simple Checking Multiplication • Simple Checker for Multiplication of integers and the mantissas of floating point numbers. • Assumption: addition is fast and reliable. • Checking that A•B = C. • Procedure: • Generate a random r • Compute A (mod r) and B (mod r) • Compute [A (mod r) • B (mod r)] (mod r) • Compare the result to C (mod r). • Note: [A (mod r) • B (mod r)] (mod r) = A•B (mod r), which equals C (mod r) whenever C really is A•B.
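
A hedged Python sketch of this procedure; the size of the random modulus and the number of independent trials are choices of mine (the slide leaves them implicit), and in the theoretical analysis r would be a random prime:

```python
import random

def check_multiplication(a, b, c, trials=10):
    """Simple checker for the claim a * b == c, for n-bit integers a and b.

    Each trial draws a small random modulus r and verifies that
    (a mod r) * (b mod r) agrees with c modulo r.  In the theory r is a
    random prime so each trial catches a wrong c with constant probability;
    here we simply repeat with a few random moduli.
    """
    n = max(a.bit_length(), b.bit_length(), 1)
    for _ in range(trials):
        r = random.randint(2, max(n * n, 4))  # an O(log n)-bit modulus
        if (a % r) * (b % r) % r != c % r:
            return False  # a*b and c disagree mod r: definitely wrong
    return True

# Example: checking an (untrusted) product.
p = 123456789 * 987654321
print(check_multiplication(123456789, 987654321, p))      # True
print(check_multiplication(123456789, 987654321, p + 6))  # False (w.h.p.)
```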

  11. Simple Checking Multiplication • A (mod r) and B (mod r) are O(log n) bits long. Multiplying them takes O((log n)^2) time. • To get the residues, we need 4 divisions by a small r. Such divisions take O(n log n) time. • Total checker time = O(n log n). • Most processors compute multiplication of n-bit numbers in O(n^2) time.

  12. Multiplication Self-Corrector • Self-Corrector for Multiplication • Procedure: • Generate random R1 and R2. • Compute (A − R1)•(B − R2) + (A − R1)•R2 + R1•(B − R2) + R1•R2. • Note: the expression above equals A•B. • The self-corrector works by sampling the space of 4 points around A and B and working with them, so it can return the right answer even when the program happens to fail on the specific pair A, B.
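
A sketch of this self-corrector in Python, assuming the unreliable program is exposed as a function faulty_multiply(x, y) (my naming) and that addition and subtraction are reliable; it relies on the identity (A − R1)(B − R2) + (A − R1)R2 + R1(B − R2) + R1R2 = A•B:

```python
import random

def self_correct_multiply(a, b, n_bits, faulty_multiply):
    """Self-corrector for multiplication of n-bit integers.

    Splits the inputs with random offsets R1, R2 so that each of the four
    calls to the unreliable multiplier sees a randomized operand pair
    rather than the specific pair (a, b) the program might fail on.
    """
    r1 = random.getrandbits(n_bits)
    r2 = random.getrandbits(n_bits)
    # (a - r1)(b - r2) + (a - r1)r2 + r1(b - r2) + r1*r2 == a*b
    return (faulty_multiply(a - r1, b - r2)
            + faulty_multiply(a - r1, r2)
            + faulty_multiply(r1, b - r2)
            + faulty_multiply(r1, r2))

# Example: a multiplier that is wrong only on one specific input pair.
def buggy_mul(x, y):
    return 0 if (x, y) == (41, 17) else x * y

print(self_correct_multiply(41, 17, 8, buggy_mul))  # prints 697 (except with small probability)
```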

  13. How does it work? • Addition, subtraction and divisions by 2 and 4 are assumed to be fast and reliable. • We're dealing with 4 numbers: A − R1, R1, B − R2, and R2. • In order to operate on them we need to use (n+1)-bit operations. • All 4 numbers vary over half the (n+1)-bit range. • Because in each multiplication both operands are independent, each operand pair varies over a quarter of the range of pairs of (n+1)-bit numbers.

  14. How does it work? • The odds of a multiplication failing are p. • But all of the erroneous inputs may fall within our quarter of the range. Thus, the odds of failure for each of the 4 multiplications become ≤ 4p. • Thus, by the union bound, the odds that any of the 4 multiplications fails are ≤ 16p, so the self-corrected result is correct with probability ≥ 1 − 16p.
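
Written out, the argument on this slide amounts to a union bound (a sketch, using the failure probability p defined earlier):

```latex
% Each of the four calls sees an operand pair confined to one quarter of the
% space of (n+1)-bit pairs, so each call fails with probability at most 4p.
\Pr[\text{some call fails}] \;\le\; 4 \cdot 4p \;=\; 16p
\quad\Longrightarrow\quad
\Pr[\text{self-corrected product is correct}] \;\ge\; 1 - 16p.
```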

  15. Other Self-Correctors • Note that a similar self-corrector can be used for matrix multiplication: • Want to compute AB • Generate random matrices R1 and R2 • Compute: (A − R1)(B − R2) + (A − R1)R2 + R1(B − R2) + R1R2 = AB • By spreading the multiplications over 4 randomized matrix pairs, we avoid the problematic input pair A, B. • If the odds of a matrix multiplication failing are p, then the self-corrector's odds of failure are ≤ 4p. • It has been shown that self-correctors can be developed for “robust functional equations”.
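
A sketch of the matrix version using NumPy, treating the unreliable routine as a function faulty_matmul and working over ordinary floating-point matrices; both of those framing choices are assumptions of mine, since the slides do not fix a representation:

```python
import numpy as np

def self_correct_matmul(a, b, faulty_matmul, rng=np.random.default_rng()):
    """Self-corrector for matrix multiplication A @ B.

    Random matrices R1, R2 randomize the operands of each of the four calls
    to the unreliable multiplier, avoiding a failure tied to the specific
    input pair (A, B).
    """
    r1 = rng.standard_normal(a.shape)
    r2 = rng.standard_normal(b.shape)
    # (A - R1)(B - R2) + (A - R1)R2 + R1(B - R2) + R1 R2 == A B
    return (faulty_matmul(a - r1, b - r2)
            + faulty_matmul(a - r1, r2)
            + faulty_matmul(r1, b - r2)
            + faulty_matmul(r1, r2))

# Example usage, with a trusted multiplier standing in for the faulty one.
a = np.arange(4.0).reshape(2, 2)
b = np.eye(2)
print(np.allclose(self_correct_matmul(a, b, np.matmul), a @ b))  # True
```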

  16. Simple Checker for Division • Trying to perform division: N/Q • Note: N = D•Q + R • R = the remainder • D = the quotient (some integer) • Equivalently: N − R = D•Q • Checking division reduces to checking multiplication!
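
A sketch of the reduction, assuming the untrusted divider returns both a quotient and a remainder (the function and parameter names are mine) and reusing the check_multiplication sketch from above:

```python
def check_division(n, q, claimed_quotient, claimed_remainder):
    """Simple checker for integer division n / q.

    Verifies 0 <= remainder < q and then reduces the claim
    n - remainder == quotient * q to a multiplication check.
    """
    if not (0 <= claimed_remainder < q):
        return False
    # Reuse the multiplication checker sketched earlier.
    return check_multiplication(claimed_quotient, q, n - claimed_remainder)

# Example: 100 / 7 -> quotient 14, remainder 2.
print(check_division(100, 7, 14, 2))   # True
print(check_division(100, 7, 15, 2))   # False (w.h.p.)
```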

  17. Self-Corrector for Division • Calculate …, where R is random. • The three multiplications can be checked and corrected (if necessary) using the method above. • The one division can also be checked as above. • Note that the one division is unlikely to be faulty: • As R varies over its n-bit range, R•D varies such that a given R maps to ≤ 2 different values of R•D. • If division's odds of failure are p, then the odds of failure for the one division here are ≤ 2p, which is about as low as p.

  18. Checking Linear Transformations • Given a linear transformation A, we want to compute Ax. • Given input x, we want to output y = Ax. • We wish to check computations on floating point numbers, so we'll need to tolerate rounding error. • Let e be the error vector: the difference between the claimed output and the true Ax. • The claimed output is definitely correct iff e is small enough to be explained by rounding error alone. • The claimed output is definitely incorrect iff e is too large to be explained by rounding error.

  19. Checking Scheme • Generate 10 random vectors of the form (±1, ±1, …, ±1), where each ± is chosen positive or negative with 50/50 odds. • Goal: Determine whether or not the error vector e is small enough to be due to rounding. • For k = 1 to 10 • Calculate the dot product of the k-th random vector with e. • If its magnitude is larger than the rounding tolerance allows, REJECT. • If all 10 tests are passed, ACCEPT.
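
To make the loop concrete, here is a sketch of the core ±1 projection test; it assumes the checker already has the error vector e and a single rounding tolerance tol, which glosses over how e is obtained cheaply in a full checker:

```python
import random

def plusminus_test(e, tol, trials=10):
    """Random +/-1 projection test on an error vector e.

    Each trial forms +/- e_1 +/- e_2 ... +/- e_n with independent random
    signs and REJECTs if the magnitude exceeds the rounding tolerance.
    A large error vector is caught with high probability; a rounding-sized
    one always passes, since |sum| <= sum of |e_i|.
    """
    for _ in range(trials):
        d = sum(random.choice((-1, 1)) * ei for ei in e)
        if abs(d) > tol:
            return False  # REJECT
    return True  # ACCEPT

# Example: a rounding-sized error passes, a gross error is caught.
print(plusminus_test([1e-12, -2e-12, 5e-13], tol=1e-9))  # True
print(plusminus_test([0.3, -0.1, 0.25], tol=1e-9))       # False
```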

  20. Why does this work? • Each test computes ±e1 ± e2 ± … ± en, where each ± is positive or negative with independent 50/50 probability.

  21. Why does this work? • Working out the probabilities, we can prove that if the error vector is too large to be explained by rounding, then with high probability at least one of the 10 random ± combinations is large, causing a REJECT.

  22. Why does this work? • We can also prove the other side: if the error vector is small enough to be due to rounding alone, then every ± combination of its entries stays small, so all 10 tests pass and the checker ACCEPTs.

  23. Conclusion • A number of simple/complex checkers and self-testers have been developed for many problems. • Those presented here are some of the simplest. • Work has been done on finding general classes of problems for which efficient testers and self-correctors can be created. • This work is not yet advanced enough to allow automation. • There is some promise in finding techniques to check simple operations that can then be used to build up checkers for more complex computations.
