
Test Selection and Augmentation of Regression System Tests for Security Policy Evolution

JeeHyun Hwang, Tao Xie, and collaborators at the University of Luxembourg


Presentation Transcript


  1. Test Selection and Augmentation of Regression System Tests for Security Policy Evolution. JeeHyun Hwang, Tao Xie, and collaborators at the University of Luxembourg

  2. Access Control • Access control is one of the most widely used privacy and security mechanisms • used to control which principals (e.g., users or processes) have access to which resources • Access control is often governed by security policies called access control policies (ACPs) • Security policies are often specified and maintained separately from application code

  3. Motivation • Security requirements change over time -> security policies often evolve • Security policy changes may introduce security faults (e.g., unauthorized access) • System developers execute system test cases to ensure that the behavior changes introduced by security policy changes are as expected

  4. Problem • Two pitfalls of executing all existing system test cases • Executing all existing system test cases is time consuming • Existing system test cases may not sufficiently expose the behavior changes induced by security policy changes • There are no existing approaches for effectively testing applications in the context of security policy evolution

  5. Our Goal • Regression system testing for policy evolution • Select and execute only those system test cases (from an existing test suite) that expose behavior changes • Augment system test cases to expose behavior changes that are not exposed by existing system tests

  6. Challenges • Select and augment regression system test cases impacted by policy changes with low false positives and false negatives • requires correctly analyzing the effects of policy changes • requires correctly monitoring the interactions between system test cases and security policies

  7. Definition: Coverage • Coverage for security policies • measures which rules of the policy are involved (i.e., "covered") in policy evaluation [Martin et al., WWW 07]
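
As a minimal sketch of this coverage notion (the Pdp, Request, and Decision types below are assumed stand-ins, not the interfaces used in the paper or in a real XACML engine), rule coverage can be gathered by asking the policy evaluation step to report which rules it consulted for each request:

    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical, simplified PDP abstraction used only for illustration:
    // the engine is assumed to add the ids of the rules it consults
    // to the supplied set while evaluating a request.
    interface Pdp {
        Decision evaluate(Request request, Set<String> coveredRuleIds);
    }

    record Request(String subject, String action, String resource) {}

    enum Decision { PERMIT, DENY, NOT_APPLICABLE }

    // Collects which rules are "covered" (involved in policy evaluation)
    // while a system test case exercises the application.
    class RuleCoverageCollector {
        private final Pdp pdp;
        private final Set<String> coveredRules = new HashSet<>();

        RuleCoverageCollector(Pdp pdp) { this.pdp = pdp; }

        Decision evaluate(Request request) {
            return pdp.evaluate(request, coveredRules);
        }

        Set<String> coveredRules() { return coveredRules; }
    }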

  8. Test Selection Technique I • Find system test cases impacted by policy changes via mutation analysis [Setup: rule-test correlation] • Create policy P's mutant Pm by changing rule ri's decision (e.g., Permit -> Deny) • Execute system test cases T (against P and Pm) • Correlate ri with the tests Timp (Timp ⊆ T) that expose different behaviors • Continue until each rule's correlated system test cases are found in turn
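
The setup step of Technique I could look roughly like the following sketch: for each rule, build a decision-flipped mutant, run the test suite against both policy versions, and correlate the rule with the tests whose observable behavior differs. The Policy and TestRunner abstractions are hypothetical placeholders, not the paper's implementation.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Objects;
    import java.util.Set;

    class MutationCorrelationSetup {
        // Hypothetical policy abstraction: exposes rule ids and can produce a
        // mutant in which one rule's decision is flipped (Permit <-> Deny).
        interface Policy {
            List<String> ruleIds();
            Policy mutateRuleDecision(String ruleId);
        }

        // Hypothetical runner: executes the whole system test suite against a
        // policy and returns an observable outcome per test (e.g., pass/fail).
        interface TestRunner {
            Map<String, String> run(List<String> testNames, Policy policy);
        }

        static Map<String, Set<String>> correlate(Policy p, List<String> tests, TestRunner runner) {
            Map<String, Set<String>> ruleToTests = new HashMap<>();
            for (String ruleId : p.ruleIds()) {
                Policy mutant = p.mutateRuleDecision(ruleId);              // P_m for rule r_i
                Map<String, String> onOriginal = runner.run(tests, p);     // T against P
                Map<String, String> onMutant = runner.run(tests, mutant);  // T against P_m
                Set<String> impacted = new HashSet<>();
                for (String test : tests) {
                    // r_i is correlated with the tests whose behavior changes
                    // when r_i's decision is flipped.
                    if (!Objects.equals(onOriginal.get(test), onMutant.get(test))) {
                        impacted.add(test);
                    }
                }
                ruleToTests.put(ruleId, impacted);
            }
            return ruleToTests;
        }
    }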

  9. Test Selection Technique I - cont [Test selection for policy changes] • Find the rules R impacted by policy changes • Select system test cases correlated with a rule r ∈ R. Cost: given n rules in P, we need to execute T 2*n times. However, the setup process can be conducted before policy changes are encountered.
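
Once the rule-test correlation map exists, the selection step itself is a simple lookup; a sketch with hypothetical names (the set of impacted rules would come from comparing the two policy versions):

    import java.util.LinkedHashSet;
    import java.util.Map;
    import java.util.Set;

    class ImpactedTestSelector {
        // Select the union of tests correlated with any rule impacted by the change.
        static Set<String> select(Map<String, Set<String>> ruleToTests, Set<String> impactedRules) {
            Set<String> selected = new LinkedHashSet<>();
            for (String rule : impactedRules) {
                selected.addAll(ruleToTests.getOrDefault(rule, Set.of()));
            }
            return selected;
        }
    }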

  10. Test Selection Technique II • Find system test cases impacted by policy changes by analyzing which rules are evaluated (i.e., covered) [Setup: rule-test correlation] • Execute system test cases T • Detect which rules are evaluated by each system test case • Correlate each rule r with its corresponding system test cases
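
A sketch of this coverage-based setup (the CoverageRunner interface is an assumed placeholder that runs one system test and reports the rules the PDP evaluated for it):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    class CoverageCorrelationSetup {
        // Hypothetical runner: executes one system test and returns the ids of
        // the rules that were evaluated (covered) while serving its requests.
        interface CoverageRunner {
            Set<String> coveredRules(String testName);
        }

        // Invert test -> covered rules into rule -> correlated tests,
        // executing the whole suite T only once.
        static Map<String, Set<String>> correlate(List<String> testNames, CoverageRunner runner) {
            Map<String, Set<String>> ruleToTests = new HashMap<>();
            for (String test : testNames) {
                for (String ruleId : runner.coveredRules(test)) {
                    ruleToTests.computeIfAbsent(ruleId, k -> new HashSet<>()).add(test);
                }
            }
            return ruleToTests;
        }
    }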

  11. Test Selection Technique II - cont [Test selection for policy changes] • Find the rules R impacted by policy changes • Select system test cases correlated with a rule r ∈ R. Cost: given n rules in P, we need to execute T only once. However, the setup process can be conducted before policy changes are encountered.

  12. Test Selection Technique III • Find system test cases impacted by policy changes by recording and evaluating requests [Setup: request collection] • Record all requests issued to the policy decision point (PDP) by each system test case
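
A sketch of the request-recording setup, using a wrapper around the PDP that logs every request together with the test that issued it (the Pdp, Request, and Decision types are assumed stand-ins, and beginTest would be called from the test harness):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class RecordingPdp {
        interface Pdp { Decision evaluate(Request request); }
        record Request(String subject, String action, String resource) {}
        enum Decision { PERMIT, DENY, NOT_APPLICABLE }

        private final Pdp delegate;
        private final Map<String, List<Request>> requestsByTest = new HashMap<>();
        private String currentTest = "<unknown>";

        RecordingPdp(Pdp delegate) { this.delegate = delegate; }

        // Called by the test harness before each system test runs.
        void beginTest(String testName) { currentTest = testName; }

        // Intercepts every request on its way to the real PDP and records it.
        Decision evaluate(Request request) {
            requestsByTest.computeIfAbsent(currentTest, k -> new ArrayList<>()).add(request);
            return delegate.evaluate(request);
        }

        Map<String, List<Request>> recordedRequests() { return requestsByTest; }
    }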

  13. Test Selection Technique III - cont [Test selection for policy changes] • Select the requests (and their corresponding system test cases) that evaluate to different decisions under the two policy versions. Cost: given n rules, we need to execute all system test cases only once.
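
The selection step could then replay the recorded requests against both policy versions; a sketch (the Evaluator interface stands in for a PDP loaded with a given policy version and is not a real library API):

    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    class RequestReplaySelector {
        // Hypothetical evaluator: asks a PDP loaded with the given policy
        // version for the decision on a (serialized) request.
        interface Evaluator {
            String evaluate(String policyVersion, String request);
        }

        static Set<String> select(Map<String, List<String>> requestsByTest,
                                  String oldPolicy, String newPolicy, Evaluator pdp) {
            Set<String> selected = new LinkedHashSet<>();
            for (Map.Entry<String, List<String>> entry : requestsByTest.entrySet()) {
                for (String request : entry.getValue()) {
                    // A test is selected if any of its recorded requests receives
                    // different decisions under the two policy versions.
                    if (!pdp.evaluate(oldPolicy, request).equals(pdp.evaluate(newPolicy, request))) {
                        selected.add(entry.getKey());
                        break;
                    }
                }
            }
            return selected;
        }
    }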

  14. Test Augmentation Technique • Augment system test cases for policy evolution • Collect the request-response pairs qs that expose different policy behaviors • Select only the pairs qsi (qsi ⊆ qs) that are not exposed by T • Find system test cases that issue requests highly similar to those in qsi • Manually modify these system test cases to issue a request q ∈ qsi
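
A sketch of the augmentation step under these assumptions: keep only the difference-exposing requests not already issued by T, and pair each with the existing test whose recorded requests are most similar, as a candidate for manual modification. The attribute-overlap similarity below is an illustrative choice, not necessarily the measure used in the work.

    import java.util.HashSet;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    class AugmentationSuggester {
        record Request(String subject, String action, String resource) {}

        // Illustrative similarity: number of attribute values two requests share.
        static int similarity(Request a, Request b) {
            int score = 0;
            if (a.subject().equals(b.subject())) score++;
            if (a.action().equals(b.action())) score++;
            if (a.resource().equals(b.resource())) score++;
            return score;
        }

        // differenceExposing: requests whose decisions differ across policy versions.
        // issuedByTests: requests already issued by the existing suite T, keyed by test name.
        static Map<Request, String> suggest(Set<Request> differenceExposing,
                                            Map<String, List<Request>> issuedByTests) {
            Set<Request> alreadyIssued = new HashSet<>();
            issuedByTests.values().forEach(alreadyIssued::addAll);

            Map<Request, String> suggestions = new LinkedHashMap<>();
            for (Request q : differenceExposing) {
                if (alreadyIssued.contains(q)) continue;   // already exposed by T
                String bestTest = null;
                int best = -1;
                for (Map.Entry<String, List<Request>> e : issuedByTests.entrySet()) {
                    for (Request issued : e.getValue()) {
                        int s = similarity(q, issued);
                        if (s > best) { best = s; bestTest = e.getKey(); }
                    }
                }
                suggestions.put(q, bestTest);  // candidate test to modify so it issues q
            }
            return suggestions;
        }
    }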

  15. Evaluation Subjects • A collection of Java programs interacting with security policies

  16. Research Questions • RQ1: How effectively do our proposed techniques select system test cases under policy changes? • Precision and recall • Cost of each technique: elapsed execution time and number of test runs • RQ2: How effectively does our test augmentation technique suggest system test cases that expose policy behavior differences which existing system test cases cannot expose? • Precision and recall

  17. Open Questions • How to correlate unit test cases with each changed location? • Our techniques are sound assuming we apply only rule-decision-change mutation • For rule addition/deletion, we may correlate unit test cases with a default fall-through rule or with non-applicable cases • Considering other types of mutants (e.g., rule combination) would be challenging

  18. Open Questions - cont' • How to partition the difference-exposing policy unit test cases produced by Margrave? • For OrBAC, each rule is evaluated by only one request, so each request likely represents one category (the outcome of Margrave needs to be synthesized to find all possible requests). • In general, an XACML policy may include rules applicable to more than one request; we may categorize requests based on their covering rules. For example, if req1 and req2 both cover rule 1, we classify these two requests into the same category.
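
One possible reading of that categorization, sketched below under the assumption that requests covering the same set of rules fall into the same category (the coveredRulesByRequest map would come from PDP evaluation; names are illustrative):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    class RequestPartitioner {
        // Groups requests by the set of rules they cover; e.g., if req1 and req2
        // both cover only rule 1, they land in the same category.
        static Map<Set<String>, List<String>> partition(Map<String, Set<String>> coveredRulesByRequest) {
            Map<Set<String>, List<String>> categories = new HashMap<>();
            for (Map.Entry<String, Set<String>> e : coveredRulesByRequest.entrySet()) {
                categories.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
            }
            return categories;
        }
    }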
