
Mining Likely Properties of Access Control Policies via Association Rule Mining

JeeHyun Hwang (1), Tao Xie (1), Vincent Hu (2), and Mine Altunay (3)
(1) North Carolina State University, (2) National Institute of Standards and Technology, (3) Fermi National Laboratory
(DBSec 2010)


Presentation Transcript


  1. Mining Likely Properties of Access Control Policies via Association Rule Mining
  JeeHyun Hwang (1), Tao Xie (1), Vincent Hu (2), and Mine Altunay (3)
  (1) North Carolina State University, (2) National Institute of Standards and Technology, (3) Fermi National Laboratory
  (DBSec 2010)

  2. Access Control Mechanism
  • Access control mechanisms control which subjects (such as users or processes) have access to which resources.
  • [Diagram: a Request is evaluated against the Policy, producing a Response (Permit, Deny, or Not-Applicable)]
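As a concrete illustration of this request/response flow, here is a minimal sketch of a first-applicable policy evaluator; the rule and request shapes are hypothetical and do not reflect the paper's XACML encoding.

```python
# Illustrative sketch only: a toy first-applicable policy evaluator.
from dataclasses import dataclass

@dataclass
class Rule:
    roles: set        # subject attribute values the rule targets
    resources: set    # resource attribute values
    actions: set      # action attribute values
    decision: str     # "Permit" or "Deny"

def evaluate(policy, role, resource, action):
    """Return the decision of the first rule whose target matches the request."""
    for rule in policy:
        if role in rule.roles and resource in rule.resources and action in rule.actions:
            return rule.decision
    return "Not-Applicable"   # no rule applies to this request

policy = [
    Rule({"Faculty"}, {"InternalGrade", "ExternalGrade"}, {"View", "Assign"}, "Permit"),
    Rule({"Student"}, {"ExternalGrade"}, {"Receive"}, "Permit"),
]
print(evaluate(policy, "Student", "ExternalGrade", "Receive"))  # Permit
print(evaluate(policy, "Student", "InternalGrade", "View"))     # Not-Applicable
```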

  3. Motivation
  • Access control policies often include a large number of rules
  • Misconfigurations and mistakes in access control policies lead to security problems
  • Need to ensure the correct behaviors of policies
  • Property verification: check whether properties are satisfied by a policy
  • Violations of a property expose policy faults
  • Confidence in policy correctness depends on the quality of the specified properties

  4. Problem
  • Properties are often not written in practice
  • Writing properties is not trivial
  Our proposed solution: mine likely properties automatically based on correlations of attribute values.

  5. Solution: Mining Likely Properties
  • A policy often exhibits similar behaviors across attribute values (e.g., the Faculty and Lecturer roles)
  • Our approach mines likely properties via association rule mining
  • Example: if a Lecturer is permitted to conduct certain actions, a Faculty member is likely to be permitted to conduct the same actions
  • Violations of likely properties are deviations from normal policy behaviors
  • Policy authors need to inspect violations

  6. Outline
  • Background and Motivation
  • Likely-Property Templates
  • Example
  • Framework
  • Relation Table Generation
  • Association Rule Mining
  • Likely-Property Verification
  • Evaluation Results
  • Conclusion

  7. Likely-Property Templates
  • Implication relation: likely properties correlate decision dec1 (Permit or Deny) for an attribute value v1 with decision dec2 for another attribute value v2
  • {Item (v1, dec1)} -> {Item (v2, dec2)}
  • Implication relation types:
  • Subject attribute item sets: {Item1 ({TA}, Permit)} -> {Item2 ({Faculty}, Permit)}
  • Action attribute item sets: {Item1 ({Assign}, Permit)} -> {Item2 ({View}, Permit)}
  • Subject-action attribute item sets: {Item1 ({TA, Assign}, Permit)} -> {Item2 ({Faculty, Assign}, Permit)}
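To make the templates concrete, here is a small sketch of one possible encoding of items and implication relations; the tuple-based representation is an assumption for illustration, not the paper's data structure.

```python
# Illustrative encoding (assumed, not the paper's): an item pairs a set of attribute
# values with a decision; an implication relation maps one item to another.

def item(values, decision):
    return (frozenset(values), decision)

# Subject attribute item sets: {Item1 ({TA}, Permit)} -> {Item2 ({Faculty}, Permit)}
subject_rel = (item({"TA"}, "Permit"), item({"Faculty"}, "Permit"))

# Action attribute item sets: {Item1 ({Assign}, Permit)} -> {Item2 ({View}, Permit)}
action_rel = (item({"Assign"}, "Permit"), item({"View"}, "Permit"))

# Subject-action attribute item sets:
# {Item1 ({TA, Assign}, Permit)} -> {Item2 ({Faculty, Assign}, Permit)}
subject_action_rel = (item({"TA", "Assign"}, "Permit"),
                      item({"Faculty", "Assign"}, "Permit"))
```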

  8. Example
  If role = Faculty and resource = (ExternalGrade or InternalGrade) and action = (View or Assign) then Permit
  If role = TA and resource = (InternalGrade) and action = (View or Assign) then Permit
  If role = Student and resource = (ExternalGrade) and action = (Receive) then Permit
  If role = Family and resource = (ExternalGrade) and action = (Receive) then Permit
  If role = Lecturer and resource = (ExternalGrade or InternalGrade) and action = (Assign or View) then Permit
  Otherwise, Deny
  Faulty rule (seeded fault): the action (View or Assign) is replaced; Receive is used instead

  9. Example (cont.)
  • Implication relations R1 with 100% confidence
  • Implication relations R2 with at least 65% confidence

  10. Framework

  11. Relation Table Generation
  • Find all possible request-response pairs in a policy
  • Generate relation tables (including all request-response pairs) of interest
  • The tables serve as input for an association rule mining tool
  • Example: relation table for implication relations of the action attribute; rows: Subject x Resource, columns: Action (see the sketch below)
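A hedged sketch of this step, assuming a helper evaluate(policy, role, resource, action) like the toy evaluator shown earlier: enumerate the (subject, resource) combinations as rows and the actions as columns, filling each cell with the policy's decision. The function names and data shapes are illustrative, not the paper's tool.

```python
# Sketch: build the relation table for action-attribute implication relations.
# Rows: (subject, resource) combinations; columns: actions; cells: decisions.
from itertools import product

def relation_table(policy, roles, resources, actions, evaluate):
    table = {}
    for role, resource in product(roles, resources):   # one row per subject x resource
        row = {}
        for action in actions:                          # one column per action
            row[action] = evaluate(policy, role, resource, action)
        table[(role, resource)] = row
    return table

# Example usage (toy attribute domains):
# table = relation_table(policy, {"Faculty", "TA"}, {"InternalGrade"},
#                        {"View", "Assign"}, evaluate)
# table[("TA", "InternalGrade")]["View"]  -> "Permit"
```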

  12. Association Rule Mining
  • Given a relation table, find implication relations of attributes via association rule mining
  • Find three types of implication relations
  • Report implication relations with confidence values over a given threshold
  • confidence(X -> Y) = supp(X ∪ Y) / supp(X)
  • supp(X) = D / T, where T is the total number of rows and D is the number of rows that include attribute-decision X
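A minimal sketch of mining action-attribute implication relations from such a relation table using the support and confidence definitions above; in practice an off-the-shelf association rule miner would be used, so this brute-force version is illustrative only.

```python
# Sketch: mine action-attribute implication relations (a1, d1) -> (a2, d2) from a
# relation table built as above. supp(X) = D / T, where T is the total number of rows
# and D is the number of rows containing attribute-decision X.

def mine_action_rules(table, decisions=("Permit", "Deny"), min_confidence=0.65):
    rows = list(table.values())
    if not rows:
        return []
    total = len(rows)
    rules = []
    for a1 in rows[0]:                 # candidate left-hand-side action
        for a2 in rows[0]:             # candidate right-hand-side action
            if a1 == a2:
                continue
            for d1 in decisions:
                for d2 in decisions:
                    supp_x = sum(1 for r in rows if r[a1] == d1) / total
                    supp_xy = sum(1 for r in rows if r[a1] == d1 and r[a2] == d2) / total
                    if supp_x > 0 and supp_xy / supp_x >= min_confidence:
                        rules.append(((a1, d1), (a2, d2), round(supp_xy / supp_x, 3)))
    return rules
```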

  13. Likely-Property Verification
  • Verify a policy against the given likely properties and find counterexamples
  • Inspect counterexamples to determine whether they expose a fault
  • Rationale: counterexamples (which do not satisfy the likely properties) deviate from the policy's normal behaviors and are special cases worth inspection
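A sketch of the verification step under the same assumed table representation: for a likely property (a1, d1) -> (a2, d2) over the action attribute, collect the requests that satisfy the premise but not the conclusion; these are the counterexamples handed to the policy author for inspection.

```python
# Sketch: check a likely property (a1, d1) -> (a2, d2) against the relation table and
# collect counterexamples, i.e., rows where the premise holds but the conclusion does not.

def counterexamples(table, prop):
    (a1, d1), (a2, d2) = prop
    violations = []
    for (role, resource), row in table.items():
        if row.get(a1) == d1 and row.get(a2) != d2:
            # This request deviates from the policy's normal behavior: inspect it.
            violations.append((role, resource, a2, row.get(a2)))
    return violations

# Example: counterexamples(table, (("Assign", "Permit"), ("View", "Permit")))
```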

  14. Basic and Prioritization Techniques
  • Basic technique: inspect counterexamples in no particular order
  • Prioritization technique: inspect counterexamples in order of their fault-detection likelihood
  • Inspect duplicate counterexamples first
  • Inspect counterexamples produced from likely properties with fewer counterexamples next
  The prioritization technique is designed to reduce inspection effort (see the sketch below).
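One plausible ordering consistent with the two heuristics above, sketched in Python; the exact scoring and tie-breaking used in the paper are not specified here, so this is an assumption.

```python
# Sketch: prioritize counterexamples for inspection.
# cex_by_property maps each likely property to the list of counterexamples it produced.
from collections import Counter

def prioritize(cex_by_property):
    # How many properties reported each counterexample (duplicates rank first).
    dup_count = Counter(c for cexs in cex_by_property.values() for c in set(cexs))
    ranked = []
    for prop, cexs in cex_by_property.items():
        for c in cexs:
            # Sort key: more duplicates first, then properties with fewer counterexamples.
            ranked.append((-dup_count[c], len(cexs), c))
    ranked.sort()
    seen, ordered = set(), []
    for _, _, c in ranked:
        if c not in seen:              # inspect each distinct counterexample once
            seen.add(c)
            ordered.append(c)
    return ordered
```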

  15. Evaluation
  • RQ1: How much higher is the percentage of faults detected by our approach compared to an existing related approach [Martin&Xie Policy 2006]?
  • RQ2: How much lower is the percentage of distinct counterexamples generated by our approach compared to the existing approach?
  • RQ3: For cases where a fault in a faulty policy is detected by our approach, what percentage of distinct counterexamples (for inspection) is reduced by our prioritization technique?

  16. Metrics
  • Fault-detection ratio (FR)
  • Counterexample count (CC)
  • Counterexample-reduction ratio (CRB) for our approach over the existing approach
  • Counterexample-reduction ratio (CRP) for the prioritization technique over the basic technique

  17. Evaluation Setup
  • Seed a policy with faults to synthesize faulty policies
  • One fault in each faulty policy, for ease of evaluation
  • Four fault types (see the sketch below):
  • Change-Rule Effect (CRE)
  • Rule-Target True (RTT)
  • Rule-Target False (RTF)
  • Removal Rule (RMR)
  • Compare results of our approach with those of the previous approach (DT) based on decision trees [Martin&Xie Policy 2006]
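A sketch of one way the four fault types could be seeded into a toy rule-based policy, one fault per faulty variant; the dict-based rule shape and the attribute domains are assumptions and do not reflect the actual XACML mutation tooling.

```python
# Sketch: seed one fault per faulty policy variant.
# A rule is a dict {"roles": set, "resources": set, "actions": set, "decision": str}
# (assumed shape). The attribute domains below are those of the toy grade example.
import copy

ALL_ROLES = {"Faculty", "TA", "Student", "Family", "Lecturer"}
ALL_RESOURCES = {"InternalGrade", "ExternalGrade"}
ALL_ACTIONS = {"View", "Assign", "Receive"}

def seed_fault(policy, index, fault_type):
    faulty = copy.deepcopy(policy)
    rule = faulty[index]
    if fault_type == "CRE":    # Change-Rule Effect: flip the rule's decision
        rule["decision"] = "Deny" if rule["decision"] == "Permit" else "Permit"
    elif fault_type == "RTT":  # Rule-Target True: the rule's target matches every request
        rule.update(roles=ALL_ROLES, resources=ALL_RESOURCES, actions=ALL_ACTIONS)
    elif fault_type == "RTF":  # Rule-Target False: the rule's target matches no request
        rule["roles"] = set()
    elif fault_type == "RMR":  # Removal Rule: delete the rule entirely
        del faulty[index]
    return faulty
```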

  18. 4 XACML Policy Subjects
  • Real-life access control policies
  • The number of rules ranges from 12 to 306

  19. Evaluation Results (1/2)
  FR: fault-detection ratio; CC: counterexample count; CRB: counterexample-reduction ratio for our approach over the DT approach; CRP: counterexample-reduction ratio for the prioritization technique over the basic technique
  • DT, Basic, and Prioritization achieve fault-detection ratios of 25.9%, 62.3%, and 62.3% on average, respectively
  • Our approach (with either the Basic or Prioritization technique) outperforms DT in terms of fault-detection capability
  • Our approach reduces the number of counterexamples by 55.5% over DT
  • Our approach significantly reduces the number of counterexamples while detecting a higher percentage of faults (addressing RQ1)
  • Prioritization reduces the counterexamples for inspection by 38.5% on average over Basic (Column "% CRP")

  20. Evaluation Results (2/2)
  Fault-detection ratios of the faulty policies
  • Prioritization and Basic achieve the highest fault-detection capability for policies with RTT, RTF, or RMR faults

  21. Conclusion
  • A new approach that mines likely properties characterizing correlations of policy behaviors w.r.t. attribute values
  • Verification of the policy against likely properties to inspect whether the policy includes a fault
  • An evaluation on 4 real-world XACML policies
  • Our approach achieved >30% higher fault-detection capability than the previous related approach based on decision trees
  • Our approach helped reduce the counterexamples for inspection by >50% compared to the previous approach

  22. Questions?

  23. Related Work
  • Assessing quality of policy properties in verification of access control policies [Martin et al. ACSAC 2008]
  • Inferring access-control policy properties via machine learning [Martin&Xie Policy 2006]
  • Detecting and resolving policy misconfigurations in access-control systems [Bauer et al. SACMAT 2008]

  24. Discussion
