Erik Jonsson School of Engineering and Computer Science


Erik Jonsson School of Engineering and Computer Science

A Model-Checking In-lined Reference Monitoring system for ActionScript

Meera Sridhar and Kevin W. Hamlen

(The University of Texas at Dallas)

Supported in part by grants from AFOSR

Presentation at Adobe Systems Inc.

San Francisco, CA

November 19, 2009

FEARLESS Engineering

www.utdallas.edu



Beyond Access Control

  • Security Policy – A security policy defines the desired “secure” behavior of a system. More formally, consider a computer system to be a finite-state automaton, with a set of transition functions that change state. Then, a security policy is a statement that partitions the states of the system into a set of authorized, or secure, states and a set of unauthorized, or nonsecure, states.

    - Computer Security, Art and Science by Matt Bishop
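Bishop's definition can be sketched directly in code: model the system as a finite-state automaton and the policy as a partition of its states. The following Python sketch is purely illustrative; all names (states, events) are invented for the example, not taken from the talk.

```python
# Sketch of Bishop's definition: a system as a finite-state automaton
# whose states a security policy partitions into secure and nonsecure.
# All state and event names here are illustrative.

# Transition function: (state, event) -> next state.
transitions = {
    ("idle", "open"): "popup_open",
    ("popup_open", "close"): "idle",
    ("popup_open", "open"): "violation",
}

def run(start, events):
    """Replay a sequence of events, returning every state visited."""
    state, visited = start, [start]
    for e in events:
        state = transitions.get((state, e), state)
        visited.append(state)
    return visited

# The security policy: the set of authorized (secure) states.
authorized = {"idle", "popup_open"}

def satisfies(policy_states, trace):
    """True iff the execution never leaves the authorized set."""
    return all(s in policy_states for s in trace)

print(satisfies(authorized, run("idle", ["open", "close"])))  # True
print(satisfies(authorized, run("idle", ["open", "open"])))   # False
```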



Trustworthy = Small

  • Trusted Computing Base (TCB) – a small amount of software and hardware that security depends on and that we distinguish from a much larger amount that can misbehave without affecting security.

    - Lampson et al, Authentication in Distributed Systems

  • An important goal in computer security is to minimize the TCB.



Separation of Concerns

  • code-producer and code-consumer

    • we will use this terminology frequently

    • code-producer – the party that wrote the code

      • note that this can differ from the code-distributor; for example, in AIR, Adobe is the code-distributor, but the code-producer can be anyone who creates an AIR application and releases it to the public.

    • code-consumer – the end-user who will run the application



The need for more sophisticated security enforcement tools for Flash and AIR

  • Existing security mechanisms for ActionScript and related technologies (including the AVM, Flash, Flex and AIR) mainly fall into two categories:

    • Code-signing

    • Sandboxing

  • These suffice for certain classes of attacks (mainly access control), BUT:

    • code-signing

      • places the code-producer in the TCB

      • the consumer can either trust the code-producer and run the program, or not run it at all – there is no in-between

    • sandboxing

      • enforces only a small class of coarse-grained access control policies

      • policies are built into the AVM and runtime libraries.



The need for more sophisticated security enforcement tools for Flash and AIR, contd.

  • Limitations:

  • cannot enforce system- and application-specific policies

    • e.g. prohibiting write access to files with only certain file extensions

  • cannot enforce finer-grained policies

    • e.g. those that constrain arguments to individual ActionScript instructions (such as arguments to eval)

    • e.g. “programs cannot use the file system” is too coarse-grained

  • policy is encoded in the underlying OS/VM, so

    • the code-producer needs to know the policy – the policy cannot be modified or specified by the code-consumer after receiving the code

      • e.g., the Air Force wants to enforce policies that they cannot even disclose

    • changes to policy would need to be reflected in the OS/VM by modifying it – expensive, inflexible



Enter IRMs: What are they?

  • In-Lined Reference Monitors (IRMs)

    • inject runtime security guards directly into untrusted binaries

    • guards test whether an impending operation constitutes a policy violation

    • if so, some corrective action is taken to prevent the violation, such as premature termination.

    • result is self-monitoring code that can be safely executed without external monitoring
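The injection step above can be sketched as follows. This is a wrapper-style approximation in Python for illustration only – the real system rewrites ActionScript bytecode, and all names here (the guard, the limit, the operation) are invented for the sketch.

```python
# Illustrative sketch of an in-lined reference monitor (IRM):
# a guard is injected before the security-relevant operation; if the
# impending call would violate the policy, corrective action is taken
# (here: raising an exception instead of performing the operation).

class PolicyViolation(Exception):
    pass

def make_monitored(operation, limit):
    """Wrap `operation` so it may be invoked at most `limit` times."""
    count = 0
    def guarded(*args):
        nonlocal count
        if count >= limit:                 # injected guard
            raise PolicyViolation("operation limit reached")
        count += 1                         # injected security state update
        return operation(*args)
    return guarded

# Usage: the monitored program is now self-monitoring.
open_popup = make_monitored(lambda url: f"opened {url}", limit=3)
for _ in range(3):
    open_popup("http://example.com")       # allowed
try:
    open_popup("http://example.com")       # fourth call: violation
except PolicyViolation:
    print("halted by IRM guard")
```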



IRMs, contd.

IRMs

  • keep track of history of security-relevant events observed

  • enables enforcement of a powerful class of security policies

    (not precisely enforceable by any purely static analysis)

  • do not require any admin/VM-level privileges to enforce policy

    IRMs also

  • allow code consumers to specify or modify the security policy after receiving the code

    • the code-producer need not know the policy, unlike with static analyses, code-signing, or sandboxing

  • changes to the policy don’t require modifications to the underlying OS/VM



A Very Simple Example

Let’s say the desired security policy is the following:

The program should not open more than 3 pop-up windows.



A Very Simple Example, contd.

public class RUnsafeURL extends Sprite {

    public function RUnsafeURL() {
        //create textfield object here
        var theField:TextField = new TextField();
        theField.text = "Click me!";
        theField.border = true;
        theField.background = true;
        theField.selectable = false;
        addChild(theField);

        //add event listener here
        theField.addEventListener(MouseEvent.CLICK, clickListener);
    } //constructor RUnsafeURL

    private function clickListener(e:MouseEvent):void {
        var url:String = "javascript:window.open('http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos','title','width=800,height=600,toolbar=no,resizable=no,menubar=no,status=no,scrollbars=no');void(0);";
        while (true) {
            navigateToURL(new URLRequest("http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos"));
        }
    } //end method clickListener

} //end class definition



A Very Simple Example, contd.

public class RUnsafeURL extends Sprite {

    private var security:Number = 0;  //injected security state variable

    public function RUnsafeURL() {
        //create textfield object here
        var theField:TextField = new TextField();
        theField.text = "Click me!";
        theField.border = true;
        theField.background = true;
        theField.selectable = false;
        addChild(theField);

        //add event listener here
        theField.addEventListener(MouseEvent.CLICK, clickListener);
    } //constructor RUnsafeURL

    private function clickListener(e:MouseEvent):void {
        var url:String = "javascript:window.open('http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos','title','width=800,height=600,toolbar=no,resizable=no,menubar=no,status=no,scrollbars=no');void(0);";
        while (true) {
            if (security >= 3)  //injected guard: halt before a fourth window opens
                HALT;           //pseudocode for premature termination
            navigateToURL(new URLRequest("http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos"));
            security++;         //injected update of the security state
        }
    } //end method clickListener

} //end class definition



Binary-Level IRM Architecture

[Architecture diagram] Original SWF File → ABCExtractor → ABC File → Parser → Abstract Syntax Tree (AST) → Rewriter (driven by the Security Policy) → Instrumented AST → Code Generator → Instrumented ABC File → ABC Injector (recombined with the Original SWF Data) → Safe SWF File



For more information on what I’ve presented so far, please refer to the following paper:

M. Sridhar and K. W. Hamlen. ActionScript In-Lined Reference Monitoring in Prolog. In Proc. Intl. Symposium on Practical Aspects of Declarative Languages, 2010 (to appear).



Two Important Concerns

  • Does the IRM preserve good code?

    • inserted guards have no visible side effect, unless the code is policy-violating

    • depends on how strictly one defines “preserve”

      • not applicable to self-modifying code for example

    • best determined through testing

    • fairly long history of doing this (though not always under the name IRM)

      • the Java compiler, for example, inserts array bounds checks before any array access

      • the user doesn’t notice any difference, unless there is an array out-of-bounds violation

  • More important concern – how do we know that the rewriter prevents policy violations?

    • testing is not good enough, since an attacker is actively trying to find loopholes in the code

    • a rigorous proof is needed



The need for verification

  • ideally, we would formally verify the rewriter

  • also, want to minimize the TCB

    • rewriters are powerful, but therefore complex.

    • we generally have a rewriter tailored to a particular security policy class

      • so, the combined size of all the rewriters can be large

    • rewriters employ many optimization schemes, so any change to the optimization means a change to the TCB

      • not desirable, since we want to keep the TCB fairly stable



The need for verification, contd.

  • a verifier (separate, smaller program) checks the rewritten program against the security policy, and

  • either

    • deems the program to be policy-adherent (accepts), or

    • deems it to be policy-violating (rejects), and produces a counterexample – a sequence of events that led up to the security policy being violated in a certain program state

  • can take the rewriter out of the TCB

  • typically, a good verifier is general and less subject to change, and the verifier code is the same across security policy classes

  • therefore, much more desirable to have the verifier rather than the rewriter in the TCB



Verifying IRM Architecture

[Architecture diagram] The untrusted IRM Framework operates as before: Original SWF File → ABCExtractor → ABC File → Parser → AST → Binary Rewriters (R1, R2, …, Rn) → Instrumented AST → Code Generator → Instrumented ABC File → ABC Injector (recombined with the Original SWF Data) → Safe SWF File. Independently, the Verifier – a Parser feeding a Model-Checker + Abstract Interpreter, parameterized by the Security Policy – checks the Instrumented ABC File and emits either Reject + Counter-example or Accept + Verified ABC File. Only the Verifier lies within the Trusted Computing Base.



Using Model-Checking for Verification

  • Model-checking is an extremely powerful software verification paradigm that can verify properties

    • more complex than those typically expressible by type-systems

      • ActionScript already has some basic type-checking

      • while type-checking suffices to prove policies such as ensuring the argument to a jump is indeed a label, it does not prove policies like “do not open more than 3 pop-up windows”

    • more semantically flexible and abstract than those typically encoded by contracts

      • contracts tend to expect strict syntactic adherence

      • not flexible enough for various rewriting strategies



What is model-checking?

  • Informally, given a model of a system (in this case, the ActionScript program), and a safety property, a model-checker explores all possible states that the system can be in, checking whether the property is violated in any state.

  • If no violations occur, then the model-checker accepts.

  • If in a particular state the property is violated, then the model-checker produces the sequence of states that led up to the violating state.

  • clearly suited for verification

  • the main challenge with model-checking is state-space explosion – since the model-checker explores every possible state, it quickly runs out of memory, especially when there are loops

    • tackling this for IRMs has been one of the main focuses of our research
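The exploration just described can be sketched as an explicit-state search. This toy Python model-checker is illustrative only (real checkers operate on program semantics, not hand-built transition systems); the example transition system and property are invented for the sketch.

```python
from collections import deque

# Toy explicit-state model-checker: breadth-first exploration of the
# reachable states of a transition system, checking a safety property.
# Tracking visited states is what keeps loops from exploding the search.

def model_check(initial, successors, is_safe):
    """Return None if every reachable state is safe; otherwise return a
    counterexample: the sequence of states leading to the violation."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if not is_safe(state):
            return path                    # counterexample trace
        for nxt in successors(state):
            if nxt not in visited:         # skip already-explored states
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                            # accept: no violation reachable

# Example: a counter incremented by a loop; the safety property
# "counter stays below 4" is violated on the fourth increment.
succ = lambda n: [n + 1] if n < 10 else []
print(model_check(0, succ, lambda n: n < 4))    # [0, 1, 2, 3, 4]
print(model_check(0, succ, lambda n: n < 100))  # None (accepted)
```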



Key Elements of the Verifier

  • security policy is modeled as a security automaton

    • policy-writer uses a high-level policy language

      • can be anything from check-boxes to Linear Temporal Logic

    • high-level policy is reduced to an automaton

      • this idea comes from the literature (cf. Schneider’s security automata)

  • abstractly interprets untrusted programs, reasoning about possible event histories
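A security automaton for the running pop-up policy might be encoded as follows. This is a hypothetical sketch (the talk does not give a concrete encoding): states count the windows opened so far, and an event with no outgoing transition constitutes a violation.

```python
# Illustrative security automaton for "no more than 3 pop-up windows":
# states 0..3 count windows opened so far; a fourth `open` event has no
# outgoing transition, which the automaton treats as a policy violation.

class SecurityAutomaton:
    def __init__(self, transitions, start):
        self.transitions, self.state = transitions, start

    def step(self, event):
        """Advance on `event`; return False if the move is disallowed."""
        key = (self.state, event)
        if key not in self.transitions:
            return False                   # no transition: violation
        self.state = self.transitions[key]
        return True

# Transition relation reduced from the high-level policy: 0 -> 1 -> 2 -> 3.
popup_policy = {(n, "open"): n + 1 for n in range(3)}

sa = SecurityAutomaton(popup_policy, start=0)
print([sa.step("open") for _ in range(4)])  # [True, True, True, False]
```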



Key Elements of the Verifier

  • uses memoization and co-induction to reason about fixed points

    • suppose no loop iteration changes whether a security-relevant property holds

    • the infinite number of states can be replaced with a single, representative state

    • needed for termination of the model-checker

      For more information on this aspect specifically, please see:

      B. W. DeVries, G. Gupta, K. W. Hamlen, S. Moore, and M. Sridhar. ActionScript Bytecode Verification with Co-logic Programming. In Proc. of the ACM SIGPLAN Workshop on Prog. Languages and Analysis for Security (PLAS), 2009.
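The co-inductive idea can be sketched as follows (an illustrative Python sketch, not the Prolog implementation): when exploration revisits a state it has already seen, the infinitely many future loop iterations collapse into that single representative state, and the branch succeeds instead of unrolling forever.

```python
# Sketch of coinductive/memoized loop handling: revisiting a state means
# the loop changes nothing security-relevant, so the infinite unrolling
# is replaced by a single representative state and the branch is deemed
# safe -- this is what makes the check terminate on loops.

def safe_everywhere(state, successors, is_safe, seen=None):
    seen = set() if seen is None else seen
    if state in seen:
        return True                        # revisited state: coinductive success
    if not is_safe(state):
        return False                       # violation found
    seen.add(state)                        # memoize the explored state
    return all(safe_everywhere(n, successors, is_safe, seen)
               for n in successors(state))

# A two-state loop whose concrete execution is infinite, yet the check
# terminates because the revisit is collapsed into one state.
loop = {"a": ["b"], "b": ["a"]}
print(safe_everywhere("a", lambda s: loop[s], lambda s: s in {"a", "b"}))
# True
```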



Key Elements of the Verifier, contd.

  • another way to combat state space explosion is to use abstract interpretation

    • when certain properties of variable values, rather than their concrete values, suffice for reasoning about policy-adherence, concrete variable values can be replaced with abstract values from an abstract domain

    • can greatly reduce state space, since several concrete states can be mapped onto a single abstract state
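A standard textbook illustration of this idea (not the verifier's actual domain): replace each concrete integer with just its sign, so infinitely many concrete states collapse onto three abstract ones, at a controlled loss of precision.

```python
# Classic sign-abstraction sketch: infinitely many concrete integers map
# onto a few abstract values, yet the abstraction can still answer some
# policy questions (e.g. "is the counter ever negative?") soundly.

def alpha(n):
    """Abstraction function: concrete integer -> abstract sign."""
    return "neg" if n < 0 else "zero" if n == 0 else "pos"

# Abstract transfer function for addition: sound, but imprecise --
# "pos + neg" could have any sign, so it maps to "top" (unknown).
ADD = {
    ("pos", "pos"): "pos", ("neg", "neg"): "neg",
    ("zero", "zero"): "zero",
    ("pos", "zero"): "pos", ("zero", "pos"): "pos",
    ("neg", "zero"): "neg", ("zero", "neg"): "neg",
}

def abs_add(a, b):
    return ADD.get((a, b), "top")

print(abs_add(alpha(7), alpha(5)))    # pos
print(abs_add(alpha(7), alpha(-5)))   # top  (precision lost, soundly)
```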



Key Elements of the Verifier, contd.

Abstract Interpretation, contd.

  • domain is constructed based on knowledge of properties crucial for interpretation

  • of course, one has to take care that this abstraction preserves soundness, but also,

  • the precision of this abstraction determines the power of our analysis

    • it is possible to create an abstraction that is correct, but abstracts away too much information – this results in a high conservative rejection rate – that is, no policy-violating programs are accepted, but many policy-adherent programs are conservatively rejected.



Key Elements of the Verifier, contd.

  • model-checking rewritten code requires abstract interpretation of

    • not only program variable values, but also

    • the history of security-relevant events

  • uses a sophisticated abstract interpretation scheme

    • by creating an abstract interpretation lattice built from the security policy automaton



Key Elements of the Verifier, contd.

  • the model-checking step is encoded in the operational semantics of the abstract machine, where policy-violations are modeled as stuck states

  • thus, if the abstract machine reaches a stuck state, the program is rejected as policy-violating



Key Elements of the Verifier, contd.

  • We have proved that our method is sound – that is, adopting the proposed abstract interpretation scheme does not result in any policy-violating program being accepted by the model-checker.

  • We have shown convergence - that our model-checker will always terminate in polynomial time.



  • Please refer to the following paper for a detailed treatment:

    M. Sridhar and K. W. Hamlen. Model-checking In-lined Reference Monitors. In Proc. Intl. Conf. on Verification, Model-Checking and Abstract Interpretation, 2010 (to appear).



Implementation and Results

  • The rewriting system is a prototype, not yet production-level. The model-checking system is deliberately simple, just enough to verify these properties – we don’t want a very complex model-checker.

  • ABC Extractor/Injector in C

  • the rest in Prolog (which simplifies the implementation, though it needn’t be Prolog)

    • Definite Clause Grammars greatly simplify parsing

    • reversible nature of predicates facilitates code-generation as reverse-parsing

    • Prolog already has memoization (tabling) and coinduction

  • Lines of code:

    • binary-rewriters ~ 400 lines of Prolog code per security policy family

    • shared parser/generator ~ 900 lines

    • verifier ~ 2000 lines of code



Sample Policies

redir policy:

  • prohibits malicious URL-redirections by ABC ad applets

  • redirections are implemented at the bytecode level by navigateToURL system calls

  • The policy requires that method check_url(s) be called to validate destination s before any redirection to s may occur. Method check_url has a trusted implementation provided by the ad distributor and/or web host, and may incorporate dynamic information such as ad hit counts or webpage context.

  • Our IRM enforces this policy by injecting calls to check_url into untrusted applets. For better runtime efficiency, it positions some of these calls early in the program's execution (to pre-validate certain URLs) and injects runtime security state variables that avoid potentially expensive duplicate calls by tracking the history of past calls.
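The injected guard and its history-tracking state might look like the following sketch. This is illustrative Python, not the bytecode-level IRM; the body of check_url and the caching variable are hypothetical stand-ins.

```python
# Sketch of the redir guard: every navigation must be preceded by a
# successful check_url, and an injected state variable caches past
# results so the same URL is never validated twice.

validated = {}                             # injected security state

def check_url(url):
    """Hypothetical stand-in for the distributor's trusted validator."""
    return url.startswith("http://trusted.example.com/")

def guarded_navigate(url, log):
    if url not in validated:               # avoid duplicate checks
        validated[url] = check_url(url)
    if not validated[url]:
        raise RuntimeError("redirect blocked: " + url)
    log.append(url)                        # the actual redirection

log = []
guarded_navigate("http://trusted.example.com/ad1", log)
guarded_navigate("http://trusted.example.com/ad1", log)  # cached check
try:
    guarded_navigate("http://evil.example.com/", log)
except RuntimeError as e:
    print(e)                               # the redirect is suppressed
print(log)
```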



Sample Policies, contd.

postok policy:

  • sanitizes strings entered into message box widgets

  • This can be helpful in preventing cross-site scripting attacks, privacy violations, and buffer-overflow exploits that affect older versions of the ActionScript VM.

  • We enforced the policy on the Posty AIR application, which allows users to post messages to social networking sites such as Twitter, Jaiku, Tumblr, and Friendfeed.



Sample Policies, contd.

flimit policy:

  • enforces a resource bound that disallows the creation of more than n files on the user's machine

  • enforced this policy on the FedEx Desktop AIR application, which continuously monitors a user's shipment status and sends tracking information directly to his or her desktop

  • IRM implements the policy by injecting a counter into the untrusted code that tracks file creations



Experimental Results

All tests were performed on an Intel Core 2 Duo machine running Yap Prolog v5.1.4.



Our papers are available at:

www.utdallas.edu/~meera.sridhar



Bibliography

  • B. W. DeVries, G. Gupta, K. W. Hamlen, S. Moore, and M. Sridhar. ActionScript Bytecode Verification with Co-logic Programming. In Proc. of the ACM SIGPLAN Workshop on Prog. Languages and Analysis for Security (PLAS), 2009.

  • K. W. Hamlen, G. Morrisett, and F. B. Schneider. Computability Classes for Enforcement Mechanisms. In ACM Trans. Prog. Languages and Systems, 2006.

  • F. B. Schneider. Enforceable Security Policies. ACM Trans. Information and System Security, 3:30–50, 2000.

  • M. Sridhar and K. W. Hamlen. Model-checking In-lined Reference Monitors. In Proc. Intl. Conf. on Verification, Model-Checking and Abstract Interpretation, 2010 (to appear).

  • M. Sridhar and K. W. Hamlen. ActionScript In-Lined Reference Monitoring in Prolog. In Proc. Intl. Symposium on Practical Aspects of Declarative Languages, 2010 (to appear).



Acknowledgements

  • We thank the Air Force Office of Scientific Research (AFOSR) for funding part of this research.

  • We thank Dr. Gopal Gupta, Dr. Feliks Kluzniak, and Dr. R. Chandrasekaran from The University of Texas at Dallas for many helpful discussions and feedback.

  • Finally, many thanks to Peleus Uhley here for giving us this opportunity to share our work with you.

