
The case for Abstracting Security Policies

Richard Sharp, Intel Research, Cambridge (UK), 25th June 2003. Joint work with Anil Madhavapeddy, Alan Mycroft and David Scott (Computer Laboratory, University of Cambridge).


Presentation Transcript


  1. The case for Abstracting Security Policies. Richard Sharp, Intel Research, Cambridge (UK), 25th June 2003. Joint work with Anil Madhavapeddy, Alan Mycroft and David Scott (Computer Laboratory, University of Cambridge)

  2. Talk Overview • The (well known) problem of untrusted code • Our system: making applications accountable • Say what you’re going to do up front • If it’s reasonable we’ll let you start, and we’ll hold you to it! • Stateful Structured Policy Language • Non-determinism • Signals • Function calls • Related work • Conclusions and further work

  3. Untrusted Code • Executing untrusted code is a serious problem • Malicious code is a problem • Badly written code is also a problem! • PKI / digital signatures are only “half a solution”: • Need to check credentials • Need to check an agent only does what is necessary to achieve its task • Existing “suck it and see” model is clearly inappropriate

  4. Abstract Application Models • Applications are often very complicated and very large! • Need some way of specifying an application’s “intentions” • but at what level of abstraction do we do this? • At some level the application binary is its intentions… • Is this reasonable? Consider a real-life analogy… • How simple do we make the models? Clearly many tradeoffs [Diagram: application binary vs. simplified model capturing high-level intention]

  5. Our Framework (1/3) • Augment code with abstract models (cf. Schieder; Hall) • Dynamically ensure the app sticks to its model • But we can’t trust the abstract model… • So what does this buy us? [Diagram: monitor process checking the untrusted process against the abstract model]

  6. Our Framework (2/3) • We gain analysability: • Policy is several orders of magnitude simpler • Written in a high-level language [Diagram: abstract model checked against a safety policy by a static checker; monitor process and untrusted process in userspace, above the kernel’s system call interface]

  7. Our Framework (3/3) • For now we’re using tried-and-tested syscall tracing for the dynamic enforcement [Diagram: monitor process enforcing the abstract model on the untrusted process at the userspace/kernel system call interface]
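For intuition, the monitor's decision loop can be sketched as a function over the intercepted syscall stream (a simplified sketch, not the actual implementation; the real system intercepts live syscalls via tracing, while here the trace is just a list of events):

```python
# Simplified sketch of the monitor's decision loop: walk the stream of
# intercepted (syscall_name, args) events through a state machine and
# stop at the first call the model does not permit.
def monitor(trace, transitions, start):
    """transitions: dict mapping (state, syscall_name) -> next_state.
    Returns the final state, or None on the first policy violation
    (where the real monitor would deny the call or kill the process)."""
    state = start
    for name, _args in trace:
        key = (state, name)
        if key not in transitions:
            return None  # violation: this syscall is not allowed here
        state = transitions[key]
    return state

# Toy model: one socket(), then reads/writes, then exit.
T = {(0, "socket"): 1, (1, "read"): 1, (1, "write"): 1, (1, "exit"): 2}
monitor([("socket", ()), ("read", ()), ("exit", ())], T, 0)  # -> 2
monitor([("read", ())], T, 0)                                # -> None
```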

  8. Digression: Related Work • Other researchers have built syscall monitors: • e.g. Systrace, Cerb, Subterfuge • This work invariably focuses on low-level implementation details • e.g. minimising context switches etc. • We extend this, focussing on high-level policy specification languages • Our policies increase expressivity without sacrificing analysability • We are not dead set on syscalls – good start though…

  9. Policy Specification Language • Describes abstract model of application in terms of what syscalls are allowed when. • Structured: functions, module system • Flexible: parameterised policies, syscall arg checks • Stateful: finer grained modelling of behaviour • The policy looks like a program: sequencing, iteration, conditionals, C-like syntax • Can both monitor and transform syscalls

  10. An example (simplified ping policy):

import libc;

void ping(stdout, sock) {
  sendto(sock, *, *, *, *, *);
  recvfrom(sock, *, *, *, *, *);
  optional libc.printf(stdout);
}

// parameterised ping policy entry point:
void main(stdin, stdout, stderr, host) {
  let ping_sock = socket(AF_INET, SOCK_RAW, *);
  multiple { ping(stdout, ping_sock); }
  libc.exit(*);
}

[Diagram: policy automaton with states socket, ping, exit]

  11. Language primitives (1/3) • <syscall> ( argument patterns ) • allows one occurrence of the specified syscall if its arguments match the patterns • e.g. sendto(sock, *, *, *, *, *) • A returned syscall value can be bound: • e.g. let ping_sock = socket(*, SOCK_RAW, *); • Binds ping_sock to the raw socket file descriptor • ping_sock can be used later in the policy spec.
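These two primitives can be sketched as a small argument matcher (semantics inferred from this slide, so treat it as an assumption): "*" matches any argument, and a name either binds to the observed value or must agree with an earlier binding:

```python
def match_call(pattern, actual, env):
    """Match one syscall's argument list against a policy pattern.
    Returns the (possibly extended) binding environment on success,
    or None if the call does not match the pattern."""
    if len(pattern) != len(actual):
        return None
    env = dict(env)  # don't mutate the caller's bindings
    for p, a in zip(pattern, actual):
        if p == "*":
            continue                  # wildcard: anything goes
        if p in env and env[p] != a:
            return None               # a bound variable must agree
        env[p] = a                    # first use binds, like let ping_sock = ...
    return env

# socket(...) returning fd 3 binds ping_sock to 3 ...
env = match_call(["ping_sock"], [3], {})
# ... and a later sendto(ping_sock, ...) must then use the same fd:
ok  = match_call(["ping_sock", "*"], [3, b"payload"], env)   # matches
bad = match_call(["ping_sock", "*"], [4, b"payload"], env)   # None
```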

  12. Language primitives (2/3) • statement; statements • Sequential composition • either { statement-block } or { statement-block } • Non-deterministic choice (c.f. Occam’s Alt or “|” operator in regular expressions). Models conditionals • multiple { statement-block } • Repetition (c.f. “*” operator in regular expressions). Models iteration.

  13. Language primitives (3/3) • always-allow ( call-list ) { statement-block } • Whilst inside statement-block, all calls in call-list can occur at any time and in any order. • during { statement-block } handle { statement-block } • Models asynchronous signal handling • User-defined functions / calls • Very useful for structuring; code re-use within a policy • All calls in-lined at compile time (essentially macros)
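The always-allow primitive, for example, can be approximated by letting a background set of calls interleave freely with the sequenced ones (a toy sketch under assumed semantics, not the compiler's actual construction):

```python
def check(trace, sequence, always):
    """Accept the trace if the calls in `sequence` occur in order,
    with calls from the `always` set permitted at any time between
    them (approximating always-allow around a statement block)."""
    i = 0
    for name in trace:
        if i < len(sequence) and name == sequence[i]:
            i += 1                 # next sequenced call observed
        elif name in always:
            continue               # background call: always allowed
        else:
            return False           # neither expected nor always-allowed
    return i == len(sequence)      # the whole sequence must complete

check(["read", "write", "read", "close"], ["write", "close"], {"read"})  # True
check(["unlink"], ["write"], {"read"})                                   # False
```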

  14. Ping revisited • Full ping policy (in paper): • Deals with signal handling, IO, host-name resolution • So it really captures the “essence of a ping”! • Only 25 lines of policy specification code • In contrast ping source is 1.5 kLoC • (Not counting the 20 header files it includes) • We win analysability because: • Spec is much shorter than application code • Policy specification language more analysable than C!

  15. Policy compilation • Policies compiled to NDFAs • This is also how we define language semantics! • Can use subset construction to convert to DFA at compile-time (not always tractable but faster code) • … or just keep track of multiple states at run-time • High-level optimisations reduce states • e.g. if “handle” block contains exit we only require one copy of it (don’t have to jump back)
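Keeping track of multiple states at run time is standard NFA simulation; a sketch in Python (toy transition table for the simplified ping policy, not the real compiler's output):

```python
# Run a policy NFA by tracking the set of live states at run time,
# the alternative to subset construction mentioned above.
def nfa_step(states, transitions, syscall):
    """transitions: dict mapping (state, syscall) -> set of next states.
    Returns the new set of live states (empty set = policy violation)."""
    nxt = set()
    for s in states:
        nxt |= transitions.get((s, syscall), set())
    return nxt

# Toy encoding of: socket; multiple { sendto; recvfrom }; exit
T = {
    (0, "socket"):   {1},
    (1, "sendto"):   {2},
    (2, "recvfrom"): {1},
    (1, "exit"):     {3},
}
states = {0}
for call in ["socket", "sendto", "recvfrom", "sendto", "recvfrom", "exit"]:
    states = nfa_step(states, T, call)
    assert states, f"policy violation at {call}"
# states is now {3}: the accepting state
```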

  16. Transforming syscalls • The monitor process can also transform syscalls: • This allows us to code things like “chroot” and “jail” in policies • Can essentially modify certain aspects of an application’s behaviour to meet security requirements • Also very useful for intercepting and transforming “execve” calls (to be explained!)

transform open(fname, flags, mode) -> open(“/chroot/” + fname, flags, mode)
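The chroot-style rewrite on this slide can be sketched as a pure function over the intercepted arguments (hypothetical helper; the path-normalisation step is our addition to stop ".." from escaping the prefix, and is not shown on the slide):

```python
import os.path

def transform_open(fname, flags, mode, prefix="/chroot"):
    """Rewrite an intercepted open(fname, flags, mode) so every path
    is re-rooted under `prefix`, mimicking the transform rule above."""
    # Normalise relative to "/" first so "../.." cannot climb out.
    safe = os.path.normpath("/" + fname).lstrip("/")
    return (os.path.join(prefix, safe), flags, mode)

transform_open("etc/passwd", 0, 0)        # ('/chroot/etc/passwd', 0, 0)
transform_open("../../etc/passwd", 0, 0)  # also '/chroot/etc/passwd'
```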

  17. Policy Parameterisation is important • Consider a policy for “cp” (the Unix copy program) • Without knowledge of its arguments we would have to allow access to all areas! • This is not an ideal situation. • By parameterising over arguments we can ensure that “cp” only messes with the files you asked it to. • … but there is a subtle issue of trust here. • How do we know the shell passes the same arguments to both “cp” and its associated policy?

  18. Do we need a trusted shell? • No; we can give the shell its own monitor • Transform “execve”: • Now argument passing is the job of the trusted monitor • This is good: the TCB remains small [Diagram: the shell’s monitor intercepts execve and starts cp under its own monitor]
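Conceptually, the execve transformation re-roots the child under its own monitor, which then hands identical arguments to both the program and its policy. A sketch (all paths and the "<path>.pol" naming convention are hypothetical placeholders, not from the paper):

```python
def transform_execve(path, argv, monitor="/usr/local/bin/monitor"):
    """Rewrite an intercepted execve(path, argv) so the child runs
    under a monitor loaded with the matching policy file. Because the
    monitor builds both argument lists itself, the shell cannot pass
    different arguments to the program and to its policy."""
    new_argv = [monitor, "--policy", path + ".pol", path] + list(argv[1:])
    return monitor, new_argv

prog, args = transform_execve("/bin/cp", ["cp", "a", "b"])
# args: ['/usr/local/bin/monitor', '--policy', '/bin/cp.pol', '/bin/cp', 'a', 'b']
```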

  19. Statefulness is important • Applications should be restricted from performing high-privilege ops as much as possible • If we don’t model application state: • Any operation the app ever requires must be allowed all of the time • By modelling application state: • We restrict the places in which high-privilege operations can be executed • Protects against buffer-overrun attacks that subvert control flow
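The point can be made concrete with a two-phase allow-table (a toy sketch, not from the paper): a stateless policy has to take the union of both rows, so the privileged call stays reachable from anywhere, including from hijacked code.

```python
# Phase-indexed allow-sets: privileged setup calls are only legal
# during "init"; once serving, the process is confined to a narrow set.
ALLOWED = {
    "init":  {"open", "bind", "setuid"},
    "serve": {"read", "write", "accept"},
}

def allowed(phase, syscall):
    return syscall in ALLOWED[phase]

allowed("init", "setuid")    # True: permitted during startup
allowed("serve", "setuid")   # False: blocked once serving

# A stateless policy is the union of all rows, so setuid would be
# allowed everywhere, even after a buffer overrun subverts control flow.
STATELESS = set().union(*ALLOWED.values())
```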

  20. What have we achieved? • Flexible policy specification language which captures the abstract intent of a program • Lots of existing work in the area of statically checking application properties (e.g. PCC) • Our work enables us to make static checking more tractable at the expense of dynamic monitoring • Varying the granularity of policies lets us move along the static/dynamic checking spectrum

  21. Conclusions • We have demonstrated a policy specification language that is: • Scalable, flexible, analysable and readable • Able to describe realistic policies • We have outlined our current framework for enforcing policies. • Is syscall tracing the right way to enforce policies? • Not perfect in some respects… but a good starting point • In our work we are more interested in policy specification • How else could these policies be enforced? • Inside a Virtual Machine perhaps… e.g. JVM or Microsoft CLR • Hardware/OS support (e.g. hyperthreading?)

  22. Future Work • More work on “static policy auditing” • Proof checking • Is model checking viable in practice? • Work on static/dynamic code analysis to automatically generate policies • Follows on from Wagner/Dean • How well does the work carry over to other areas? • active networks • mobile agent-based systems • extend our work on web application security with a more flexible policy language
