
Security Through Obscurity



Presentation Transcript


  1. Security Through Obscurity Clark Thomborson Version of 7 December 2011 for Mark Stamp’s CS266 at SJSU

  2. Questions to be (Partially) Answered • What is security? • What is obscurity? • Is obscurity necessary for security? • How can we obscure a computation or a communication?

  3. What is Security? (A Taxonomic Approach) • The first step in wisdom is to know the things themselves; • this notion consists in having a true idea of the objects; • objects are distinguished and known by • classifying them methodically and • giving them appropriate names. • Therefore, classification and name-giving will be the foundation of our science. [Carolus Linnæus, Systema Naturæ, 1735]

  4. Standard Taxonomy of Security • Confidentiality: no one is allowed to read, unless they are authorised. • Integrity: no one is allowed to write, unless they are authorised. • Availability: all authorised reads and writes will be performed by the system. • Authorisation: giving someone the authority to do something. • Authentication: being assured of someone’s identity. • Identification: knowing someone’s name or ID#. • Auditing: maintaining (and reviewing) records of security decisions.

  5. A Hierarchy of Security • Static security: the Confidentiality, Integrity, and Availability properties of a system. • Dynamic security: the technical processes which assure static security. • The gold standard: Authentication, Authorisation, Audit. • Defense in depth: Prevention, Detection, Response. • Security governance: the “people processes” which develop and maintain a secure system. • Governors set budgets and delegate their responsibilities for Specification, Implementation, and Assurance.

  6. Full Range of Static Security • Confidentiality, Integrity, and Availability are properties of data objects, allowing us to specify “information security”. • What about computer security? Data + executables. • Unix directories have “rwx” permission bits. • If all executions are authorised, then the system has “X-ity”. • The Chinese proverb GuiJuFangYuanZhiZhiYe suggests a new English word: “guijuity”. • Let’s use a classifier, rather than listing some classes! • Confidentiality, Integrity, and Guijuity are Prohibitions (P−). • Availability is a general Permission (P+), with three subclasses (R, W, X). [Slide diagram: a classifier for static security, placing C, I, and G under the prohibitions P− and R, W, X under the permissions P+.]

  7. Prohibitions and Permissions • Prohibition: disallow an action. • Permission: allow an action. • There are two types of P-secure systems: • In a prohibitive system, all actions are prohibited by default. Permissions are granted in special cases, e.g. to authorised individuals. • In a permissive system, all actions are permitted by default. Prohibitions are special cases, e.g. when an individual attempts to access a secure system. • Prohibitive systems have permissive subsystems. • Permissive systems have prohibitive subsystems.
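
To make the two defaults concrete, here is a minimal Python sketch (the rule sets and function names are hypothetical, for illustration only):

```python
# Prohibitive system: deny by default, permit listed exceptions.
def prohibitive_check(action, permissions):
    return action in permissions

# Permissive system: allow by default, block listed exceptions.
def permissive_check(action, prohibitions):
    return action not in prohibitions

# A military network is typically prohibitive; a public wiki is permissive.
print(prohibitive_check("read:/etc/shadow", {"read:/home/alice"}))   # False
print(permissive_check("edit:article", {"edit:protected-article"}))  # True
```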

  8. Recursive Security • Prohibitions, e.g. “Thou shalt not kill.” • General rule: an action (in some range P−) is prohibited, with exceptions (permissions) E1, E2, E3, ... • Permissions, e.g. a “licence to kill” (James Bond). • General rule: an action in P+ is permitted, with exceptions (prohibitions) E1, E2, E3, ... • Static security is a hierarchy of controls on actions, with exceptions nested inside exceptions. [Slide diagram: a permitted range P+ containing prohibited exceptions E1, E2, E3, where E1 itself contains permitted sub-exceptions E11 and E12.]
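
This nesting is naturally recursive: each rule carries a default verdict plus exception rules, which may have exceptions of their own. A minimal Python sketch (the API and the example rules are mine, not the author’s):

```python
class Rule:
    """A default verdict, overridden by any matching exception rule."""
    def __init__(self, verdict, matches, exceptions=()):
        self.verdict, self.matches, self.exceptions = verdict, matches, exceptions

    def decide(self, action):
        for ex in self.exceptions:
            if ex.matches(action):
                return ex.decide(action)   # recurse into the exception
        return self.verdict

# "Thou shalt not kill", with a permission for self-defence,
# itself subject to a prohibition on excessive force.
self_defence = Rule("permitted", lambda a: a.get("self_defence"),
                    (Rule("prohibited", lambda a: a.get("excessive_force")),))
no_kill = Rule("prohibited", lambda a: a["type"] == "kill", (self_defence,))

print(no_kill.decide({"type": "kill"}))                         # prohibited
print(no_kill.decide({"type": "kill", "self_defence": True}))   # permitted
```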

  9. Is Our Taxonomy Complete? • Prohibitions and permissions are properties of hierarchical systems, such as a judicial system. • Most legal controls (“laws”) are prohibitive: they prohibit certain actions, with some exceptions (permissions). • Contracts are non-hierarchical, agreed between peers, consisting of • Obligations: requirements to act, i.e. prohibitions on future inaction. • Exemptions: exceptions to an obligation, i.e. permissions for future inaction. • Obligations and exemptions are not P-security rules. • Obligations arise occasionally in the law, e.g. a doctor’s “duty of care” or a trustee’s fiduciary responsibility.

  10. Forbiddances and Allowances • Obligations are forbidden inactions; Prohibitions are forbidden actions. • When we take out a loan, we are obligated to repay it. We are forbidden from never repaying. • Exemptions are allowed inactions; Permissions are allowed actions. • In the English legal tradition, a court cannot compel a person to give evidence which would incriminate their spouse (husband or wife). This is an exemption from a general obligation to give evidence. • We have added a new level to our hierarchy! [Slide diagram: a 2×2 classifier (forbiddances vs. allowances, crossed with actions vs. inactions), yielding Prohibitions, Obligations, Permissions, and Exemptions.]

  11. Reviewing our Questions • What is security? • Three layers: static, dynamic, governance. • A taxonomic structure for static security: (forbiddances, allowances) × (actions, inactions). • Four types of static security rules: prohibitions (on reading C, writing I, executing G); permissions (R, W, X); obligations (OR, OW, OX); and exemptions (ER, EW, EX). • Most existing systems are underspecified on permissions, obligations, and exemptions. • What is obscurity? • Is obscurity necessary for security?

  12. Obscurity, Opacity, Steganography, Cryptography • Obscure: difficult to see • Opaque: impossible to see through • Not antonyms, but connotative... • Steganography: surreptitious communication • Axiomatically “obscure”, may be trustworthy. • Goal: adversary is unaware of comms (“stealthy”) • Cryptography: secret communication • Axiomatically “opaque”, may be untrustworthy. • Goal: adversary is unable to interpret comms.

  13. Unifying the Model • Transmitter (Alice) • Receiver (Bob) • Secret Message (M) • Encryption: • Alice sends e(M, k) to Bob on channel C. • Bob computes M ← d(e(M, k), k’) using secret k’. • Charles, the adversary, knows C, e( ), d( ).
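
As a concrete toy instance of this model (a one-time-pad XOR cipher, chosen for brevity; here the decryption key k’ equals the encryption key k):

```python
from secrets import token_bytes

def e(m: bytes, k: bytes) -> bytes:
    """Encrypt: XOR each message byte with a key byte."""
    return bytes(mb ^ kb for mb, kb in zip(m, k))

def d(c: bytes, k: bytes) -> bytes:
    """Decrypt: XOR is self-inverse, so d is the same operation."""
    return e(c, k)

M = b"attack at dawn"
k = token_bytes(len(M))      # the secret Alice and Bob share
ct = e(M, k)                 # what Charles sees on the channel
assert d(ct, k) == M         # Bob recovers M, knowing k
# Charles knows e() and d() but not k, so ct tells him nothing
# (provided k is truly random and never reused).
```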

  14. Steganographic Comms • Alice uses an obscure channel C. • Bob must know “where and when” to look for a message from Alice. • Alice uses an obscure coding e( ). • Bob must know “how to interpret” Alice’s message. • Alice & Bob must be stealthy: • Additional traffic on C must not be obvious. • Interpretation of e( ) must not be obvious. • e(M) must seem “normal” for C.

  15. An Example: Stegoblogging • Alice gains write access to a disused (or new) blog or wiki X. • Alice selects “covertext” from an existing blog or wiki on a similar subject. • Alice writes her “stegomessage”, one bit at a time, by substituting homonyms or misspellings (drawn from a dictionary) for words that are selected at random, with low probability, from the covertext (see the sketch below). • Bob must know (or guess) X; he can find the covertext by googling on the “stegotext”; then he can read the stegomessage. • Bob leverages his prior knowledge of X: the stegomessage should be longer than a URL!
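
A minimal Python sketch of the bit-embedding step (the variant dictionary and function names are hypothetical; a real embedding would also pick carrier words at random with low probability, as the slide says):

```python
VARIANTS = {"colour": ("colour", "color"),     # (bit 0, bit 1) spellings
            "grey":   ("grey", "gray"),
            "centre": ("centre", "center")}
# Reverse map: each spelling reveals the bit it encodes.
BIT_OF = {s: b for pair in VARIANTS.values() for b, s in enumerate(pair)}

def embed(cover_words, bits):
    """Alice: substitute the spelling that encodes the next message bit."""
    bits = iter(bits)
    out = []
    for w in cover_words:
        b = next(bits, None) if w in VARIANTS else None
        out.append(VARIANTS[w][b] if b is not None else w)
    return out

def extract(stego_words):
    """Bob: read off the bits from the spellings at eligible sites."""
    return [BIT_OF[w] for w in stego_words if w in BIT_OF]

cover = "the colour of the grey sky near the centre".split()
assert extract(embed(cover, [1, 0, 1])) == [1, 0, 1]
```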

  16. The Importance of Secrets • Charles has a feasible attack, if he locates the stegotext or can guess a cryptokey. • He needs a very long sequence of cryptotext, if the cipher and key are both “strong”. • It is generally difficult or expensive for Alice and Bob to establish the secret(s) required to set up their channel. Exceptions: • A memory stick can hold many gigabytes (but how can Alice transmit it securely to Bob?) • Alice and Bob can use the Diffie-Hellman algorithm, even if Charles is eavesdropping (but how can Alice be sure she’s talking to Bob?)
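
A minimal Diffie-Hellman sketch, with textbook-sized parameters (p = 23, g = 5; real deployments use groups of 2048 bits or more):

```python
from secrets import randbelow

p, g = 23, 5                  # public parameters; Charles knows them

a = randbelow(p - 2) + 1      # Alice's secret exponent
b = randbelow(p - 2) + 1      # Bob's secret exponent
A = pow(g, a, p)              # sent in the clear; Charles sees A
B = pow(g, b, p)              # sent in the clear; Charles sees B

# Each side combines its own secret with the other's public value.
assert pow(B, a, p) == pow(A, b, p)   # shared secret, never transmitted
```

Note the slide’s caveat: nothing here authenticates Bob, so an active Charles can sit in the middle and run one exchange with each party.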

  17. Evaluating Cryptosecurity • Cryptography is assumed secure in practice, but we can’t measure this security. • Cryptographic methods are not used unless they are trusted. • Axiom 1: the “crack rate” 1/t is very small. • Big targets! Only a few methods are in widespread use. • Axiom 2: if anyone cracks a widely used cipher, we’ll soon know (time parameter t’). • Design implication: we need a backup cipher, and an ability to shift to it quickly (parameter t”). • Axiom 3: trusted ciphers will be created at a rate > 1/t. • Axiom 4: key secrecy is maintained (we need obscurity). • Design implication: any single-key breach and rekeying should have negligible cost. • Then the cost of cryptosecurity is B/t, where B is the cost of a breach that persists for t’ + t” (see the worked example below).
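
A toy instantiation of this cost model (all numbers invented, purely to show the arithmetic):

```python
t = 20.0          # hypothetical: a trusted cipher is cracked once per 20 years
B = 5_000_000     # hypothetical: cost of a breach persisting for t' + t''
print(f"expected cost of cryptosecurity: ${B / t:,.0f} per year")  # $250,000
```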

  18. Evaluating Insecurity • Steganography is assumed insecure in practice. • If Bob knows where and when to look, and how to interpret, why doesn’t Charles also know this? • Bob must be stealthy when listening and interpreting: Charles may learn. • Axiom 1: our stegosystems will be cracked at rate 1/t (Poisson process). • Design implication: we must shift stegosystems at rate > 1/t. • The cost of stegosecurity is B/t, where B is the cost of each breach.

  19. Practicalities • Available stegosystems may have such large 1/t that they’re uneconomic, even for systems with small B. • It may be impossible to purchase insurance to cover B for a system which relies on a highly trusted (“small 1/t”) cipher to attain its moderate B/t. • Implication: don’t rely solely on cryptography (or steganography)!

  20. Defense in Depth • Ideally, security is preventative. • A single preventive layer may be insufficient. • “Defence in depth” through • Additional preventive layer(s); or • Layer(s) that “respond” to a detected breach. • Goals of detect & respond systems • To detect breaches more rapidly (reducing t’) • To respond more appropriately (reducing B)

  21. Security Techniques • Prevention: • Deter attacks on forbiddances using encryption, obfuscation, cryptographic hashes, watermarks, or trustworthy computing. • Deter attacks on allowances using replication (or other resilient algorithmic techniques), or obfuscation. • Detection: • Monitor subjects (user logs). Requires user ID: biometrics, ID tokens, or passwords. • Monitor actions (execution logs, intrusion detectors). Requires code ID: cryptographic hashing, watermarking (see the sketch below). • Monitor objects (object logs). Requires object ID: hashing, watermarking. • Response: • Ask for help: set off an alarm (which may be silent, i.e. steganographic), then wait for an enforcement agent. • Self-help: self-destructive or self-repairing systems. If these responses are obscure, they’re more difficult to attack.
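
A minimal sketch of “code ID” (or object ID) by cryptographic hashing; the protected object and the log are hypothetical:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

deployed_code = b"print('hello')"       # the protected executable/object
known_good = digest(deployed_code)      # recorded in a write-protected log

# Later, before each execution, re-hash and compare:
tampered = deployed_code + b"  # patched by Charles"
assert digest(deployed_code) == known_good   # intact: run it
assert digest(tampered) != known_good        # tampered: raise the alarm
```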

  22. Too Much to Think About! • We can’t discuss all uses of obscurity in security during a single seminar. • Let’s focus on a subset of the forbiddances: the guijuities. • Obscurity is also helpful in assuring exceptions. (Bureaucracies rely heavily on this technique ;-)

  23. Opacity vs Obscurity in CIG • Confidentiality (access control on reads) • Encryption vs. stegocommunication • Integrity (access control on writes) • Cryptographic signatures vs. fragile watermarks • Guijuity (access control on executions) • Homomorphic encryption vs. obfuscation • Opacity is only feasible for very simple computations (mul-adds, FSAs). • In practice, we use obscurity to assure our guijuities.

  24. What is Obfuscation? • Obfuscation is a semantics-preserving transformation of computer code that renders it difficult to analyse – thus impossible to modify safely. • This enforces guijuity on the current platform. • To secure guijuity in cases where the code itself is the protected resource, we need a ‘tether’. • Tethered code uses a platform ID in its guijuity decisions (e.g. licence enforcement).
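
A minimal sketch of a tether (a hypothetical scheme of mine: the vendor binds a licence to a hash of the machine’s MAC address; a shipped product would obfuscate this check, since the vendor secret travels with the code):

```python
import hashlib
import uuid

def platform_id() -> str:
    """A stable ID for this machine (here: a hash of the MAC address)."""
    return hashlib.sha256(uuid.getnode().to_bytes(8, "big")).hexdigest()

def issue_licence(pid: str, vendor_secret: bytes) -> str:
    """Vendor side: bind a licence string to one platform ID."""
    return hashlib.sha256(vendor_secret + pid.encode()).hexdigest()

def licence_ok(licence: str, vendor_secret: bytes) -> bool:
    """Client side: the code runs only where the licence matches."""
    return licence == issue_licence(platform_id(), vendor_secret)

secret = b"hypothetical-vendor-key"
lic = issue_licence(platform_id(), secret)   # bound to this machine
assert licence_ok(lic, secret)               # fails on any other machine
```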

  25. How to Obfuscate Software? • Lexical layer: obscure the names of variables, constants, opcodes, methods, classes, interfaces, etc. (Important for interpreted languages and named interfaces.) • Data obfuscations: • obscure the values of variables (e.g. by encoding several booleans in one int; encoding one int in several floats; encoding values in enumerable graphs; see the sketch below) • obscure data structures (e.g. transforming 2-d arrays into vectors, and vice versa). • Control obfuscations (to be explained later)
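
A minimal sketch of the first data obfuscation named above, packing several booleans into one int (variable names are mine):

```python
def pack(flags):
    """[True, False, True] -> 0b101: no variable is visibly boolean."""
    v = 0
    for i, f in enumerate(flags):
        v |= int(f) << i
    return v

def get(v, i):
    """Read flag i without unpacking the whole int."""
    return bool((v >> i) & 1)

state = pack([True, False, True])
assert get(state, 0) and not get(state, 1) and get(state, 2)
```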

  26. Attacks on Data Obfuscation • An attacker may be able to discover the decoding function, by observing program behaviour immediately prior to output: print( decode( x ) ), where x is an obfuscated variable. • An attacker may be able to discover the encoding function, by observing program behaviour immediately after input. • A sufficiently clever human will eventually de-obfuscate any code. Our goal is to frustrate an attacker who wants to automate the de-obfuscation process. • More complex obfuscations are more difficult to de-obfuscate, but they tend to degrade program efficiency and may enable pattern-matching attacks.

  27. Cryptographic Obfuscations? • Cloakware have patented a “homomorphic obfuscation” method: add, mul, sub, and divide by constant, using the Chinese Remainder Theorem. • W. Zhu, in my group, fixed a bug in their division algorithm. • An ideal data obfuscator would have a cryptographic key that selects one of 2^64 encoding functions. • Fundamental vulnerability: the encoding and decoding functions must be included in the obfuscated software. Otherwise the obfuscated variables cannot be read and written. • “White-box cryptography” is obfuscated code that resists automated analysis, deterring adversaries who would extract a working implementation of the keyed functions, or of the keys themselves.
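
The residue-number idea behind such a scheme, in a minimal Python sketch (my reconstruction for illustration, not Cloakware’s patented algorithm; the moduli are hypothetical):

```python
from math import prod

MODULI = (13, 17, 19)        # pairwise coprime constants
M = prod(MODULI)             # correct for values in [0, M)

def enc(x):
    return tuple(x % m for m in MODULI)   # x is hidden as residues

def add(a, b):
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def mul(a, b):
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def dec(r):
    """Chinese Remainder Theorem reconstruction."""
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)    # Python 3.8+ modular inverse
    return x % M

assert dec(add(enc(21), enc(10))) == 31
assert dec(mul(enc(21), enc(10))) == 210
```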

  28. Practical Data Obfuscation • Barak et al. have proved that “perfect obfuscation” is impossible, but “practical obfuscation” is still possible. • We cannot build a “black box” (as required to implement an encryption) without using obfuscation somewhere – either in our hardware, or in software, or in both. • In practical obfuscation, our goal is to find a cost-effective way of preventing our adversaries from learning our secret for some period of time. • This places a constraint on system design – we must be able to re-establish security after we lose control of our secret. • “Technical security” is insufficient as a response mechanism. • Practical systems rely on legal, moral, and financial controls to mitigate damage and to restore security after a successful attack.

  29. Control Obfuscations • Inline procedures • Outline procedures • Obscure method inheritances (e.g. refactor classes) • Opaque predicates (see the sketch below): • Dead code (which may trigger a tamper-response mechanism if it is executed!) • Variant (duplicate) code • Obscure control flow (“flattened” or irreducible)
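
A minimal sketch of an opaque predicate guarding dead code (the guarded computation and the response are hypothetical):

```python
def trigger_tamper_response():
    raise SystemExit("integrity check failed")   # the hidden response

def opaquely_guarded(x: int) -> int:
    # x*(x+1) is a product of consecutive integers, hence always even:
    # the predicate is opaquely true, but a de-obfuscator that cannot
    # prove this must conservatively keep the "dead" branch alive.
    if (x * (x + 1)) % 2 == 0:
        return x + 42              # the real computation
    trigger_tamper_response()      # dead code, never executed...
    return -1                      # ...unless the binary was patched

assert opaquely_guarded(7) == 49
```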

  30. History of Software Obfuscation • “Hand-crafted” obfuscations: IOCCC (Int’l Obfuscated C Code Contest, 1984–), plus a few earlier examples. • InstallShield (1987–present). • Automated lexical obfuscations since 1996: Crema, HoseMocha, … • Automated control obfuscations since 1996: Monden, … • Opaque predicates since 1997: Collberg et al., … • Commercial vendors since 1997: Cloakware, Microsoft (in their compiler). • Commercial users since 1997: Adobe DocBox, Skype, … • Obfuscation is still a small field, with just a handful of companies selling obfuscation products and services. There are only a few non-trivial results in conference or journal articles, and a few dozen patents.

  31. Summary / Review • A taxonomy of static security: (forbiddance, allowance) × (action, inaction) = (prohibition, permission, obligation, exemption). • Some uses of opacity and obscurity in the design of secure systems. • An argument that obscurity is necessary, in practice, for secure systems.

  32. The Future? • What if our primary design goal were … • Transparency (and translucency)? • Our systems would assure integrity. • We’d know what happened, and could respond appropriately. • Predictability (and guessability)? • Our systems would assure availability. • We could hold each other accountable for our actions – fewer excuses (“the dog ate it”, “the system crashed”). • Opacity and obscurity are preventative, fearful. • Would it be brave, or would it be foolish, to design forward-looking systems by relying on transparency or predictability, instead of opacity?
