

Security Through Obscurity

Clark Thomborson
Version of 7 December 2011, for Mark Stamp’s CS266 at SJSU



Questions to be (Partially) Answered

  • What is security?

  • What is obscurity?

  • Is obscurity necessary for security?

  • How can we obscure a computation or a communication?


What is Security? (A Taxonomic Approach)

  • The first step in wisdom is to know the things themselves;

    • this notion consists in having a true idea of the objects;

  • objects are distinguished and known by

    • classifying them methodically and

    • giving them appropriate names.

  • Therefore, classification and name-giving will be the foundation of our science.

    [Carolus Linnæus, Systema Naturæ, 1735]


Standard Taxonomy of Security

  • Confidentiality: no one is allowed to read, unless they are authorised.

  • Integrity: no one is allowed to write, unless they are authorised.

  • Availability: all authorised reads and writes will be performed by the system.

  • Authorisation: giving someone the authority to do something.

  • Authentication: being assured of someone’s identity.

  • Identification: knowing someone’s name or ID#.

  • Auditing: maintaining (and reviewing) records of security decisions.


A Hierarchy of Security

  • Static security: the Confidentiality, Integrity, and Availability properties of a system.

  • Dynamic security: the technical processes which assure static security.

    • The gold standard: Authentication, Authorisation, Audit.

    • Defense in depth: Prevention, Detection, Response.

  • Security governance: the “people processes” which develop and maintain a secure system.

    • Governors set budgets and delegate their responsibilities for Specification, Implementation, and Assurance.


A Full Range of Static Security

  • Confidentiality, Integrity, and Availability are properties of data objects, allowing us to specify “information security”.

  • What about computer security? Data + executables.

    • Unix directories have “rwx” permission bits (see the sketch below).

    • If all executions are authorised, then the system has “X-ity”.

    • GuiJuFangYuanZhiZhiYe (“compasses and set-squares are the ultimate in roundness and squareness”, Mencius) suggests a new English word: “guijuity”.

  • Let’s use a classifier, rather than listing some classes!

    • Confidentiality, Integrity, and Guijuity are Prohibitions (P+).

    • Availability is a general Permission (P−), with 3 subclasses.

[Slide diagram: Security divides into Prohibitions (P+), with subclasses C (read), I (write), G (execute), and Permissions (P−), with subclasses R, W, X.]
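The r/w/x classifier above can be made concrete. A minimal Python sketch, assuming a Unix system (the file path and the C/I/G commentary are illustrative, not part of the slides):

    import os
    import stat

    # Map Unix permission bits onto the slide's classes:
    # r guards Confidentiality, w guards Integrity, x guards Guijuity.
    mode = os.stat("/etc/passwd").st_mode            # any Unix file will do
    print("world-readable   (C waived):", bool(mode & stat.S_IROTH))
    print("world-writable   (I waived):", bool(mode & stat.S_IWOTH))
    print("world-executable (G waived):", bool(mode & stat.S_IXOTH))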



Prohibitions and Permissions

  • Prohibition: disallow an action.

  • Permission: allow an action.

  • There are two types of P-secure systems:

    • In a prohibitive system, all actions are prohibited by default. Permissions are granted in special cases, e.g. to authorised individuals.

    • In a permissive system, all actions are permitted by default. Prohibitions are special cases, e.g. when an individual attempts to access a secure system.

  • Prohibitive systems have permissive subsystems.

  • Permissive systems have prohibitive subsystems.
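The two defaults are easy to state executably. A toy Python sketch (function names are illustrative):

    # Prohibitive system: every action is forbidden unless explicitly permitted.
    def prohibitive_allows(action, permitted_exceptions):
        return action in permitted_exceptions

    # Permissive system: every action is allowed unless explicitly prohibited.
    def permissive_allows(action, prohibited_exceptions):
        return action not in prohibited_exceptions

    assert prohibitive_allows("read", {"read"})         # a granted special case
    assert not permissive_allows("write", {"write"})    # a prohibited special case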


Recursive Security

  • Prohibitions, e.g. “Thou shalt not kill.”

    • General rule: An action (in some range P−) is prohibited, with exceptions (permissions) E1, E2, E3, ...

  • Permissions, e.g. a “licence to kill” (James Bond).

    • General rule: An action in P+ is permitted, with exceptions (prohibitions) E1, E2, E3, ...

  • Static security is a hierarchy of controls on actions:

[Slide diagram: a permitted range P+ containing prohibited exceptions E1, E2, E3; within E1 sit permitted sub-exceptions E11 and E12.]
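One way to make this recursion concrete is a rule whose exceptions are themselves rules; the deepest matching exception decides. A Python sketch (class and field names are assumptions, not from the slides):

    class Rule:
        # A default decision plus exception subrules; the deepest match wins.
        def __init__(self, allow, matches=lambda a: True, exceptions=()):
            self.allow, self.matches, self.exceptions = allow, matches, exceptions

        def decide(self, action):
            for ex in self.exceptions:
                if ex.matches(action):
                    return ex.decide(action)
            return self.allow

    # Killing is prohibited; a licence (E1) permits it; a revoked licence (E11)
    # prohibits it again.
    e11 = Rule(False, lambda a: a.get("licence_revoked", False))
    e1  = Rule(True,  lambda a: a.get("licensed", False), exceptions=(e11,))
    law = Rule(False, exceptions=(e1,))
    assert law.decide({"licensed": True}) is True
    assert law.decide({"licensed": True, "licence_revoked": True}) is False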


Is Our Taxonomy Complete?

  • Prohibitions and permissions are properties of hierarchical systems, such as a judicial system.

    • Most legal controls (“laws”) are prohibitive: they prohibit certain actions, with some exceptions (permissions).

  • Contracts are non-hierarchical, agreed between peers, consisting of

    • Obligations: requirements to act, i.e. prohibitions on future inaction.

    • Exemptions: exceptions to an obligation, i.e. permissions for future inaction.

  • Obligations and exemptions are not P-security rules.

    • Obligations arise occasionally in the law, e.g. a doctor’s “duty of care” or a trustee’s fiduciary responsibility.

Forbiddances and Allowances

[Slide diagrams: static security S classified two ways: into Forbiddances (Prohibitions, Obligations) versus Allowances (Permissions, Exemptions), and into actions (Prohibitions, Permissions) versus inactions (Obligations, Exemptions).]

  • Obligations are forbidden inactions; Prohibitions are forbidden actions.

    • When we take out a loan, we are obligated to repay it. We are forbidden from never repaying.

  • Exemptions are allowed inactions; Permissions are allowed actions.

    • In the English legal tradition, a court cannot compel a person to give evidence which would incriminate their spouse. This is an exemption from a general obligation to give evidence.

  • We have added a new level to our hierarchy!


Reviewing our Questions

  • What is security?

    • Three layers: static, dynamic, governance.

    • A taxonomic structure for static security: (forbiddances, allowances) x (actions, inactions).

    • Four types of static security rules: prohibitions (on reading C, writing I, executing G); permissions (R, W, X); obligations (O_R, O_W, O_X); and exemptions (E_R, E_W, E_X).

    • Most existing systems are underspecified on permissions, obligations, and exemptions.

  • What is obscurity?

  • Is obscurity necessary for security?


Obscurity, Opacity, Steganography, Cryptography

  • Obscure: difficult to see

  • Opaque: impossible to see through

    • Not antonyms, but connotative...

  • Steganography: surreptitious communication

    • Axiomatically “obscure”, may be trustworthy.

    • Goal: adversary is unaware of comms (“stealthy”)

  • Cryptography: secret communication

    • Axiomatically “opaque”, may be untrustworthy.

    • Goal: adversary is unable to interpret comms.


Unifying the Model

  • Transmitter (Alice)

  • Receiver (Bob)

  • Secret Message (M)

    Encryption:

  • Alice sends e(M, k) to Bob on channel C.

  • Bob computes M ← d(e(M, k), k’) using secret k’.

  • Charles, the adversary, knows C, e( ), d( ).
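A toy instantiation of this model (assuming a one-time-pad-style XOR cipher, where k’ = k; the slides leave e( ) and d( ) abstract):

    import os

    def e(m: bytes, k: bytes) -> bytes:        # toy encryption: XOR with the key
        return bytes(x ^ y for x, y in zip(m, k))

    d = e                                      # XOR is self-inverse, so d = e and k' = k

    k = os.urandom(16)                         # the secret Alice and Bob share
    c = e(b"attack at dawn!!", k)              # Alice sends c on channel C
    assert d(c, k) == b"attack at dawn!!"      # Bob recovers M
    # Charles knows C, e and d, and sees c; without k he learns nothing here.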


Steganographic Comms

  • Alice uses an obscure channel C.

    • Bob must know “where and when” to look for a message from Alice.

  • Alice uses an obscure coding e( ).

    • Bob must know “how to interpret” Alice’s message.

  • Alice & Bob must be stealthy:

    • Additional traffic on C must not be obvious.

    • Interpretation of e( ) must not be obvious.

    • e(M) must seem “normal” for C.


An Example: Stegoblogging

  • Alice gains write access to a disused (or new) blog or wiki X.

  • Alice selects “covertext” from an existing blog or wiki on a similar subject.

  • Alice writes her “stegomessage” one bit at a time: words are selected at random (with low probability) from the covertext, and each selected word either keeps its spelling or is replaced by a homonym or misspelling drawn from a dictionary.

  • Bob must know (or guess) X; he can find the covertext by googling on the “stegotext”; then he can read the stegomessage.

  • Bob leverages his prior knowledge of X: since X is already a URL’s worth of secret, the stegomessage should be longer than a URL!
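A rough sketch of the embedding and extraction steps (Python; the HOMONYMS table and the shared random seed are illustrative assumptions layered on the slide’s scheme):

    import random

    HOMONYMS = {"their": "there", "two": "too", "principal": "principle"}

    def embed(covertext, bits, p=0.1, seed=42):
        # At each randomly selected substitutable word, encode one bit:
        # 1 = swap in the homonym/misspelling, 0 = keep the original spelling.
        rng, out, i = random.Random(seed), [], 0   # Alice and Bob share the seed
        for w in covertext.split():
            if i < len(bits) and w in HOMONYMS and rng.random() < p:
                out.append(HOMONYMS[w] if bits[i] == "1" else w)
                i += 1
            else:
                out.append(w)
        return " ".join(out)

    def extract(stegotext, covertext, nbits, p=0.1, seed=42):
        # Bob re-runs the same selection and reads 1 wherever the texts differ.
        rng, bits = random.Random(seed), []
        for s, c in zip(stegotext.split(), covertext.split()):
            if len(bits) < nbits and c in HOMONYMS and rng.random() < p:
                bits.append("1" if s != c else "0")
        return "".join(bits)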


The Importance of Secrets

  • Charles has a feasible attack, if he locates the stegotext or can guess a cryptokey.

    • He needs a very long sequence of cryptotext, if the cipher and key are both “strong”.

  • It is generally difficult or expensive for Alice and Bob to establish the secret(s) required to set up their channel. Exceptions:

    • A memory stick can hold many gigabytes (but how can Alice transmit it securely to Bob?)

    • Alice and Bob can use the Diffie-Hellman algorithm, even if Charles is eavesdropping (but how can Alice be sure she’s talking to Bob?)
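For reference, a toy unauthenticated Diffie-Hellman run (parameters are illustrative; real deployments use standardized groups or elliptic curves), which also shows why the “is it really Bob?” caveat matters:

    import secrets

    p, g = 2**127 - 1, 3                   # public toy parameters (p is a Mersenne prime)
    a = secrets.randbelow(p - 2) + 1       # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1       # Bob's secret exponent
    A, B = pow(g, a, p), pow(g, b, p)      # exchanged in the clear; Charles sees both
    assert pow(B, a, p) == pow(A, b, p)    # the same shared secret at both ends
    # Nothing here proves who sent B, so a man-in-the-middle can run
    # one exchange with Alice and another with Bob.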


Evaluating Cryptosecurity

  • Cryptography is assumed secure in practice, but we can’t measure this security.

    • Cryptographic methods are not used, unless they are trusted. Axiom 1: the “crack rate” 1/t is very small.

    • Big targets! Only a few methods in widespread use.

    • Axiom 2: if anyone cracks a widely used cipher, we’ll soon know (time parameter t’).

    • Design implication: we need a backup cipher, and an ability to shift to it quickly (parameter t”)

    • Axiom 3: trusted ciphers will be created at rate > 1/t.

    • Axiom 4: key secrecy is maintained (we need obscurity).

    • Design implication: any single-key breach and rekeying should have negligible cost.

    • Then: the cost of cryptosecurity is B/t, where B is the cost of a breach that persists for t’+t”.
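A worked illustration of the cost formula, with assumed numbers (not from the slides):

    t = 20.0        # assumed: a trusted cipher is cracked about once every 20 years
    B = 5_000_000   # assumed: dollar cost of a breach persisting for t' + t''
    print(B / t)    # expected cost of cryptosecurity: 250000.0 dollars per year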


Evaluating Insecurity

  • Steganography is assumed insecure in practice.

    • If Bob knows where and when to look, and how to interpret, why doesn’t Charles also know this?

    • Bob must be stealthy when listening and interpreting: Charles may learn.

    • Axiom 1: our stegosystems will be cracked at rate 1/t (Poisson process).

    • Design implication: we must shift stegosystems at rate > 1/t.

    • The cost of stegosecurity is B/t, where B is the cost of each breach.
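A quick Monte-Carlo check of this model (parameters assumed), drawing breaches as a Poisson process with exponential interarrival times:

    import random

    t, B, horizon = 5.0, 1e6, 100_000.0     # mean 5 years between cracks; $1M per breach
    clock, breaches = 0.0, 0
    while True:
        clock += random.expovariate(1 / t)  # time until the current stegosystem is cracked
        if clock > horizon:
            break
        breaches += 1                       # breach; shift to a fresh stegosystem
    print(breaches * B / horizon)           # empirically close to B/t = 200,000 per year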


Practicalities

  • Available stegosystems may have such large 1/t that they’re uneconomic, even for systems with small B.

  • It may be impossible to purchase insurance to cover B for a system which relies on a highly trusted (“small 1/t”) cipher to attain its moderate B/t.

  • Implication: don’t rely solely on cryptography (or steganography)!


Defense in Depth

  • Ideally, security is preventative.

    • A single preventive layer may be insufficient.

  • “Defence in depth” through

    • Additional preventive layer(s); or

    • Layer(s) that “respond” to a detected breach.

  • Goals of detect & respond systems

    • To detect breaches more rapidly (reducing t’)

    • To respond more appropriately (reducing B)


Security Techniques

  • Prevention:

    • Deter attacks on forbiddances using encryption, obfuscation, cryptographic hashes, watermarks, or trustworthy computing.

    • Deter attacks on allowances using replication (or other resilient algorithmic techniques), obfuscation.

  • Detection:

    • Monitor subjects (user logs). Requires user ID: biometrics, ID tokens, or passwords.

    • Monitor actions (execution logs, intrusion detectors). Requires code ID: cryptographic hashing, watermarking.

    • Monitor objects (object logs). Requires object ID: hashing, watermarking.

  • Response:

    • Ask for help: Set off an alarm (which may be silent – steganographic), then wait for an enforcement agent.

    • Self-help: Self-destructive or self-repairing systems. If these responses are obscure, they’re more difficult to attack.
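As one concrete instance of the “code ID” idea above, a hedged sketch of hash-based tamper detection (KNOWN_GOOD is a stand-in for a digest recorded at install time):

    import hashlib

    def code_id(path):
        # Identify code by the SHA-256 digest of its bytes.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    KNOWN_GOOD = "..."                     # digest recorded when the code was deployed
    if code_id(__file__) != KNOWN_GOOD:
        print("code ID mismatch: possible tampering")  # respond: alarm, await enforcement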


Too Much to Think About!

  • We can’t discuss all uses of obscurity in security during a single seminar.

  • Let’s focus on a subset of the forbiddances: the guijuities.

    • Obscurity is also helpful in assuring exceptions. (Bureaucracies rely heavily on this technique ;-)


Opacity vs Obscurity in CIG

  • Confidentiality (access control on reads)

    • Encryption vs. stegocommunication

  • Integrity (access control on writes)

    • Cryptographic signatures vs. fragile watermarks

  • Guijuity (access control on executions)

    • Homomorphic encryption vs. obfuscation

    • Opacity is only feasible for very simple computations (mul-adds, FSAs).

    • In practice, we use obscurity to assure our guijuities.


What is Obfuscation?

  • Obfuscation is a semantics-preserving transformation of computer code that renders it difficult to analyse – thus impossible to modify safely.

    • This enforces guijuity on the current platform.

  • To secure guijuity in cases where the code itself is the protected resource, we need a ‘tether’.

    • Tethered code uses a platform ID in its guijuity decisions (e.g. license-enforcement).
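A sketch of a tether (illustrative only: uuid.getnode() stands in for a platform ID; real licence enforcement uses stronger, harder-to-spoof identifiers):

    import uuid

    LICENSED_PLATFORM = 0x1234567890AB     # assumed: platform ID bound at licence time

    def run_protected_feature():
        if uuid.getnode() != LICENSED_PLATFORM:  # the guijuity decision uses the platform ID
            raise PermissionError("this copy is not licensed for this platform")
        print("licensed feature running")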


How to Obfuscate Software?

  • Lexical layer: obscure the names of variables, constants, opcodes, methods, classes, interfaces, etc. (Important for interpreted languages and named interfaces.)

  • Data obfuscations:

    • obscure the values of variables (e.g. by encoding several booleans in one int, as sketched after this list; encoding one int in several floats; encoding values in enumerable graphs)

    • obscure data structures (e.g. transforming 2-d arrays into vectors, and vice versa).

  • Control obfuscations (to be explained later)
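A minimal sketch of the first data obfuscation above, packing several booleans into one int (function names are illustrative):

    def pack(flags):
        # Encode a list of booleans as the bits of a single int.
        v = 0
        for i, f in enumerate(flags):
            if f:
                v |= 1 << i
        return v

    def unpack(v, n):
        # Decode n booleans back out of the int.
        return [bool((v >> i) & 1) for i in range(n)]

    assert unpack(pack([True, False, True]), 3) == [True, False, True]

Note that pack and unpack must ship with the program; that is the fundamental vulnerability discussed two slides later.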


Attacks on Data Obfuscation

  • An attacker may be able to discover the decoding function, by observing program behaviour immediately prior to output: print( decode( x ) ), where x is an obfuscated variable.

  • An attacker may be able to discover the encoding function, by observing program behaviour immediately after input.

  • A sufficiently clever human will eventually de-obfuscate any code. Our goal is to frustrate an attacker who wants to automate the de-obfuscation process.

  • More complex obfuscations are more difficult to de-obfuscate, but they tend to degrade program efficiency and may enable pattern-matching attacks.


Cryptographic Obfuscations?

  • Cloakware have patented a “homomorphic obfuscation” method: add, mul, sub, and divide by constant, using the Chinese Remainder Theorem.

    • W Zhu, in my group, fixed a bug in their division algorithm.

  • An ideal data obfuscator would have a cryptographic key that selects one of 2^64 encoding functions.

  • Fundamental vulnerability: The encoding and decoding functions must be included in the obfuscated software. Otherwise the obfuscated variables cannot be read and written.

    • “White-box cryptography” is an obfuscated code that resists automated analysis, deterring adversaries who would extract a working implementation of the keyed functions or of the keys themselves.
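A toy keyed encoding in that spirit: an affine map mod 2^32, where the pair (a, b) plays the role of the key (an illustration, not Cloakware’s patented method):

    M = 2**32

    def encode(x, a, b):                      # a must be odd so it is invertible mod M
        return (a * x + b) % M

    def decode(y, a, b):
        return ((y - b) * pow(a, -1, M)) % M  # modular inverse (Python 3.8+)

    a, b = 0x9E3779B1, 0xDEADBEEF             # the "key": one of ~2^64 (a, b) choices
    assert decode(encode(42, a, b), a, b) == 42
    # The catch, as noted above: encode and decode must ship inside the program.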


Practical Data Obfuscation

  • Barak et al. have proved that “perfect obfuscation” is impossible, but “practical obfuscation” is still possible.

  • We cannot build a “black box” (as required to implement an encryption) without using obfuscation somewhere – either in our hardware, or in software, or in both.

  • In practical obfuscation, our goal is to find a cost-effective way of preventing our adversaries from learning our secret for some period of time.

    • This places a constraint on system design – we must be able to re-establish security after we lose control of our secret.

    • “Technical security” is insufficient as a response mechanism.

    • Practical systems rely on legal, moral, and financial controls to mitigate damage and to restore security after a successful attack.


Control Obfuscations

  • Inline procedures

  • Outline procedures

  • Obscure method inheritances (e.g. refactor classes)

  • Opaque predicates:

    • Dead code (which may trigger a tamper-response mechanism if it is executed!)

    • Variant (duplicate) code

    • Obscure control flow (“flattened” or irreducible)
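A small sketch of an opaque predicate guarding dead code (Python; tamper_response is an illustrative name):

    def tamper_response():
        raise RuntimeError("tamper response triggered")

    def obfuscated_inc(x):
        # Opaquely true predicate: x*x + x = x(x+1) is always even, but a naive
        # analyser cannot tell, so the dead branch below looks live.
        if (x * x + x) % 2 == 0:
            return x + 1              # the real computation
        else:
            tamper_response()         # dead code; reached only under tampering

    assert obfuscated_inc(7) == 8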


History of Software Obfuscation

  • “Hand-crafted” obfuscations: IOCCC (Int’l Obfuscated C Code Contest, 1984 - ); a few earlier examples.

  • InstallShield (1987 - present).

  • Automated lexical obfuscations since 1996: Crema, HoseMocha, …

  • Automated control obfuscations since 1996: Monden, …

  • Opaque predicates since 1997: Collberg et al., …

  • Commercial vendors since 1997: Cloakware, Microsoft (in their compiler).

  • Commercial users since 1997: Adobe DocBox, Skype, …

  • Obfuscation is still a small field, with just a handful of companies selling obfuscation products and services. There are only a few non-trivial results in conference or journal articles, and a few dozen patents.


Summary / Review

  • A taxonomy of static security:

    (forbiddance, allowance) x (action, inaction) =

    (prohibition, permission, obligation, exemption).

  • Some uses of opacity and obscurity, in the design of secure systems.

  • An argument that obscurity is necessary, in practice, for secure systems.


The Future?

  • What if our primary design goal were …

    • Transparency (and translucency)?

      • Our systems would assure integrity.

      • We’d know what happened, and could respond appropriately.

    • Predictability (and guessability)?

      • Our systems would assure availability.

      • We could hold each other accountable for our actions – fewer excuses (“the dog ate it”, “the system crashed”).

  • Opacity and obscurity are preventative, fearful.

    • Would it be brave, or would it be foolish, to design forward-looking systems by relying on transparency or predictability, instead of opacity?
