
Program Security



  1. Program Security

  2. Programming Issues of Security • Types and effects of flaws and malicious code • Techniques to help control program threats

  3. Malicious Code • Definition: unexpected or undesired effects in programs or program parts, caused by an agent intent on damage [Pfleeger97] • Two main categories: • Programs that compromise or change data • Programs that affect computer service

  4. Types of Malicious Code [Pfleeger97]

  5. Homes for Viruses • Candidate homes: • Boot sector • Memory-resident code: • Routines that interpret keystrokes • Error handlers • Application programs: • Macro features • Libraries: • Used by many programs • Objectives (qualities of a good viral home): • Hard to detect • Hard to destroy or deactivate • Spreads infection widely • Can reinfect • Easy to create • Machine (and OS) independent

  6. Cause and Effects [Pfleeger97]

  7. How to Prevent Viruses • Use only commercial SW from reliable, well-established vendors • Test all new SW on an isolated computer: • Look for unexpected behavior • Scan for viruses • Make a bootable diskette (store it safely): • Modify the startup files on the diskette to use system files from the diskette (drivers, memory-management SW, etc.) • Create and retain backup copies of executable system files: • Enables a clean install after a virus infection • Use virus scanners regularly: • Multiple scanners are better than just one • Update them regularly

  8. Controls against Threats • Peer reviews • Good SE development practices: • Modularity: decompose the task into subtasks • Encapsulation: minimize coupling between modules • Information hiding: modules have limited effects on other modules • Independent testing • Configuration management: • Changes are monitored carefully (protects against unintentional threats) • Once a reviewed program is accepted, programmers cannot covertly make changes, such as trapdoors (protects against malicious threats) • Proofs of program correctness • Process improvement: • CMM • Standards (e.g., DOD-STD-2167A, ISO 9000)

  9. Trusted Operating Systems

  10. We will cover • Memory Protection • File protection • General object access control • User authentication

  11. Protected Objects • Multiprogramming required protection for: • Memory • Sharable I/O devices (e.g. disks) • Serially reusable devices (e.g., printers, tape drives) • Sharable programs and subprocedures • Sharable data

  12. Security Methods • Physical separation: • Processes use different physical objects • (e.g., separate printers) • Temporal separation: • Processes with different security requirements execute at different times • Logical separation: • Users operate within their own “domain” • The OS constrains program access to permitted parts • Cryptographic separation: • Processes conceal data and computations from outsiders

  13. Memory Protection • Fences: • Protected sections of memory • Designed for single-user systems • A fence register facilitates relocation (logical vs. physical addresses) • Base/bounds registers: • Base register: a variable fence register; the lower bound for addresses • Bounds register: the upper address limit • Supports multiple users: • Each user has its own values for the base and bounds registers • Context switch: the OS reloads the base/bounds registers • Can have 2 pairs of base/bounds registers: • One for executable code • One for data (see the sketch below)
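
A minimal sketch of the base/bounds check; the register values are hypothetical, and real hardware performs this comparison on every memory reference as part of relocation:

```python
def translate(offset: int, base: int, bounds: int) -> int:
    """Relocate a program-relative address and enforce base/bounds."""
    physical = base + offset
    if not (base <= physical < bounds):          # outside this process's region
        raise MemoryError(f"address fault at offset {offset:#x}")
    return physical

# Hypothetical per-process register values, reloaded on a context switch.
proc = {"base": 0x4000, "bounds": 0x8000}
assert translate(0x0100, **proc) == 0x4100
# translate(0x5000, **proc) would raise MemoryError: past the bounds register
```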

  14. Memory Protection (cont’d) • Tagged architecture • Some number of extra bits per word indicate allowed access (read-only, execute-only, write) • Adjacent locations can have different accesses • Separates different classes of data (numeric, character, address/pointer) • Only used in a few systems: • Burroughs B6500/7500 uses 3 tag bits to separate data words, descriptors (pointers), and control words (stack pointers and addressing control words) • IBM System/38: tag bits for integrity and access • Challenge: code compatibility • Legacy OS systems • Cheaper memory will make this more feasible (a toy simulation follows)
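
A toy simulation of a tagged architecture, assuming a hypothetical R/W/X tag per word in the spirit of the B6500/7500; on real hardware the tag check happens on every access:

```python
from enum import Flag, auto

class Tag(Flag):
    R = auto()   # readable
    W = auto()   # writable
    X = auto()   # executable

# Hypothetical tagged memory: every word carries its own access tag,
# so adjacent locations can have different rights.
memory = [
    (Tag.R,         0x1234),   # read-only data word
    (Tag.R | Tag.W, 0x0000),   # read-write data word
    (Tag.X,         0x9090),   # execute-only code word
]

def load(addr: int) -> int:
    tag, value = memory[addr]
    if Tag.R not in tag:                 # hardware would test the tag bits
        raise PermissionError(f"word {addr} is not readable")
    return value

assert load(0) == 0x1234
# load(2) would raise PermissionError: that word is execute-only
```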

  15. Memory Protection: Segmentation • [Figure: a logical program divided into segments a–i, mapped into memory through a segment translation table] • Divide the program into logical pieces • (data, the code for one procedure) • Solves the need for an unbounded number of base/bounds registers with different access rights • Each segment has a unique name: • A code or data item is addressed as <name, offset> • For efficiency: • Each process has a segment address table • Benefits: • Every address reference is checked for protection • Different classes of data get different types of protection • Weaknesses: • Efficiency of encoding segment names • Fragmentation of memory (a translation sketch follows)
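
A sketch of <name, offset> translation through a hypothetical per-process segment table; the segment names, sizes, and rights strings are illustrative only:

```python
# Hypothetical per-process segment table: name -> (base, length, rights).
segment_table = {
    "main_code": (0x1000, 0x0400, "rx"),
    "heap_data": (0x5000, 0x1000, "rw"),
}

def resolve(name: str, offset: int, want: str) -> int:
    """Translate a <name, offset> address, checking bounds and access
    rights on every reference."""
    base, length, rights = segment_table[name]   # fault if no such segment
    if offset >= length:
        raise MemoryError(f"offset {offset:#x} past end of segment {name}")
    if want not in rights:
        raise PermissionError(f"{want!r} access to {name} denied")
    return base + offset

assert resolve("heap_data", 0x10, "w") == 0x5010
# resolve("main_code", 0x10, "w") would raise PermissionError: code is rx
```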

  16. Memory Protection: Paging • Page: the program is divided into equal-sized pieces • Page size: a power of 2 between 512 and 4096 bytes • Page frames: memory is divided into units of the same size • Benefits: • No fragmentation problem, since all pages are the same size • No problem of addressing beyond the end of a page • Weaknesses: • A page has no logical unity, so the programmer has to keep track of logical segments separately • Loses the ability to have fine-grained protection mechanisms • Hybrid approach combines the advantages of both: segments are decomposed into pages • IBM System/370 • Multics OS (a translation sketch follows)
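
A sketch of page-based translation; because the page size is a power of 2, the page number and offset fall out of the address with bit operations. The page-table contents are hypothetical:

```python
PAGE_SIZE = 4096                              # a power of 2
OFFSET_BITS = PAGE_SIZE.bit_length() - 1      # 12 offset bits for 4 KB pages

page_table = {0: 7, 1: 3, 2: 11}              # hypothetical page -> frame map

def translate(vaddr: int) -> int:
    page = vaddr >> OFFSET_BITS               # high bits select the page
    offset = vaddr & (PAGE_SIZE - 1)          # low bits can never overflow a page
    frame = page_table[page]                  # a missing entry = page fault
    return (frame << OFFSET_BITS) | offset

assert translate(0x1ABC) == (3 << OFFSET_BITS) | 0xABC   # page 1 -> frame 3
```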

  17. File Protection • Based on a three- or four-level format (user-group-all) • Limits the granularity of access control to a few levels • Or organized on a per-object or per-user basis • More flexible, but can be inefficient to implement • Access control data structures: • Access control list: • Contains all subjects who have access to the object, and their access rights • Can have default permissions for different classes of users (user/group/world) • Access control matrix: • Each row represents a subject • Each column represents an object • Each entry contains the set of access rights for <subject, object> (sketch below)
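
A sketch of an access control list lookup, treating each ACL as one sparse column of the access control matrix; the object and subject names are made up:

```python
# Hypothetical ACLs: one per object; "*" holds default permissions.
acl = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
    "motd.txt":   {"*": {"read"}},
}

def allowed(subject: str, obj: str, right: str) -> bool:
    entries = acl.get(obj, {})
    # A subject's rights, falling back to the object's defaults if any.
    rights = entries.get(subject, entries.get("*", set()))
    return right in rights

assert allowed("bob", "payroll.db", "read")
assert not allowed("bob", "payroll.db", "write")
assert allowed("carol", "motd.txt", "read")      # default permission applies
```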

  18. File Protection (cont’d) • Capability: an unforgeable token/ticket giving the holder certain rights to an object • Basis for Kerberos (an authentication system for distributed systems) • One approach: the OS holds all tickets for users • It returns a pointer to an OS data structure that also links back to the user • A capability is created only by a specific request from the user to the OS • Each capability identifies the allowable accesses • A process executes in a domain/local name space: • The collection of objects the process has access to • During execution, if new objects are needed: • The OS checks whether the user has access • If yes, it generates a capability/ticket for that object • Capabilities are stored in memory inaccessible to the user: • Can use base/bounds registers or a tagged architecture (sketch below)
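
A sketch of OS-held capabilities under the approach described above: the user only ever holds an opaque handle, while the rights live in an OS-private table (standing in for memory the user cannot address). All names here are hypothetical:

```python
import secrets

# OS-private capability store; user code never sees the entries directly.
_capabilities: dict[int, tuple[str, frozenset]] = {}

def grant(obj: str, rights: set) -> int:
    """Create a capability on specific request (access check omitted)."""
    handle = secrets.randbits(64)        # unguessable, hence unforgeable
    _capabilities[handle] = (obj, frozenset(rights))
    return handle

def use(handle: int, obj: str, right: str) -> None:
    cap = _capabilities.get(handle)
    if cap is None or cap[0] != obj or right not in cap[1]:
        raise PermissionError("invalid capability")

ticket = grant("printer", {"append"})
use(ticket, "printer", "append")         # allowed access
# use(ticket, "printer", "read") would raise PermissionError
```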

  19. User Authentication • Serious issue with single, stand-alone machines • More serious with multiple (unfamiliar) users • Traditional authentication: password • Plaintext password file very vulnerable • Heavily protected or encrypted • Need to establish administrative procedures to make users’ passwords sufficiently secure • May need more security protocols to perform mutual authentication in an untrusted environment

  20. Attacks on Passwords • Try all possible passwords: • 8 chars from a–z: 26^1 + 26^2 + … + 26^8 ≈ 26^9 ≈ 5 × 10^12 • At 1 password/millisecond → about 150 years; at 1 password/microsecond → about 2 months • Try many probable passwords: • Few passwords are long, uncommon, or hard to spell/pronounce, so try the likely ones first • Passwords of up to 3 chars: 18,278 candidates → 18.278 seconds at 1/ms • Up to 4 and 5 chars: only 8 minutes and 3.5 hours, respectively • Try passwords likely for a given user: • Beer, birthdates, names, places • Search for a system file containing passwords • Ask the user [Pfleeger97] (the arithmetic is reproduced below)
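
The arithmetic above, reproduced as a quick check. Note the exact sum for lengths 1–8 is about 2 × 10^11; the ≈5 × 10^12 figure is the rounder upper bound 26^9, which is what yields the 150-year and 2-month estimates:

```python
# Exhaustive-search estimates for a lowercase a-z alphabet.
exact = sum(26 ** n for n in range(1, 9))       # lengths 1..8: ~2.2e11
bound = 26 ** 9                                 # round figure above: ~5.4e12

print(f"{bound / 1_000 / 86_400 / 365:.0f} years at 1 guess/ms")       # ~172
print(f"{bound / 1_000_000 / 86_400 / 30:.1f} months at 1 guess/us")   # ~2.1
print(sum(26 ** n for n in range(1, 4)) / 1_000, "s for <= 3 chars")   # 18.278
```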

  21. Attacks on Passwords (cont’d) [Morris and Thompson 79] • Klein (1990): 2.7% of passwords guessed within 15 minutes of machine time; 21% guessed within a week • Spafford (1992): average password length 6.8 characters; 28.9% used only lowercase characters [Pfleeger97]

  22. Password Lists • Plaintext system password list: • May be protected with strong access controls (e.g., maintained by the OS) • Most parts of the OS need not have access (e.g., scheduler, accounting) • If those parts are compromised, the passwords are exposed • Encrypted password file: • Conventional encryption: • The password is stored encrypted in the password file • When a user logs in, the system decrypts the stored entry and compares • The plaintext password is briefly exposed • One-way encryption: • When a user logs in, the system encrypts the typed password and compares it to the stored entry • If the password file is publicly readable, two people with the same password get identical entries and can compromise each other • UNIX: extends each password with a salt (a 12-bit number derived from the system time and process identifier): E(pw + salt_U) (sketch below)
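
A sketch of salted one-way password storage in the UNIX spirit; SHA-256 stands in for the historical crypt() function, and the 2-byte salt approximates UNIX's 12-bit salt:

```python
import hashlib, hmac, os

def enroll(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(2)                 # UNIX used a 12-bit salt
    # Only (salt, one-way digest) goes into the password file.
    return salt, hashlib.sha256(salt + password.encode()).digest()

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    digest = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(digest, stored)   # constant-time compare

salt, stored = enroll("2Brn2B")
assert verify("2Brn2B", salt, stored)
assert not verify("password", salt, stored)
# Two users with the same password get different salts, hence
# different file entries: neither reveals the other.
```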

  23. Password Selection • Use characters other than a–z: • Mix upper/lowercase and digits • ~100 hours to test all 6-char lowercase passwords (at 1/ms) • ~2 years to test all 6-char passwords over uppercase, lowercase, and digits • Long passwords: 6 chars or longer • Avoid real names/words: • 26^6 ≈ 300 million 6-char strings, but only ~150,000 words in a dictionary • An unlikely password: 2Brn2B (“to be or not to be”) • Change passwords regularly • Don’t write it down • Don’t tell anyone else

  24. Authentication Process • One-time passwords: • Use a challenge-response system: the system supplies an argument and the user computes an agreed function of it, e.g., f(x) = x + 1 • Purposely slow authentication systems (to frustrate guessing) • Beware impersonation of the login program • Non-password authentication: • Biometrics: finger/handprints, retina, voice recognition (sketch below)
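
A sketch of a one-time challenge-response exchange. The slide's toy function is f(x) = x + 1; the version below substitutes a keyed HMAC over a fresh random challenge, with a hypothetical pre-shared key, since an eavesdropped response is useless for the next login:

```python
import hashlib, hmac, secrets

shared_key = b"enrolled-secret"                  # hypothetical pre-shared key

def respond(challenge: bytes) -> bytes:          # runs on the user's side
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

challenge = secrets.token_bytes(16)              # fresh argument per login
expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(respond(challenge), expected)
# Replaying an old response fails: each login uses a new challenge.
```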

  25. Designing Trusted OS

  26. Major Activities • Understand the environment to be protected • Policies and models and their interactions • Design the system to provide the desired protection • Similar principles to those used in SE • Correctness of design and implementation • Verification, validation, penetration testing • Evaluation criteria → standard for certification

  27. Foundations of Trusted OS • Policy: • Security requirements: a set of well-defined, consistent, and implementable rules that have been clearly and unambiguously expressed • Model: • A model of the environment to be protected, built to fully understand the needs • Design: the means to implement the model • Defines both what and how • Trust: the system will meet expectations • Features: all functionality necessary to enforce the security policy • Assurance: confidence that the implementation will enforce the security policy [Pfleeger97]

  28. Secure vs Trusted [Pfleeger97]

  29. Security Policies • Policy: a statement of the security expected to be enforced by the system • Review: • Military security policy • Basis for much work on trusted OSs • Focus on protecting classified information • Commercial security policies: • Less rigid • Still share many similar concepts

  30. Military Security Policy • Sensitivity levels, lowest to highest: unclassified < restricted < confidential < secret < top secret • Need-to-know rule: • Info access is granted only as needed to perform one's job • Each object O has a sensitivity rank, rank_O • Each piece of classified info may be associated with more than one project (compartment) • Compartments: • May cover info at one sensitivity level • Or include info from more than one • Classification of info: <rank; compartments> • Clearance: • An indication that a person is trusted to access info up to a given sensitivity level [Pfleeger97]

  31. Compartments and Sensitivity • Dominance (≤): • A relation on objects and subjects • For subject s and object o: s ≤ o iff rank_s ≤ rank_o and compartments_s ⊆ compartments_o • Limits the sensitivity and content of the info a subject can access • A subject can read an object iff o ≤ s, i.e.: • (a) the subject's clearance level is at least as high as the info's • (b) the subject has a need to know about ALL compartments for which the info is classified • Military security enforces both sensitivity and need-to-know requirements: • Hierarchical requirement: sensitivity • Non-hierarchical: need-to-know restrictions • Access is rigidly controlled by a central body (sketch below)
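
A sketch of the dominance check and the read rule it induces; the ranks follow the levels above, while the compartment names are invented for illustration:

```python
RANKS = {"unclassified": 0, "restricted": 1, "confidential": 2,
         "secret": 3, "top secret": 4}

def le(a, b):
    """The dominance relation above: a <= b iff rank_a <= rank_b and
    compartments_a is a subset of compartments_b (b dominates a)."""
    (a_rank, a_comps), (b_rank, b_comps) = a, b
    return RANKS[a_rank] <= RANKS[b_rank] and set(a_comps) <= set(b_comps)

def can_read(subject, obj):
    return le(obj, subject)      # the object's label must be dominated

alice = ("secret", {"crypto", "nato"})
assert can_read(alice, ("confidential", {"crypto"}))
assert not can_read(alice, ("confidential", {"crypto", "sigint"}))  # no need-to-know
assert not can_read(("restricted", {"crypto"}), ("secret", {"crypto"}))  # clearance too low
```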

  32. Commercial Security Policies • Less rigid than military policies • Still have objects to protect • Sensitivity levels: • Public, proprietary, internal • Driven more by projects than by a central authority • No formal clearance policies • Access rules are less rigorous • No dominance function • A manager can override access “rules”

  33. Example Policies • Separation of duty: split a sensitive task among people (e.g., issue the order, write the check, receive the goods) • Chinese Wall security policy [Brewer and Nash 89] • Addresses commercial needs for info access protection • Domain: legal, medical, investment, or accounting firms with potential conflicts of interest • Objects: the lowest level holds elementary objects (files) • Each file contains info about only one company • Company groups: all info about each company is clustered • Conflict classes: all groups of objects belonging to competitors are clustered • Access policy: • A person can access any info, provided they have never accessed info from a different company in the same conflict class (see the figure and sketch below)

  34. Example Chinese Wall • [Figure: conflict classes {Nike, Reebok}, {Citibank, Wells Fargo, Bank One}, and {Northwest}. Initially every object is accessible; after the subject selects Reebok and Bank One, their competitors Nike, Citibank, and Wells Fargo are walled off, while Northwest, alone in its class, remains accessible.]
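
A sketch of the Brewer-Nash access decision using the conflict classes from the figure; the subject name is hypothetical:

```python
conflict_classes = [{"Nike", "Reebok"},
                    {"Citibank", "Wells Fargo", "Bank One"}]

history: dict[str, set] = {}       # subject -> companies already accessed

def may_access(subject: str, company: str) -> bool:
    seen = history.get(subject, set())
    # Denied iff a *different* company in the same conflict class was seen.
    return not any(company in cls and (seen & cls) - {company}
                   for cls in conflict_classes)

def access(subject: str, company: str) -> None:
    if not may_access(subject, company):
        raise PermissionError(f"{subject} is walled off from {company}")
    history.setdefault(subject, set()).add(company)

access("analyst", "Reebok")
access("analyst", "Bank One")
assert not may_access("analyst", "Nike")     # competitor: behind the wall
assert may_access("analyst", "Reebok")       # same company: still fine
```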

  35. Models of Security • Objective: • Test specific policy for completeness and consistency • Document policy • Help conceptualize and design implementation • Check if implementation satisfies reqts

  36. Multilevel Security • Lattice structure: • Security classes ordered by the ≤ relationship • Bell-La Padula confidentiality model • A formal description of the allowable paths of info flow in a secure system • Handles data of multiple sensitivities • S: set of subjects; O: set of objects • C(s) and C(o): fixed security classes, for clearance and classification level, respectively • Simple security property: • Subject s may have read access to an object o only if C(o) ≤ C(s) • *-Property: • A subject s who has read access to an object o may have write access to an object p only if C(o) ≤ C(p) (sketch below)
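
A minimal sketch of the two Bell-La Padula properties over totally ordered classes (compartments omitted; combine with the dominance sketch above for the full lattice):

```python
LEVEL = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def may_read(c_s: str, c_o: str) -> bool:
    """Simple security property: read o only if C(o) <= C(s)."""
    return LEVEL[c_o] <= LEVEL[c_s]

def may_write(c_o: str, c_p: str) -> bool:
    """*-property: having read o, write p only if C(o) <= C(p),
    so information never flows downward."""
    return LEVEL[c_o] <= LEVEL[c_p]

assert may_read("secret", "confidential")        # read down: allowed
assert not may_read("confidential", "secret")    # read up: denied
assert may_write("confidential", "secret")       # write up: allowed
assert not may_write("secret", "confidential")   # write down: denied
```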

  37. Secure Flow of Information • [Figure: objects O1 (low) through O6 (high) arranged by sensitivity, subjects S1–S3 by trust. Each subject reads objects at or below its level and writes to objects at or above it; a write to higher-sensitivity data is permitted only if the subject has no read access to data more sensitive than what it writes.]

  38. Design of Trusted OS • Saltzer's principles for design: • Least privilege: users and their programs should operate with the fewest privileges possible, minimizing damage • Economy of mechanism: the design of the protection mechanism should be small, simple, and easy to understand/analyze/verify • Open design: don't rely on naïve attackers; the design should be open to public scrutiny, with only key items (e.g., the password table) kept secret • Complete mediation: every access attempt should be checked (both direct attempts and attempts to circumvent) • Permission-based: the default should be denial of access • Separation of privilege: access should require two independent pieces of info (e.g., user authentication plus a cryptographic key) • Least common mechanism: minimize shared mechanisms; physical or logical separation reduces the risks that come with sharing • Ease of use: a mechanism that is easier to use is more likely to be used

  39. Assurance in Trusted OS • Testing: the most widely accepted approach • Based on the real system, not an abstraction • Weaknesses: • Cannot be exhaustive • Black box: based only on observable effects (incomplete) • White box: instrumenting the code affects its behavior • Formal verification: • The most rigorous approach • Reduces the OS and its properties to theorems to be proved • Weaknesses: • It can be difficult to specify an entire system and prove its correctness • Much of the verification depends on the correctness of the specification

  40. Assurance in Trusted OS (cont’d) • Validation: • Can include verification • Requirement checking • Design/code reviews • Module/system testing

  41. Evaluation • Use of independent evaluators • US Orange Book: • 4 divisions (with classes): A, B, C, D • D: no requirements; A: requires formal verification • 4 areas of criteria: security policy, accountability, assurance, documentation

  42. Evaluation (cont’d) • ITSEC: Information Technology Security Evaluation Criteria • Origins: England, Germany, and France independently worked on criteria (e.g., Germany's “Green Book”) • ITSEC and TCSEC (the US effort) were combined to form the Common Criteria: • Combines the advantages of both • Overcomes the weaknesses of each

  43. Example OS Implementations • Not designed for security: • Unix: convenient, more user friendly, but with several security problems • PR/SM: a resource manager • Uses a separation-based implementation, yielding a more secure environment • Domain separation, auditing, a secure communication path to PR/SM for security administrators/domains • Designed for security: • VAX Security Kernel for VAX/VMS (Digital Equipment Corp.) • A1 rating • Rigorous security kernel: strictly enforced Bell-La Padula mandatory confidentiality and Biba mandatory integrity • 1982–84: original work; evaluated 1988–90; terminated in 1990 • DEC was unwilling to support two versions of VMS • TMach: research/product sponsored by DARPA • B3 level: a trusted version of the Mach OS from CMU, now part of the Open Software Foundation
