Faults - analysis
Presentation Transcript

  1. Faults - analysis • Z proof and System Validation Tests were the most cost-effective activities. • Traditional “module testing” was arduous and found few faults, except in fixed-point numerical code.

  2. Proof metrics • Probably the largest program proof effort attempted… • c. 9000 VCs (verification conditions): 3100 for Functional & Safety properties, 5900 from the RTC (run-time check) generator. • 6800 discharged by the simplifier (hint: buy a bigger workstation!) • 2200 discharged by the SPARK proof checker or by “rigorous argument.”
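The VC counts on this slide can be cross-checked with a few lines of Python. The figures are the slide's own approximate numbers, not fresh measurements:

```python
# Sanity-check of the SHOLIS verification-condition (VC) counts quoted above.
total_vcs = 9000          # approximate total
functional_safety = 3100  # Functional & Safety properties
rtc_generated = 5900      # from the RTC (run-time check) generator
by_simplifier = 6800      # discharged automatically by the simplifier
by_checker = 2200         # SPARK proof checker or "rigorous argument"

# The two breakdowns should each account for the full total.
assert functional_safety + rtc_generated == total_vcs
assert by_simplifier + by_checker == total_vcs

# Proportion discharged fully automatically.
print(f"Automatically simplified: {by_simplifier / total_vcs:.0%}")  # → 76%
```

Roughly three-quarters of the VCs were discharged without human intervention, which is consistent with the next slide's point that simplification dominates the computational cost.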

  3. Proof metrics - comments • Simplification of VCs is computationally intensive, so buy the most powerful server available. • (1998 comment) A big computer is far cheaper than the time of the engineers using it! • (Feb. 2001 comment) Times have changed - significant proofs can now be attempted on a £1000 PC! • Proof of exception-freedom is extremely useful, and gives real confidence in the code. • Proof is still far less effort than module testing.

  4. Difficult bits... • User-interface. • Tool support. • Introduced state.

  5. User Interface • Sequential code & serial interface to displays. • Driving an essentially parallel user-interface is difficult. • e.g. Updating background pages, run-indicator, button tell-backs etc. • Some of the non-SIL4 displays were complex, output-intensive and under-specified in SRS.

  6. Tool support • SPARK tools are now much better than they were five years ago! Over 50 improvements were identified as a result of SHOLIS. • SPARK 95 would have helped. • The compiler has been reliable and generates good code. • Weak support in the SPARK proof system for fixed and floating point. • Many in-house static analysis tools were developed: WCET analysis, stack analysis, and requirements traceability tools, all new and successful.

  7. Introduced state • Some faults were due to introduced state: • Optimisation of graphics output. • Device driver complexity. • Co-routine mechanisms.

  8. SHOLIS - Successes • One of the largest Z/SPARK developments ever. • Z proof work proved very effective. • One of the largest program proof efforts ever attempted. • Successful proof of exception-freedom on whole system. • Proof of system-level safety-properties at both Z and code level.

  9. SHOLIS - Successes (2) • Strong static analysis removes many common faults before they can arise. • Software Integration was trivial. • Successful use of static analysis of WCET and stack use. • Successful mixing of SIL4 and non-SIL4 code in one program using SPARK static analysis. • The first large-scale project to meet 00-55 SIL4. SHOLIS influenced the revision of 00-55 between 1991 and 1997.

  10. 00-55/56 Resources • http://www.dstan.mod.uk/ • “Is Proof More Cost-Effective Than Testing?” King, Chapman, Hammond, and Pryor. IEEE Transactions on Software Engineering, Volume 26, Number 8. August 2000.

  11. Programme • Introduction • What is High Integrity Software? • Reliable Programming in Standard Languages • Coffee • Standards Overview • DO178B and the Lockheed C130J • Lunch • Def Stan 00-55 and SHOLIS • ITSEC, Common Criteria and Mondex • Tea • Compiler and Run-time Issues • Conclusions

  12. Outline • UK ITSEC and Common Criteria schemes • What are they? • Who’s using them? • Main Principles • Main Requirements • Practical Consequences • Example Project - the MULTOS CA

  13. The UK ITSEC Scheme • The “I.T. Security Evaluation Criteria” • A set of guidelines for the development of secure IT systems. • Formed from an effort to merge the applicable standards from Germany, UK, France and the US (the “Orange Book”).

  14. ITSEC - Basic Concepts • The “Target of Evaluation” (TOE) is an IT system (possibly many components). • The TOE provides security (e.g. confidentiality, integrity, availability).

  15. ITSEC - Basic Concepts (2) • The TOE has: • Security Objectives (Why security is wanted.) • Security Enforcing Functions (SEFs) (What functionality is actually provided.) • Security Mechanisms (How that functionality is provided.) • The TOE has a Security Target • Specifies the SEFs against which the TOE will be evaluated. • Describes the TOE in relation to its environment.

  16. ITSEC - Basic Concepts (3) • The Security Target contains: • Either a System Security Policy or a Product Rationale. • A specification of the required SEFs. • A definition of required security mechanisms. • A claimed rating of the minimum strength of the mechanisms. (“Basic, Medium, or High”, based on threat analysis) • The target evaluation level.
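The contents of a Security Target listed above can be sketched as a small data model. This is purely illustrative: the class and field names are our own invention, since ITSEC prescribes document contents, not a data schema.

```python
from dataclasses import dataclass
from typing import List

# Illustrative model of an ITSEC Security Target, following the slide above.
# Names are assumptions, not drawn from the standard itself.

@dataclass
class SecurityTarget:
    policy_or_rationale: str   # System Security Policy or Product Rationale
    sefs: List[str]            # specification of the required SEFs
    mechanisms: List[str]      # definition of required security mechanisms
    strength_claim: str        # claimed minimum strength of mechanisms
    evaluation_level: str      # target evaluation level, "E0" .. "E6"

    def __post_init__(self) -> None:
        # Enforce the value ranges stated on the slides.
        assert self.strength_claim in ("Basic", "Medium", "High")
        assert self.evaluation_level in {f"E{n}" for n in range(7)}

# A hypothetical product's target, aiming at the most rigorous level.
st = SecurityTarget(
    policy_or_rationale="Product Rationale",
    sefs=["User authentication", "Audit logging"],
    mechanisms=["Password verification against a stored hash"],
    strength_claim="High",
    evaluation_level="E6",
)
```

The point of the sketch is simply that a Security Target fixes, up front, both *what* is claimed (SEFs, mechanisms, strength) and *how confidently* it will be checked (the evaluation level).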

  17. ITSEC Evaluation Levels • ITSEC defines 7 levels of evaluation criteria, E0 through E6, in increasing order of rigour. • E0 denotes “inadequate assurance.” • E6 is the most stringent, largely comparable with the toughest standards in the safety-critical industries.

  18. ITSEC Evaluation Levels and Required Information for Vulnerability Analysis

  19. Evaluation • To claim compliance with a particular ITSEC level, a system or product must be evaluated against that level by a Commercial Licensed Evaluation Facility (CLEF). • The evaluation report answers the question: “Does the TOE satisfy its security target at the level of confidence indicated by the stated evaluation level?” • A list of evaluated products and systems is maintained.

  20. ITSEC Correctness Criteria for each Level • Requirements for each level are organized under the following headings: • Construction - The Development Process • Requirements, Architectural Design, Detailed Design, Implementation • Construction - Development Environment • Configuration Control, Programming Languages and Compilers, Developer Security • Operation - Documentation • User documentation, Administrative documentation • Operation - Environment • Delivery and Configuration, Start-up and Operation

  21. ITSEC Correctness Criteria - Examples • Development Environment - Programming languages and compilers • E1 - No Requirement • E3 - Well defined language - e.g. ISO standard. Implementation dependent options shall be documented. The definition of the programming languages shall define unambiguously the meaning of all statements used in the source code. • E6 - As E3 + documentation of compiler options + source code of any runtime libraries.

  22. The Common Criteria • The US “Orange Book” and ITSEC are now being replaced by the “Common Criteria for IT Security Evaluation.” • Aims to set a “level playing field” for developers in all participating states: • UK, USA, France, Spain, Netherlands, Germany, Korea, Japan, Australia, Canada, Israel... • Aims for international mutual recognition of evaluated products.

  23. CC - Key Concepts • Defines 2 types of IT Security Requirement: • Functional Requirements • Define the behaviour of the system or product. • What a product or system does. • Assurance Requirements • For establishing confidence in the implemented security functions. • Is the product built well? Does it meet its requirements?

  24. CC - Key Concepts (2) • A Protection Profile (PP) - a set of security objectives and requirements for a particular class of system or product. • e.g. Firewall PP, Electronic Cash PP, etc. • A Security Target (ST) - a set of security requirements and specifications for a particular product (the TOE), against which its evaluation will be carried out. • e.g. The ST for the DodgyTech6000 Router

  25. CC Requirements Hierarchy • Functional and assurance requirements are categorized into a hierarchy of: • Classes • e.g. FDP - User Data Protection • Families • e.g. FDP_ACC - Access Control Policy • Components • e.g. FDP_ACC.1 - Subset access control • These are named in PPs and STs.
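The class/family/component hierarchy above is visible in the component identifiers themselves. A short sketch, assuming the `CCC_FFF.n` naming pattern that the slide's examples follow:

```python
import re

# Split a Common Criteria component identifier such as "FDP_ACC.1"
# into the hierarchy named on the slide: class, family, component.

def parse_component(name: str) -> dict:
    m = re.fullmatch(r"([A-Z]{3})_([A-Z]+)\.(\d+)", name)
    if m is None:
        raise ValueError(f"not a component identifier: {name!r}")
    cls, family, number = m.groups()
    return {
        "class": cls,                 # e.g. FDP - User Data Protection
        "family": f"{cls}_{family}",  # e.g. FDP_ACC - Access Control Policy
        "component": int(number),     # e.g. 1 - Subset access control
    }

print(parse_component("FDP_ACC.1"))
# → {'class': 'FDP', 'family': 'FDP_ACC', 'component': 1}
```

PPs and STs cite these identifiers directly, so the naming scheme doubles as a compact index into the CC catalogue.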

  26. Evaluation Assurance Levels (EALs) • The CC defines 7 EALs, EAL1 through EAL7. • An EAL defines a set of assurance components which must be met. • For example, EAL4 requires ALC_TAT.1, while EAL6 and EAL7 require ALC_TAT.3. • EAL7 “roughly” corresponds with ITSEC E6 and Orange Book A1.
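The ALC_TAT example above can be turned into a minimal lookup. Only the levels the slide actually quotes are included; the hierarchy assumption (that a higher-numbered component subsumes a lower one) holds for this family but is stated here as an assumption, not a general CC rule:

```python
# ALC_TAT component required at each EAL, as quoted on the slide.
# Other EALs are deliberately omitted rather than guessed.
ALC_TAT_REQUIRED = {"EAL4": 1, "EAL6": 3, "EAL7": 3}

def meets(provided_level: int, eal: str) -> bool:
    """Does a product offering ALC_TAT at `provided_level` satisfy `eal`?

    Assumes components in this family are hierarchical: a higher-numbered
    component subsumes the requirements of a lower-numbered one.
    """
    required = ALC_TAT_REQUIRED.get(eal)
    if required is None:
        raise KeyError(f"no ALC_TAT requirement recorded for {eal}")
    return provided_level >= required

assert meets(1, "EAL4")       # ALC_TAT.1 suffices for EAL4
assert not meets(1, "EAL6")   # but not for EAL6, which needs ALC_TAT.3
assert meets(3, "EAL7")
```

In practice an EAL bundles many such components, so an evaluation is a conjunction of checks like this one across every named family.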

  27. The MULTOS CA • MULTOS is a multi-application operating system for smart cards. • Applications can be loaded and deleted dynamically once a card is “in the field.” • To prevent forging, applications and card-enablement data are signed by the MULTOS Certification Authority (CA). • At the heart of the CA is a high-security computer system that issues these certificates.

  28. The MULTOS CA (2) • The CA has some unusual requirements: • Availability - aimed for c. 6 months between reboots, with warm-standby fault-tolerance. • Throughput - the system is distributed and has custom cryptographic hardware. • Lifetime - decades, and the system must be supported for that long. • Security - most of the system is tamper-proof, and is subject to the most stringent physical and procedural security. • Was designed to meet the requirements of UK ITSEC E6. • All requirements, design, implementation, and (on-going) support by Praxis Critical Systems.

  29. The MULTOS CA - Development Approach • Overall process conformed to E6. • Conformed in detail where retro-fitting was impossible: • development environment security • language and specification standards • CM and audit information • Reliance on COTS for E6 minimized or eliminated. • Assumed arbitrary but non-Byzantine behaviour.

  30. Development approach limitations • COTS not certified (Windows NT, Backup tool, SQL Server…) • We were not responsible for operational documentation and environment • No formal proof • No systematic effectiveness analysis

  31. System Lifecycle • User requirements definition with REVEAL™ • User interface prototype • Formalisation of security policy and top-level specification in Z • System architecture definition • Detailed design including formal process structure • Implementation in SPARK, Ada95 and VC++ • Top-down testing with coverage measurement

  32. Some difficulties... • Security Target - What exactly is an SEF? • No one seems to have a common understanding… • “Formal description of the architecture of the TOE…” • What does this mean? • Source code or hardware drawings for all security relevant components… • Not for COTS hardware or software.

  33. The CA Test System

  34. Use of languages in the CA • Mixed language development - the right tools for the right job! • SPARK 30% “Security kernel” of tamper-proof software • Ada95 30% Infrastructure (concurrency, inter-task and inter-process communications, database interfaces etc.), bindings to ODBC and Win32 • C++ 30% GUI (Microsoft Foundation Classes) • C 5% Device drivers, cryptographic algorithms • SQL 5% Database stored procedures

  35. Use of SPARK in the MULTOS CA • SPARK is almost certainly the only industrial-strength language that meets the requirements of ITSEC E6. • Complete implementation in SPARK was simply impractical. • Use of Ada95 is “Ravenscar-like”: simple, static allocation of memory and tasks. • Dangerous or new language features avoided, such as controlled types, requeue, user-defined storage pools, etc.

  36. Conclusions - Process Successes • Use of Z for formal security policy and system spec. helped produce an indisputable specification of functionality • Use of Z, CSP and SPARK “extended” formality into design and implementation • Top-down, incremental approach to integration and test was effective and economic

  37. Conclusions - E6 Benefits and Issues • E6 support of formality is in-tune with our “Correctness by Construction” approach • encourages sound requirements and specification • we are more rigorous in later phases • High-security using COTS both possible and necessary • cf safety world • E6 approach sound, but clarifications useful • and could gain even higher levels of assurance... • CAVEAT • We have not actually attempted evaluation • but benefits from developing to this standard

  38. ITSEC and CC Resources • ITSEC • www.cesg.gov.uk • Training, ITSEC Documents, UK Infosec Policy, “KeyMat”, “Non Secret Encryption” • www.itsec.gov.uk • Documents, Certified products list, Background information. • Common Criteria • csrc.nist.gov/cc • www.commoncriteria.org • Mondex • Ives, Blake and Earl Michael: Mondex International: Reengineering Money. London Business School Case Study 97/2. See http://isds.bus.lsu.edu/cases/mondex/mondex.html

  39. Programme • Introduction • What is High Integrity Software? • Reliable Programming in Standard Languages • Coffee • Standards Overview • DO178B and the Lockheed C130J • Lunch • Def Stan 00-55 and SHOLIS • ITSEC, Common Criteria and Mondex • Tea • Compiler and Run-time Issues • Conclusions


  41. Outline • Choosing a compiler • Desirable properties of High-Integrity Compilers • The “No Surprises” Rule

  42. Choosing a compiler • In a high-integrity system, the choice of compiler should be documented and justified. • In a perfect world, we would have time and money to: • Search for all candidate compilers, • Conduct an extensive practical evaluation of each, • Choose one, based on fitness for purpose, technical features and so on...

  43. Choosing a compiler (2) • But in the real world… • The candidate set of compilers may have only 1 member! • Your client’s favourite compiler is already bought and paid for… • Bias and/or familiarity with a particular product may override technical issues.

  44. Desirable Properties of an HI compiler • Much more than just “Validation” • Annex H support • Qualification • Optimization and other “switches” • Competence and availability of support • Runtime support for HI systems • Support for Object-Code Verification

  45. What does the HRG Report have to say? • Recommends validation of appropriate annexes - almost certainly A, B, C, D, and H. Annex G (Numerics) may also be applicable for some systems. • Does not recommend use of a subset compiler, although recognizes that a compiler may have a mode in which a particular subset is enforced. • Main compiler algorithms should be unchanged in such a mode.

  46. HRG Report (2) • Evidence required from compiler vendor: • Quality Management System (e.g. ISO 9001) • Fault tracking and reporting system • History of faults reported, found, fixed etc. • Availability of test evidence • Access to known faults database • A full audit of a compiler vendor may be called for.

  47. Annex H Support • Pragma Normalize_Scalars • Useful! Compilers should support this, but remember that many scalar types do not have an invalid representation. • Documentation of Implementation Decisions • Yes. Demand this from compiler vendor. If they can’t or won’t supply such information, then find out why not!

  48. Annex H Support (2) • Pragma Reviewable. • Useful in theory. Does anyone implement this other than to “turn on debugging”? • Pragma Inspection_Point • Yes please. Is particularly useful in combination with hardware-level debugging tools such as in-circuit emulation, processor probes, and logic analysis.

  49. Annex H Support (3) • Pragma Restrictions • Useful. • Some runtime options (e.g. Ravenscar) imply a predefined set of Restrictions defined by the compiler vendor. • Better to use a coherent predefined set than to “roll your own” • Understand effect of each restriction on code-gen and runtime strategies. • Even in SPARK, some restrictions are still useful - e.g. No_Implicit_Heap_Allocation, No_Floating_Point, No_Fixed_Point etc.

  50. Compiler Qualification • “Qualification” (whatever that means) of a full compiler is beyond reach. • Pragmatic approaches: • Avoidance of “difficult to compile” language features in the HI subset. • In-service history • Choice of “most commonly used” options • Access to faults history and database • Verification and Validation • Object code verification (last resort!)