
The Automated Refinement of a Requirements Domain Theory



  1. The Automated Refinement of a Requirements Domain Theory Lee McCluskey co-researchers: Margaret West Beth Richardson Department of Computing and Mathematical Sciences

  2. Talk Outline PART 1. Introduction to the - ATC application - The Conflict Prediction Specification (CPS) - CPS tools environment PART 2. Research into the Theory Revision Tool reference: Journal of Automated Software Engineering, Special Issue on Inductive Logic Programming, Spring 2001.

  3. FAROAS - A Case Study involving Aircraft Separation Criteria [Figure: two aircraft route segments, segment1 and segment2, in the Shanwick Oceanic Area]

  4. Example of Separation Requirement Paragraph 3.6.4.1 of the Manual of Air Traffic Services, Part 2, Section 3 - separation standards - states: “For subsonic aircraft, the minimum longitudinal separation between turbojet aircraft, meeting the MNPS, and operating wholly or partly in MNPS airspace, shall be 10 minutes, provided that …….ETC”

  5. The FAROAS Project - Results 1992-1994 (contract research from NATS Ltd): • encoded part of the requirements of a system that is to maintain separation between aircraft over the Atlantic Ocean in an expressive, structured logic • The kernel of this specification was written in about 500 logic axioms and is called the CPS • A validation environment was built around the CPS and helped “debug” it

  6. CPS:
     [ (one_or_both_of Segment1 and Segment2 are_flown_at_subsonic_speed)
       & (the_Aircraft_on(Segment1) and the_Aircraft_on(Segment2) meet_mnps)
       & (the_Aircraft_on(Segment1) and the_Aircraft_on(Segment2) are_jets)
       & (the_Profile_containing(Segment1) & the_Profile_containing(Segment2) are_wholly_or_partly_in_the_mnps_airspace) ]
     => [ (the_basic_min_longitudinal_sep_Val_in_mins_required_for Segment1 and Segment2) = 10 <=> …. ETC

     Auto-generated CPSlp:
     the_basic_min_longitudinal_sep_Val_in_mins_required_for(Segment1,Segment2,10) :-
         are_subject_to_oceanic_cpr(Segment1,Segment2),
         both_are_flown_at_supersonic_speed(Segment1,Segment2),
         ( both_are_flown_at_the_same_mach_number_in_level_flight(Segment1,Segment2)
         ; the_Aircraft_on_segment(Segment1,Aircraft1),
           the_Type_of(Aircraft1,Type1),
           the_Aircraft_on_segment(Segment2,Aircraft2),
           the_Type_of(Aircraft2,Type2),
           Type1 = Type2,
           are_cruise_climbed(Segment1,Segment2)
         ),
         .. ETC

  7. Opportunities for bug detection in a Formal Specification
     CPS - an ATC requirements statement. Ways to DETECT BUGS:
     - Expert Visual Inspection
     - Automated Syntax Checking
     - Automated Reasoning
     - Automated Translation to Executable Software
     - Batch Testing
     - Simulation

  8. One Major Outcome of FAROAS • Validation and Maintenance of Complex Models (Ontologies? Domain theories? Formal Specifications? KBs?) require automated tool support to identify bugs and help remove them. • Such “models” are not written like programs to allow systematic testing, but are designed to decrease the semantic gap between the model and what is modelled.

  9. IMPRESS (EPSRC / NATS Ltd 96-98) IDEAS: From the Formal Specification standpoint: • The CPS is a “high level” specification - a kind of requirements domain theory - so why not use theory revision (or other techniques from ML) to help improve the theory? From the ML standpoint: • The “fielding” of ML techniques is of great interest to the ML community.

  10. Machine Learning (ML) The investigation and construction of systems which refine existing knowledge and/or acquire new knowledge. One way for learning to take place is by feeding a system (the performance component) with training examples, and letting a learning component use the results to improve the system’s behaviour.

  11. Example Architecture of a Learning System
      [Diagram components:]
      - Training Examples, fed to the Performance Component
      - Results + Execution Trace
      - Trace Analysis: Blame and Credit Assignment
      - Construct Refinements

  12. Abstract Architecture of the CPS’s learning tool
      [Diagram components:]
      - CPS in Enveloped Form + Tests in Enveloped Form
      - Meta-Interpreter, producing Results + Proof Trees
      - Proof Tree Analysis: Blame Assignment
      - Inductive Refinement Algorithm

  13. CPS ENVIRONMENT: ABSTRACT VIEW
      [Diagram components:]
      - CPS: Many-Sorted Logic Specification + tests + queries; CPS Grammar
      - PARSER + TRANSLATORS, producing: html, CPS - structured English, CPS - logic program, Envelope, Tests in Prolog
      - Enveloped Logic Program and Tests
      - Theory Revision Test Harness + Oracle
      - Outputs: TEST RESULTS, CPS Refinements

  14. PROBLEMS! Blame assignment: mark those clauses that take part in faulty proof trees - use a statistical measure to pick out those most likely to be faulty. BUT the operational version of the CPS is far from pure clausal form - e.g. it contains or’s and not’s. ‘not’ is a particular problem, as it changes the ‘polarity’ of the proof tree.
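The statistical blame-assignment idea - scoring clauses by how often they occur in faulty proof trees - can be sketched as follows. This is a minimal illustration, not the IMPRESS implementation; the clause names and the frequency-ratio measure are invented for the example:

```python
from collections import Counter

def blame(faulty_proofs, correct_proofs):
    """Score each clause by the fraction of its uses that occur in
    faulty proof trees; a higher score means more suspicion."""
    faulty = Counter(c for proof in faulty_proofs for c in proof)
    total = Counter(c for proof in faulty_proofs + correct_proofs for c in proof)
    return sorted(((faulty[c] / total[c], c) for c in faulty), reverse=True)

# Invented example: clause "c2" appears only in faulty proofs.
faulty_proofs = [["c1", "c2"], ["c2", "c3"]]
correct_proofs = [["c1", "c3"], ["c3"]]
print(blame(faulty_proofs, correct_proofs)[0][1])  # prints "c2"
```

A measure like this breaks down in exactly the way the slide warns: under negation-as-failure, a clause's participation in a faulty proof no longer implies the clause itself is to blame.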

  15. PROBLEMS! Clause revision: use a hill-climbing technique to help find appropriate revisions. BUT any kind of conventional TR algorithm seemed doomed to complexity problems, AND typical TR operators, e.g. Dropping Condition, may be too superficial.

  16. Overcoming the Problems.. We explored a range of different approaches and eventually discovered a method that succeeded - it was based on TR operators that were: FOCUSSED and COMPOSITE

  17. Focus: Ordinal Sorts.... Errors in the CPS tend to occur in complex groups of ordering relations (involving sorts like Flight Level, Time, Latitude etc.). These totally ordered sorts we call “Ordinal” - we focused on them when designing an algorithm for finding and removing errors.

  18. Composite TR Operators We created a “refine” algorithm which:
      - stores all the instances of clauses used in faulty proofs
      - chooses a clause and an ordinal variable to refine
      - induces “regions” of ordinal values from the variable’s values in the set of faulty instances of that clause
      - adds/subtracts these regions and evaluates the changes by executing the theory
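The region-induction step can be illustrated with a small sketch. The representation is assumed, not taken from the project: each faulty clause instance supplies one value of the chosen ordinal variable, and a new region starts whenever two successive values are more than a threshold apart:

```python
def induce_regions(values, gap=1.0):
    """Cluster the ordinal values seen in faulty clause instances into
    closed intervals ("regions"): a new region begins whenever two
    successive sorted values differ by more than `gap`."""
    vs = sorted(values)
    regions, lo = [], vs[0]
    for a, b in zip(vs, vs[1:]):
        if b - a > gap:
            regions.append((lo, a))
            lo = b
    regions.append((lo, vs[-1]))
    return regions

# Invented flight-level data: two clusters of faulty values.
print(induce_regions([330, 335, 340, 410, 415], gap=20))
# prints [(330, 340), (410, 415)]
```

Each induced region is then a candidate condition to add to (or subtract from) the chosen clause, which is what makes the operator composite rather than a single-literal edit.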

  19. Theory Refinement
      In:  an imperfect theory T, training examples E
      Out: a sequence of i revisions RS, updated theory T
      1. i := 0;
      2. repeat
         2.1 call apply(T, E, Results, S0);
         2.2 call blame(T, Results, RP);
         2.3 call refine(T, RP, Results, R, Sm);
         2.4 if Sm > S0 then
             2.4.1 i := i + 1;
             2.4.2 T := R(T);
             2.4.3 RS[i] := R;
             end if
         until Sm =< S0
      3. end
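The loop above can be sketched in Python - a schematic rendering of the slide's pseudocode, with apply, blame, and refine passed in as stand-ins for the real components:

```python
def theory_refinement(T, E, apply_fn, blame_fn, refine_fn):
    """Greedy hill-climbing loop: keep accepting revisions while each
    candidate revision raises the theory's score on the examples E."""
    RS = []
    while True:
        results, s0 = apply_fn(T, E)        # run theory on the examples
        rp = blame_fn(T, results)           # revision points (suspect clauses)
        R, sm = refine_fn(T, rp, results)   # candidate revision + its score
        if sm <= s0:                        # no improvement: stop
            return RS, T
        T = R(T)                            # apply the revision
        RS.append(R)
```

Note that each iteration applies the theory to the whole example set; the conclusions (slide 23) flag exactly this cost as something the tool's design must minimise.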

  20. ML Example - Learning a new part of a Requirements Specification On May 10th, 1998, Margaret West, working on the Impress project at Huddersfield University, collected 237 examples of pairs of aircraft profiles “cleared” under the new RVSM requirements. She fed them to a learning system we had created and embedded in the CPS’s environment. By May 13th, the program had learned the general criteria for aircraft under RVSM conditions.

  21. RVSM Example:
      the_min_vertical_sep_Val_in_feet_required_for(A,B,C,D,2000) :-
          are_subject_to_oceanic_cpr(B,D),
          ( the_machno_Val_on(B,H), H < 1.0,
            the_machno_Val_on(D,G), G < 1.0,
            ( A is_above fl(290) ; C is_above fl(290) )
          ; ( the_machno_Val_on(B,F), F ge 1.0
            ; the_machno_Val_on(D,E), E ge 1.0 ),
            ( A is_at_or_below fl(430) ; C is_at_or_below fl(430) )
          ).

      CHANGE ADDED:
          not( ( H ge 0.80, H le 0.86,
                 A is_at_or_above fl(330), A is_at_or_below fl(370),
                 C is_at_or_above fl(330), C is_at_or_below fl(370) ) )

  22. Using the Theory Revision Tool: Results
      With one set of data, 100 per cent coverage was reached by the system making 3 revisions:
      1 - a change introducing RVSM criteria
      2 - a change that removed a bug that had not hitherto been spotted
      3 - a (?) meaningless change
      During the Impress project, the error rate of the CPS executable decreased from hundreds of errors per 5000 tests to fewer than 10 errors per 5000.

  23. TR Tool Research: Conclusions We showed the potential of using learning techniques to • find and remove bugs • help in the maintenance of a formal specification of requirements. The main lessons we have learned in the fielding of a TR tool are that it should use refinement methods focused on the likely sources of error within the theory, and it should be designed so as to minimise the number of times the theory has to be applied to the whole training example set.
