JAOUT: Automated Generation of Aspect Oriented Unit Test


  1. JAOUT: Automated Generation of Aspect Oriented Unit Test. Guoqing Xu, Z. Yang, H. Huang, Q. Chen, L. Chen and F. Xu. Software Engineering Lab (SEL), East China Normal University. APSEC'04, Dec. 2nd 2004, Pusan, Korea.

  2. Background • Test Oracle problem -- how do we identify oracles in unit testing? • Automating test oracle generation -- Manually writing test code is a labor-intensive job. -- JMLUnit framework [CL02]. • Automating test case generation -- Korat [BKM02]. -- JMLAutoTest [XY03].

  3. Problems • The current test automation process relies on formal assertions and predicates. • Conventional specifications focus only on a program's functional behavior, with little support for specifying non-functional behavior, e.g. temporal logic, performance... • How can we find a common approach to identifying oracles for testing non-functional aspects, and how can that process be automated?

  4. Our approach • Using a crosscutting property of the program as the criterion for checking the application's correctness in the corresponding aspect is well suited to unit testing problems. -- In AOP, crosscutting properties are used to model the program from different aspects. -- Some problems that are difficult to handle in conventional ways are solved easily.

  5. Application-specific Aspect • A new notion: application-specific aspects (ASS) -- top-level, application-related aspects. -- established at the design level. -- may be picked out from low-level language aspects. -- all the ASS for the same application share some common features, e.g. testing ASS, tracing ASS... -- may be translated into language-level aspects.

  6. Aspect-Oriented Test Description Language • How do we describe ASS? -- a formal notation is needed. -- it cannot be too complicated. • Aspect-Oriented Test Description Language (AOTDL) -- used by the designer at the design level. -- can be translated into AspectJ aspects. • Basic units in AOTDL -- Utility unit -- MeaninglessCase Advice unit -- Error Advice unit • General advice syntax: advicetype (arguments) : pointcuts : conditions : message

  7. AOTDL (cont.)

TestingAspect tempLogic {
  Utility {
    protected boolean isInitialized = false;

    // push is reached
    pointcut pushReached(Stack st):
      target(st) && call(void Stack.push(Node));

    // init is reached
    pointcut initReached(Stack st):
      target(st) && call(void Stack.init());

    // after advice: record that init has run
    after(Stack st): initReached(st) {
      isInitialized = true;
    }
  }
  ...
}

// The class under test:
class Stack {
  public void init() { ... }
  public void push(Node n) { ... }
  ...
}

  8. AOTDL (cont.)

// Inside TestingAspect tempLogic:
Error Advice {
  /* advices specifying the criteria for test errors */
  before(Stack s) : pushReached(s) : !isInitialized : "Not Initialized";
  ...
}

MeaninglessCase Advice {
  /* advices specifying the criteria for meaningless test cases */
  before(Stack s) : pushReached(s) : s.getSize() >= MAX : "Overflow";
  ...
}

  9. AOTDL (cont.): the generated AspectJ code

// error advices
before(Stack s) throws TestErrorException: pushReached(s) {
  if (!isInitialized) {
    TestErrorException ex = new TestErrorException("Not Initialized");
    ex.setSource("TempLogic");
    throw ex;
  }
}
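The generated advice relies on a TestErrorException type that carries the name of the Testing Aspect raising it (set via setSource and read back by the generated test method on slide 13). The slides do not show its definition; a minimal sketch of what such a class presumably looks like:

// Hypothetical sketch: the slides only show this exception being
// constructed with a message and tagged with setSource(...).
public class TestErrorException extends Exception {
    private String source; // name of the Testing Aspect that raised the error

    public TestErrorException(String message) {
        super(message);
    }

    public void setSource(String source) {
        this.source = source;
    }

    public String getSource() {
        return source;
    }
}

A MeaninglessCaseException, caught by the generated test method on slide 13, would presumably follow the same pattern.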

  10. JAOUT Framework Overview

  11. JAOUT: Automated Generation of AO Testing Framework • JAOUT takes a Java class M.java as input and automatically generates a JUnit test framework: -- Aspect_M_Test.java, the JUnit test class. -- Aspect_M_TestCase.java, the test case provider. -- Aspect_M_TestClient.java, the JMLAutoTest test case generation class.

  12. Test Framework Definition • For a class C and its method M(A1 a1, A2 a2, ..., An an), the generated test suite is defined as -- C[] receivers; -- A1[] vA1; ...; An[] vAn; • There is a corresponding init_Ai method for each type Ai, and a method init_receiver, in the test case provider to initialize test cases. • Testers use the APIs provided by JMLAutoTest to generate the test case space in the test client and pass it to the test case provider.
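To make the naming scheme concrete, here is a hand-written sketch of what the generated test case provider could look like for the running Stack example (method push(Node n)); the field and method names follow the convention above, but the bodies are assumptions, since the slides do not list this file:

// Hypothetical sketch of the generated test case provider for Stack.push(Node).
public class Aspect_Stack_TestCase {
    protected Stack[] receivers; // objects the tested method is invoked on
    protected Node[] vNode;      // argument values for the Node parameter

    // Filled with the case space that the test client builds via JMLAutoTest.
    protected void init_receiver(Stack[] cases) {
        receivers = cases;
    }

    protected void init_Node(Node[] cases) {
        vNode = cases;
    }
}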

  13. Generated Test Method

public void testM() {
  for (int i0 = 0; i0 < receiver.length; i0++) {
    for (int i1 = 0; i1 < a1.length; i1++) {
      ...
      try {
        receiver[i0].M(a1[i1], ..., an[in]);
      } catch (MeaninglessCaseException e) {
        /* ... tell framework test case was meaningless ... */
        continue;
      } catch (TestErrorException e) {
        String msg = e.getSource();
        fail(msg + NEW_LINE + e.getMessage());
      } catch (java.lang.Throwable e) {
        continue;
      } finally {
        setUp(); // restore test cases
      }
    }
  }
}

  14. Test Result

...in push
false
F
Time: 0.06
There was 1 failure:
1) testPush(sample.Stack_Aspect_TestCase)
   junit.framework.AssertionFailedError: In Testing Aspect TempLogic: Not Initialized!
   at sample.Stack_Aspect_Test.testPush(Stack_Aspect_Test.java:155)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at sample.Stack_Aspect_Test.run(Stack_Aspect_Test.java:26)
   at sample.Stack_Aspect_TestCase.main(Stack_Aspect_TestCase.java:24)
FAILURES!!!
Tests run: 3, Failures: 1, Errors: 0, Meaningless: 0

  15. Double-phase Testing • The generated test case space contains a large number of meaningless test inputs. • Running the program on meaningless inputs is a waste of time. • The effectiveness of the test results is compromised if the test is exercised with too many meaningless cases.

  16. Double-phase Testing (cont.) • Goals: 1. Prevent meaningless test cases from being processed in the final test, thereby saving time. 2. Do not require testers to know the details of the program under test.

  17. Double-phase Testing (cont.) • Steps 1. Establish an operational profile. 2. Run the first-phase test (pre-test). 3. Run the second-phase test (final test). • Methods 1. Perform statistics-based test case selection. 2. Treat the pre-test as the cost of reducing the number of meaningless cases in the final test.

  18. Working Sequence

  19. Operational Profile • The operational profile is the criterion defined by the tester to divide the generated test case space into several partitions. • The validity of double-phase testing relies on the quality of the operational profile. • It is a good idea to start with several models and evaluate their predictive accuracy before settling on one.

  20. The first phase • Take a relatively small number (e.g. 10%) of test cases from each partition, using the same fraction for every partition. • Run these groups of cases separately. • Record the number of meaningless test cases that appear in each group.
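A minimal sketch of this first-phase sampling, assuming the case space has already been split into partition lists and shuffled; the helper class and method names are illustrative, not JAOUT API:

import java.util.ArrayList;
import java.util.List;

public class PreTest {
    // Hypothetical sketch: draw the pre-test sample by taking the same
    // fraction (e.g. 0.10) of cases from every partition.
    public static <T> List<T> sample(List<List<T>> partitions, double fraction) {
        List<T> sample = new ArrayList<>();
        for (List<T> partition : partitions) {
            int k = (int) Math.ceil(partition.size() * fraction);
            sample.addAll(partition.subList(0, k)); // assumes cases are shuffled
        }
        return sample;
    }
}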

  21. The second phase • Calculate the probability that a test case from each partition is meaningless. • Reorganize the test cases in inverse proportion to the share of meaningless cases each partition produced (e.g. take 80% of the final cases from the partition that produced 20% of the meaningless ones in the first phase). • Run the final test.
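The reallocation rule can be sketched as follows (my reading of the slide, not code from the paper): weight each partition by the inverse of its pre-test meaningless rate, then normalize the weights into shares of the final-phase budget:

public class FinalPhase {
    // Hypothetical sketch: split the final-phase budget across partitions
    // in inverse proportion to each partition's pre-test meaningless rate.
    public static int[] allocate(double[] meaninglessRate, int budget) {
        double[] weight = new double[meaninglessRate.length];
        double total = 0.0;
        for (int i = 0; i < meaninglessRate.length; i++) {
            // Avoid division by zero for partitions with no meaningless cases.
            weight[i] = 1.0 / Math.max(meaninglessRate[i], 1e-6);
            total += weight[i];
        }
        int[] allocation = new int[weight.length];
        for (int i = 0; i < weight.length; i++) {
            allocation[i] = (int) Math.round(budget * weight[i] / total);
        }
        return allocation;
    }
}

With rates of 0.2 and 0.8 for two partitions, this yields roughly an 80%/20% split of the budget, matching the example on the slide.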

  22. Experimental results: a sample class

public class BinaryTree {
  public Node root;
  protected int size;
  public int getSize() { ... }
  public JMLObjectSequence toObjectSet() { ... }
  ...
}

public class Node {
  public Node left;
  public Node right;
  public int ID;
}

// The method under test:
BinaryTree findSubTree(BinaryTree parentTree, Node thisNode)

  23. Testing ASS

MeaninglessCase Advice {
  before(BinaryTree t) : MethodReached(t) :
    parentTree == null || thisNode == null ||
    (forall Node n; parentTree.toObjectSet().has(n); n.ID != thisNode.ID);
}
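For comparison with the generated code on slide 9, here is a sketch of how JAOUT might translate this advice into AspectJ. The slides do not show this particular translation, so the pointcut binding, the MeaninglessCaseException constructor, and the JMLObjectSequence accessors (int_size, itemAt) used to expand the forall quantifier are all assumptions:

// Hypothetical translation sketch of the MeaninglessCase advice above.
before(BinaryTree parentTree, Node thisNode) throws MeaninglessCaseException:
    call(BinaryTree findSubTree(BinaryTree, Node)) && args(parentTree, thisNode) {
  boolean meaningless = (parentTree == null || thisNode == null);
  if (!meaningless) {
    // Expand the forall quantifier: the case is meaningless if no node
    // in the parent tree has the same ID as thisNode.
    boolean found = false;
    JMLObjectSequence nodes = parentTree.toObjectSet();
    for (int i = 0; i < nodes.int_size(); i++) {
      Node n = (Node) nodes.itemAt(i);
      if (n.ID == thisNode.ID) { found = true; break; }
    }
    meaningless = !found;
  }
  if (meaningless) {
    throw new MeaninglessCaseException("No node in parentTree matches thisNode");
  }
}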

  24. Test case generation • We use JMLAutoTest to generate the test case space of type BinaryTree, with trees of five through eight nodes. • We also generate the case space of type Node; it contains 12 nodes whose IDs run from 0 to 11.

  25. Divide the test case space • The test case space of type BinaryTree is not divided; it is left as a single partition. • The space of type Node is divided into two partitions: the first contains the nodes whose IDs range from 0 to 5, and the second contains the rest.
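This operational profile is just a predicate over node IDs; a plain-Java sketch of the split (the helper class is illustrative, not part of JAOUT):

import java.util.ArrayList;
import java.util.List;

public class NodeProfile {
    // Hypothetical helper: split the Node case space into the two
    // partitions above, with the node ID as the profile criterion.
    public static List<List<Node>> partition(Node[] caseSpace) {
        List<Node> lowIds = new ArrayList<>();  // IDs 0..5
        List<Node> highIds = new ArrayList<>(); // IDs 6..11
        for (Node n : caseSpace) {
            if (n.ID <= 5) lowIds.add(n);
            else highIds.add(n);
        }
        List<List<Node>> partitions = new ArrayList<>();
        partitions.add(lowIds);
        partitions.add(highIds);
        return partitions;
    }
}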

  26. A Comparison

  27. Related Work (spec-based testing) • TestEra -- automating OO test generation. [MK01] Mulsaw project, MIT. • JMLUnit -- generating test oracles from JML runtime assertions. [CL02] Iowa State Univ. • Korat -- generating test cases based on Java predicates. [BKM02] Mulsaw project, MIT. • JMLAutoTest -- generating test frameworks from JML runtime assertions, and test cases based on class invariants. [XY03] SEL, ECNU. • Jov -- Java automated unit testing based on inferred program properties. [XN03] Univ. of Washington.

  28. Conclusions • Traditional formal predicates do not deal with the non-functional properties of a program. • AOP is well suited to unit testing problems. • Designers use AOTDL to build application-specific Testing Aspects. • JAOUT translates Testing Aspects into AspectJ aspects automatically. • JAOUT automatically generates the JUnit test framework and uses the runtime messages thrown by Testing ASS as test oracles. • JAOUT uses the double-phase testing approach to filter out meaningless cases.

  29. Thank you… Questions?
