
The Power of Testing in the Distributed Test Architecture and a little more …



Presentation Transcript


  1. The Power of Testing in the Distributed Test Architecture and a little more … Rob Hierons, Brunel University, London, UK Joint work with Prof Ural, University of Ottawa Dagstuhl October 2006

  2. Assumptions • We are interested in testing an FSM implementation against an FSM model. • Assume: everything deterministic. • There is a slow environment: all outputs are received before the next input is sent. • The semantics: just traces.

  3. The power of testing • Assuming M is the specification FSM and MI is an FSM that models the implementation. • Black-box testing is capable of distinguishing M and MI if and only if they are not equivalent. • If M and MI are not equivalent then there exists an input sequence that distinguishes them.
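As a sketch of this observation, the following Python fragment (the representation and names are my own, not from the slides) searches the product of two deterministic, completely specified FSMs breadth-first for a shortest input sequence on which they produce different outputs; it returns None exactly when the two start states are equivalent.

```python
from collections import deque

def distinguishing_sequence(delta, inputs, s0, t0):
    """Shortest input sequence on which two deterministic, completely
    specified FSMs (encoded in one transition map delta:
    (state, input) -> (next_state, output)) produce different outputs
    from states s0 and t0, or None if the states are equivalent."""
    seen = {(s0, t0)}
    queue = deque([(s0, t0, [])])
    while queue:
        s, t, path = queue.popleft()
        for x in inputs:
            (s2, out_s), (t2, out_t) = delta[(s, x)], delta[(t, x)]
            if out_s != out_t:
                return path + [x]
            if (s2, t2) not in seen:
                seen.add((s2, t2))
                queue.append((s2, t2, path + [x]))
    return None
```

Since the search visits each pair of states at most once, a distinguishing sequence of length at most n² is found whenever one exists.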

  4. A distributed test architecture • A system may have several distributed interfaces. • We may have a separate tester at each port. • If these testers cannot interact we have the distributed test architecture. • This causes additional controllability and observability problems if we do not have a global clock.

  5. The architecture • [Diagram: an Upper Tester and a Lower Tester, each connected to its own port of the Implementation Under Test.]

  6. Controllability • Assume that there is no communication between the testers and no global clock. • We cannot apply the following test.
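One common formalisation of controllability can be sketched as follows (the representation is assumed for illustration, not taken from the slides): each test step records which port supplies the input and which ports observe outputs, and a sequence is free of controllability problems (synchronizable) when the tester sending each input took part in the preceding step, so it can tell that it is its turn.

```python
def synchronizable(steps):
    """steps: list of (in_port, out_ports) pairs, where in_port is the
    port that supplies the input and out_ports the set of ports that
    observe an output.  The sequence has no controllability problems if
    the port sending each input either sent the previous input or
    received output in the previous step."""
    for (prev_in, prev_outs), cur in zip(steps, steps[1:]):
        if cur[0] != prev_in and cur[0] not in prev_outs:
            return False
    return True
```

For example, a step at the Upper port producing output only at the Upper port cannot be followed by an input from the Lower tester, which has observed nothing.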

  7. Observability • Assume that there is no communication between the testers and no global clock. • We cannot distinguish between the following:

  8. Overcoming these problems • We can eliminate these problems by introducing an external network through which the testers can communicate. • But: • This can increase the cost of testing. • Timing constraints can cause additional problems. • So we may want to use test sequences that have no observability or controllability problems.

  9. Distinguishing states • Input sequence x distinguishes states s and s’ if and only if it leads to different output sequences from these states. Instead we need: • x causes no controllability problems from s and s’ • x leads to different sequences of interactions, for s and s’, at some port. • We say that x locally distinguishes s and s’. • If no input sequence locally distinguishes s and s’ they are locally equivalent.
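The second condition can be sketched in Python (a hypothetical encoding, not from the slides): run the input sequence from each state and compare, at every port, the sequence of events that port observes; controllability of the sequence is assumed here rather than checked.

```python
def locally_distinguishes(delta, ports, s1, s2, xs, port_of_input):
    """Check whether input sequence xs yields different sequences of
    observable events at some port when applied from s1 and from s2.
    delta maps (state, input) -> (next_state, outputs), where outputs
    maps each port to the output it sees (absent = no output there);
    port_of_input gives the port at which each input is applied."""
    def projection(state, port):
        events = []
        for x in xs:
            state, outs = delta[(state, x)]
            if port_of_input[x] == port:
                events.append(('in', x))
            if outs.get(port) is not None:
                events.append(('out', outs[port]))
        return events
    return any(projection(s1, p) != projection(s2, p) for p in ports)
```

A port that sees the same inputs and outputs in the same order from both states learns nothing, even if the global output sequences differ.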

  10. Distinguishing machines • To distinguish machines M (spec) and MI (IUT) we need to ‘distinguish’ their initial states. • Thus, in our architecture, we have to locally distinguish their initial states.

  11. Testing is weaker • [Diagram: a four-state, two-port FSM with states s1–s4 and transitions labelled xU/(yU, -) and xL/(-, yL).] • We cannot locally distinguish s1 and s4, but xUxL distinguishes them.

  12. Distinguishing two states • Given states s1 and s2 of a k-port FSM M with n states and port p: • s1 and s2 are locally distinguishable by an input sequence starting at p if and only if they are locally distinguished by some such input sequence of length at most k(n-1). • This bound is ‘tight’. • The sequences can be found in low-order polynomial time.

  13. Distinguishing all states • A ‘complete’ set of input sequences that locally distinguish the locally distinguishable states of M can be found in O(pn²), where p is the size of the input alphabet.

  14. Local minimality • An FSM is locally minimal if it has no locally equivalent states. • A minimal FSM need not be locally minimal. • If M is not locally minimal and is to be used in this architecture (with no global clock) we cannot distinguish M from some smaller FSM. • So perhaps we should use the smaller FSM instead?

  15. Producing a locally minimal FSM • We can simply merge locally equivalent states: if s and s’ are locally equivalent we replace one by the other. • Thus, we can achieve this in polynomial time, O(pn²). • Note – there is an O(pn log n) algorithm for minimizing an FSM.
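The merging step can be sketched as follows (a minimal illustration; computing which pairs are locally equivalent is a separate step, and for genuinely locally equivalent states the redirected transitions agree up to local observation). A union-find structure picks one representative per class and every transition is redirected to representatives.

```python
def merge_states(delta, equiv_pairs):
    """Collapse states known to be locally equivalent.  delta maps
    (state, input) -> (next_state, output); equiv_pairs lists pairs of
    locally equivalent states."""
    parent = {}
    def find(s):
        parent.setdefault(s, s)
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path halving
            s = parent[s]
        return s
    for a, b in equiv_pairs:
        parent[find(a)] = find(b)
    merged = {}
    for (s, x), (t, out) in delta.items():
        merged[(find(s), x)] = (find(t), out)
    return merged
```

Because the choice of representative is arbitrary, different runs can produce different (locally equivalent) machines, which is exactly the non-confluence noted on the next slide.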

  16. There are choices • ‘State merging’ is not confluent. • We can have many FSMs that are locally equivalent to M. • Question – is there some ‘natural choice’?

  17. Canonical FSMs • Given FSM M, can we find: • A maximal FSM that is locally equivalent to M? • A minimal FSM that is locally equivalent to M? • Can we find them efficiently?

  18. Observation • There are many notions of equivalence in the LTS literature. • However, these reduce to isomorphism when considering minimal, deterministic, completely-specified FSMs. • Local equivalence does not. • Can we combine it with e.g. ioco, tioco?

  19. Test generation • A common test hypothesis when testing from FSM M: • for some predetermined m, the implementation I behaves like an unknown FSM MI that has the same input and output alphabets as M and has no more than m states. • Testing is seen as trying to decide whether MI is equivalent to M. • Can we instead use local equivalence?

  20. Avoiding the problem • Maybe we can apply sequences that are not synchronizable. We get a form of non-determinism. • For example: [Diagram: two states s and s′ with transitions labelled x/… and y/….] • Question – how can we formalise this for test generation?

  21. Open questions • These include the following: • Can we tailor test generation algorithms to local equivalence? • Are there sensible conditions under which local equivalence and equivalence ‘converge’? • Canonical locally equivalent (and locally minimal?) FSMs? • What about LTS models? Time?

  22. The ordering of adaptive test cases for deterministic implementations

  23. Adaptive test cases • [Diagram: an adaptive test case drawn as a tree, with inputs a and b at the nodes and branches labelled by the observed outputs 0 and 1.] • The next input applied depends upon the input/output that has been observed. • We reset between adaptive test cases.

  24. Example of possible saving • [Diagram: two adaptive test cases drawn as trees over inputs a and b, with branches labelled by outputs 0 and 1.] • Here the behaviour a/0, a/0 for the first adaptive test case tells us that we will get response a/0 to the second.

  25. So • We can have adaptive test cases σ1 and σ2 such that: • There is some possible response to σ1 that determines the response of I to σ2. • There may be other possible responses that don’t do this. • We denote this σ2 ≺ σ1. • If we apply σ1 before σ2 then we may not have to use σ2.

  26. Consequence • The expected cost of testing depends upon the order in which the adaptive test cases are to be applied. • Question: how can we find an order that minimises the expected cost of testing?

  27. Deciding ≺ • σ2 ≺ σ1 if and only if sav(σ1, σ2) where: sav(σ, null) := true; sav(null, (x, f)) := false; sav((x1, f1), (x2, f2)) := (x1 = x2) ∧ ∀y. sav(f1(y), f2(y)) • Good news: this requires linear time.
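The recursion translates directly into Python under an assumed representation of adaptive test cases: None for the null case, and a pair (x, f) meaning "apply input x, then continue with f[y] after observing output y" (both trees are assumed to branch over the same outputs).

```python
def sav(t1, t2):
    """sav(t1, t2) holds when every response of the implementation to
    adaptive test case t1 determines its response to t2.  A test case
    is None (null) or a pair (x, f): apply input x, then continue with
    f[y] after observing output y."""
    if t2 is None:                      # sav(sigma, null) = true
        return True
    if t1 is None:                      # sav(null, (x, f)) = false
        return False
    (x1, f1), (x2, f2) = t1, t2
    return x1 == x2 and all(sav(f1[y], f2[y]) for y in f2)
```

Each node of t2 is visited at most once against the corresponding node of t1, which is the linear-time bound stated above.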

  28. The relation ≺ is not transitive • [Diagram: three adaptive test cases over inputs a and b, with branches labelled by outputs 0 and 1, witnessing that ≺ is not transitive.]

  29. The dependence digraph • Given a set Σ = {σ1, …, σn} of adaptive test cases the dependence digraph G = (V, E) is: • V = {v1, …, vn} • There is an edge from vi to vj if and only if σj ≺ σi.
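With sav from slide 27 (restated here so the sketch is self-contained, under the same assumed tree representation), the digraph can be built by testing every ordered pair:

```python
def sav(t1, t2):
    # t is None (null) or (input, {output: subtree}), as on slide 27
    if t2 is None:
        return True
    if t1 is None:
        return False
    return t1[0] == t2[0] and all(sav(t1[1][y], t2[1][y]) for y in t2[1])

def dependence_digraph(tests):
    """Edge i -> j iff sigma_j is saved by sigma_i, i.e.
    sav(tests[i], tests[j]) holds (sigma_j < sigma_i)."""
    n = len(tests)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and sav(tests[i], tests[j])}
```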

  30. Solving in terms of the dependence digraph • If we consider only G then the optimal order is: • The order that minimises the number of edges that ‘point backwards’. • This is an instance of the Feedback Arc Set (FAS) problem.

  31. Complexity • The FAS problem is NP-hard. • Based on this it is not too difficult to prove that the problem of finding the optimal ordering is NP-hard.

  32. Problems • We will consider: • Reducing the size of the optimisation problem: • Merging adaptive test cases. • Dividing the problem. • Producing a ‘good’ ordering based on the dependence digraph.

  33. Merging adaptive test sequences • [Diagram: two adaptive test cases over inputs a and b that agree at their common nodes and so can be merged.] • These can be merged. • Linear complexity.
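Under the tree representation assumed earlier (None = null, (x, f) = apply x then branch on the output), a merge can be sketched recursively: where one case is null the other is kept, and where both prescribe an input the inputs must agree and the branches are merged pointwise.

```python
def merge(t1, t2):
    """Merge two compatible adaptive test cases.  Raises ValueError if
    the cases disagree on the input to apply at some reachable node."""
    if t1 is None:
        return t2
    if t2 is None:
        return t1
    (x1, f1), (x2, f2) = t1, t2
    if x1 != x2:
        raise ValueError("incompatible adaptive test cases")
    return (x1, {y: merge(f1[y], f2[y]) for y in f1})
```

Each node is visited once, matching the linear complexity claimed on the slide.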

  34. Observation • There may be more than one possible result of merging elements of a set of adaptive test cases. • Question: how can we do this in the ‘best’ way? • A problem for future work.

  35. Independent adaptive test cases • We might remove the direction of the edges in the dependence digraph to form G’. • Then two adaptive test cases i and j are said to be independent if and only if there is no path from vi to vj in G’.

  36. Reducing the size of the problem • We can: • Merge compatible adaptive test cases. • Separately consider the classes of independent adaptive test cases. • Result: • these two approaches do not ‘conflict’.

  37. A special case: acyclic dependence digraph • This is easy: • We find an ordering based on a DAG.
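For the acyclic case, a topological sort gives an ordering with no backward edges, so each test case is applied before every test case it might make redundant (a standard Kahn-style sketch; vertex numbering is assumed):

```python
def ordering(n, edges):
    """Topological order of an acyclic dependence digraph on vertices
    0..n-1, where edge (i, j) means applying test i first may save
    test j; each test is scheduled before all tests it can save."""
    succ = {i: [] for i in range(n)}
    indeg = {i: 0 for i in range(n)}
    for i, j in edges:
        succ[i].append(j)
        indeg[j] += 1
    order = []
    ready = [i for i in range(n) if indeg[i] == 0]
    while ready:
        v = ready.pop()
        order.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return order
```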

  38. Where G contains cycles • We can: • Solve the FAS problem to find some feedback arc set A. • Let G’ = (V, E\A). • Now solve the ordering problem on the acyclic digraph G’. • However, this ignores information that may be useful.

  39. Non-deterministic implementation • We can produce results by either: • Making a fairness assumption • Assuming that all possible observations have at least a given probability • Making no assumptions • The stronger the assumptions made, the greater the potential for reducing the cost of testing.

  40. Further work • The following problems have yet to be investigated: • Infinite adaptive test cases (i.e. not finite trees). • Choosing the order in which to merge. • On-the-fly techniques. • Optimisation when considering a wider range of sources of information. • Timed or distributed adaptive test cases. • Empirical studies.

  41. Conclusions • We have seen results regarding: • The cost of applying adaptive test cases. • The power of testing in the distributed test architecture. • In each case there is still much to be done! • Question: is this problem different for open and reactive systems?
