
The hangman problem … the final challenge

The hangman problem … the final challenge. The rules of hangman have changed with respect to words that contain the same letter multiple times. Instead of all instances of the letter being revealed when the letter is played, only a single instance needs to be displayed.

osric


Presentation Transcript


1. The hangman problem … the final challenge
• The rules of hangman have changed with respect to words that contain the same letter multiple times.
• Instead of all instances of the letter being revealed when the letter is played, only a single instance needs to be displayed.
• For example, in the word banana, if the player plays the letter ‘a’ as the first move of the hangman game then the game could display either:
• -a---- or
• ---a-- or
• -----a
Your task is to analyse your design and identify which classes/methods need to be updated in order to correctly implement these new rules. Which tests need to be updated? Which can be re-executed? Which are now obsolete? Note this analysis somewhere for future use, but do not share the information with your colleagues.
CSC7302: Testing & Metrics
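The changed rule can be captured in a few lines. The sketch below is illustrative only (class and method names are invented, not taken from any student's design), and it deterministically reveals the first hidden occurrence, which is one of the valid choices the slide allows.

```java
import java.util.Arrays;

// Sketch of the new rule: when a played letter occurs several times in the
// word, reveal at most ONE hidden occurrence per play (here: the first one).
public class HangmanReveal {
    private final String word;
    private final char[] display;

    public HangmanReveal(String word) {
        this.word = word;
        this.display = new char[word.length()];
        Arrays.fill(display, '-');   // all letters start hidden
    }

    /** Reveals at most one hidden occurrence of letter; returns true if one was revealed. */
    public boolean play(char letter) {
        for (int i = 0; i < word.length(); i++) {
            if (word.charAt(i) == letter && display[i] == '-') {
                display[i] = letter; // reveal a single instance, then stop
                return true;
            }
        }
        return false;                // letter absent, or all instances already shown
    }

    public String shown() {
        return new String(display);
    }

    public static void main(String[] args) {
        HangmanReveal game = new HangmanReveal("banana");
        game.play('a');
        System.out.println(game.shown()); // -a----
        game.play('a');
        System.out.println(game.shown()); // -a-a--
    }
}
```

Under the old rules `play` would loop over every matching index without the early `return`; the design question in the exercise is how localised that change is in your own code, and which tests it invalidates.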

2. The hangman problem … the final challenge
• Now, take the hangman code of a colleague:
• Implement the new rules by changing their code
• Implement the new tests by changing their code
• Execute the tests
• How good was their design?
• Did you have to change something that was not anticipated by the original developer?
• Did you change something that you did not have to change?
• Which metrics have changed as a result of this update?

3. The hangman problem … the final challenge
• Tool/Method Analysis:
• Which tools/methods did you find most useful in the development of the hangman?
• What sort of things were you doing that you think could/should be automated?
• Repetitive
• Simplistic
• What sort of support would you like for building quality code that you could imagine being possible (in the future)?
• What have you learned about metrics and tests?

4. Advanced Testing Tools
• Automated Testing Tools:
• Configuration management and continuous integration
• Functional testing
• Regression testing
• GUI testing
• Tracing, profiling and performance monitoring
• Automatic Test Generation:
• Symbolic Execution
• Model-Based Testing
• Mutation Testing
• Random Testing
• What about testing with other verification techniques?
• Model Checking
• Theorem proving and refinement …

5. Automated Testing Tools
When Should a Test Be Automated? Brian Marick, Testing Foundations, 1998, marick@testing.com
• There are hundreds of tools/plugins:
• IDE specific
• System type specific
• Problem domain specific
• Language specific
When starting on an industrial project, learn what is included in the build tools.

6. Automatic Test Generation
Symbolic Execution
• A type of abstract interpretation
• Dynamic analysis of programs by tracking symbolic rather than actual values
• Analysis is path-based – advantages and disadvantages
• It is found in many advanced testing tools (like Java PathFinder – http://javapathfinder.sourceforge.net/)
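To make the idea concrete, here is a deliberately tiny, hand-worked illustration (not a real symbolic executor): for a two-path method, symbolic execution tracks a constraint on the symbolic input x, the path condition, for each feasible path, and solving each condition yields one concrete test per path. The class and method names are invented for this sketch; real tools such as Java PathFinder do this automatically with a constraint solver.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of symbolic execution: enumerate the paths of abs() and
// the constraint on symbolic input x that drives execution down each path.
public class SymbolicSketch {
    // The concrete program under analysis: two feasible paths.
    public static int abs(int x) {
        if (x < 0) return -x;   // path condition: x < 0
        return x;               // path condition: x >= 0
    }

    // Hand-built result of "executing" abs symbolically: one entry per path.
    public static List<String> pathConditions() {
        List<String> paths = new ArrayList<>();
        paths.add("x < 0  -> result is -x");
        paths.add("x >= 0 -> result is x");
        return paths;
    }

    public static void main(String[] args) {
        // Solving each path condition (e.g. x = -5 and x = 5) gives a test
        // suite that covers every path of abs.
        System.out.println(pathConditions());
        System.out.println(abs(-5) + " " + abs(5)); // 5 5
    }
}
```

The path-based nature mentioned above shows even here: two paths means two conditions, but path counts grow exponentially with branching and loops, which is the main scalability disadvantage.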

7. Automatic Test Generation
Symbolic Execution
L. A. Clarke. 1976. A System to Generate Test Data and Symbolically Execute Programs. IEEE Trans. Softw. Eng. 2, 3 (May 1976), 215-222.
James C. King. 1976. Symbolic execution and program testing. Commun. ACM 19, 7 (July 1976), 385-394.
P. David Coward. 1988. Symbolic execution systems – a review. Softw. Eng. J. 3, 6 (November 1988), 229-239.
Also see: http://sites.google.com/site/symexbib/ – A Bibliography of Papers on Symbolic Execution Technique and its Applications.

8. Automatic Test Generation
Model-Based Testing
B. Korel. 1990. Automated Software Test Data Generation. IEEE Trans. Softw. Eng. 16, 8 (August 1990), 870-879.
S. R. Dalal, A. Jain, N. Karunanithi, J. M. Leaton, C. M. Lott, G. C. Patton, and B. M. Horowitz. 1999. Model-based testing in practice. In Proceedings of the 21st International Conference on Software Engineering (ICSE '99). ACM, New York, NY, USA, 285-294.
Arilo C. Dias Neto, Rajesh Subramanyan, Marlon Vieira, and Guilherme H. Travassos. 2007. A survey on model-based testing approaches: a systematic review. In Proceedings of the 1st ACM International Workshop on Empirical Assessment of Software Engineering Languages and Technologies, held in conjunction with the 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE) 2007.

9. Automatic Test Generation
Model-Based Testing
• You should know about techniques for your favourite languages:
• Modelling languages:
• Event-B
• UML
• Implementation languages:
• Java
• C/C++

10. Automatic Test Generation
Mutation Testing
Mutation testing concludes with an adequacy score, known as the mutation score, which indicates the quality of the input test set. The mutation score (MS) is the ratio of the number of killed mutants over the total number of non-equivalent mutants. The goal of mutation analysis is to raise the mutation score to 1, indicating the test set T is sufficient to detect all the faults denoted by the mutants.
An Analysis and Survey of the Development of Mutation Testing, Yue Jia and Mark Harman, IEEE Transactions on Software Engineering, vol. 37, no. 5, pp. 649-678, Sept.-Oct. 2011.
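The ratio defined on the slide is simple to compute; a minimal sketch, with illustrative names:

```java
// Mutation score as defined on the slide:
// MS = killed / (total mutants - equivalent mutants).
public class MutationScore {
    public static double score(int killed, int totalMutants, int equivalentMutants) {
        int nonEquivalent = totalMutants - equivalentMutants;
        if (nonEquivalent <= 0) {
            throw new IllegalArgumentException("need at least one non-equivalent mutant");
        }
        return (double) killed / nonEquivalent;
    }

    public static void main(String[] args) {
        // 45 mutants killed out of 50 generated, 2 of which are equivalent
        // (no test can ever kill them, so they are excluded from the denominator):
        System.out.println(score(45, 50, 2)); // 45 / 48 = 0.9375
    }
}
```

Note that equivalent mutants must be excluded from the denominator precisely because no test can distinguish them from the original program, so a score of 1 remains reachable.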

11. Automatic Test Generation
Random Testing
• Writing unit tests is often tedious, difficult and time consuming; thus many software engineers have developed techniques and tools for automatically generating random unit tests.
• There are advantages and disadvantages to this: can you think of them?
• For Java, there are a number of free tools. Consider, for example:
• Randoop
• JCrasher
• Eclat
• Jartege
• If you want to try one of these out, there is a simple Eclipse plugin for Randoop: http://randoop.googlecode.com/hg/plugin/doc/index.html
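The core idea behind these tools can be sketched in a few lines: fire random inputs at the code under test and check a property (an oracle) on each result. This is only the naive loop, with invented names; Randoop and the others add feedback-directed generation, crash detection and test-case minimisation on top of it.

```java
import java.util.Random;

// Naive random testing: random inputs plus an oracle. A fixed seed keeps
// any failure reproducible, which is essential for debugging.
public class RandomTester {
    // Oracle under test: Math.abs should never return a negative value.
    // (Integer.MIN_VALUE is the well-known exception, since -MIN_VALUE
    // overflows back to MIN_VALUE; random testing can stumble on exactly
    // this kind of corner case.)
    public static boolean holds(int x) {
        return Math.abs(x) >= 0 || x == Integer.MIN_VALUE;
    }

    // Run `trials` random inputs; return the first counterexample, or null.
    public static Integer findCounterexample(int trials, long seed) {
        Random rng = new Random(seed);
        for (int i = 0; i < trials; i++) {
            int x = rng.nextInt();
            if (!holds(x)) return x;
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(findCounterexample(10_000, 42L)); // null: no counterexample
    }
}
```

The sketch also exposes the classic trade-off from the slide: generation is cheap, but you only find what your oracle can detect, and rare corner cases need either many trials or smarter input selection.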

12. Automatic Test Generation
Random Testing: some additional reading
Duran, Joe W.; Ntafos, Simeon C. An Evaluation of Random Testing. Software Engineering, IEEE Transactions on, vol. SE-10, no. 4, pp. 438-444, July 1984.
Christoph Csallner and Yannis Smaragdakis. JCrasher: an automatic robustness tester for Java. Softw. Pract. Exper. 34, 11 (September 2004), 1025-1050.
Patrice Godefroid, Nils Klarlund, and Koushik Sen. DART: directed automated random testing. In Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '05).
Catherine Oriat. Jartege: A Tool for Random Generation of Unit Tests for Java Classes. In Quality of Software Architectures and Software Quality, Lecture Notes in Computer Science 2005, Volume 3712/2005, 242-256.
Dick Hamlet. 2006. When only random testing will do. In Proceedings of the 1st International Workshop on Random Testing (RT '06).
Carlos Pacheco and Michael D. Ernst. 2007. Randoop: feedback-directed random testing for Java. In Companion to the 22nd ACM SIGPLAN Conference on Object-Oriented Programming Systems and Applications Companion (OOPSLA '07).

13. Model Checking
Kenneth Lauchlin McMillan. 1992. Symbolic Model Checking: An Approach to the State Explosion Problem. Ph.D. Dissertation. Carnegie Mellon Univ., Pittsburgh, PA, USA. UMI.
E. Clarke. 1997. Model Checking.
Edmund M. Clarke, E. Allen Emerson, and Joseph Sifakis. 2009. Model checking: algorithmic verification and debugging. Commun. ACM 52, 11 (November 2009), 74-84.

14. Theorem Proving and Testing
• QUESTION: What are the advantages and disadvantages of dynamic vs. static analysis approaches to program verification?
• Combining dynamic and static methods for analyzing programs:
• Could/should we get the best of both worlds?
Greta Yorsh, Thomas Ball, and Mooly Sagiv. 2006. Testing, abstraction, theorem proving: better together! In Proceedings of the 2006 International Symposium on Software Testing and Analysis (ISSTA '06). ACM, New York, NY, USA, 145-156.
