
CAMELOT: A Testing Methodology for Computer Supported Cooperative Work


Presentation Transcript


  1. CAMELOT: A Testing Methodology for Computer Supported Cooperative Work 36th Hawaii International Conference on System Science Experimental Software Engineering Track January 7, 2003 Kona, Big Island, Hawaii Robert F. Dugan Jr. Department of Computer Science Stonehill College Easton, MA 02357 USA bdugan@stonehill.edu Ephraim P. Glinert, Edwin H. Rogers Department of Computer Science Rensselaer Polytechnic Institute Troy, NY 12180 USA {glinert,rogerseh}@cs.rpi.edu

  2. What Is CSCW? “Computer-based systems that support groups of people engaged in a common task and that provide an interface to a shared environment” [Ellis91] Examples: e-mail, newsgroups, chat, CSCL, meeting support, shared windows, multiuser editing

  3. CSCW Characterization Four interacting areas: Human-Human Interaction, Human-Computer Interaction, Distributed Systems, and Network Communication. Concerns spanning these areas include floor control, coupling, awareness, iterative design, undo/redo, real-time CSCW, session management, synchronization, IPC, performance, and reliability.

  4. Motivation Development cycle: Develop → Test (Human-Human Interface, Human-Computer Interface) → Rework

  5. Overview • Motivation • Survey of Testing • Methodology • Evaluation • Limitations/Future Work

  6. Goals of Testing • Correct behavior • Utility • Reliability • Robustness • Performance

  7. Research Testing • Early stages of life cycle (Requirements, Functional Specification, Design) • Cost rises deeper into life cycle • Problems scaling to large development efforts • Problem space complex • Requirements in flux • Verification cost exceeds implementation cost • Example:

  8. Commercial Testing • Later stages of life cycle (Implementation, Integration, System Test) • Less expensive to create individual tests • Use standard communication APIs (GUI events, HTTP, RMI, etc.) • Capture/replay communication to drive application execution • Problems: • Fragile: if application communication changes, so must the test case • Late life cycle means problems are uncovered when they are more costly to fix • Only rudimentary guidelines for use • Example: Mercury Interactive WinRunner
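
To make the capture/replay idea concrete, here is a minimal sketch of a replay driver built on the standard java.awt.Robot class; the recorded coordinates and keystrokes are hypothetical placeholders, not the output of any particular commercial tool.

```java
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

/**
 * Minimal replay driver: feeds a previously captured sequence of
 * mouse/keyboard events back into the running application.
 * Coordinates and keystrokes here are hypothetical placeholders.
 */
public class ReplayDriver {
    public static void main(String[] args) throws AWTException {
        Robot robot = new Robot();
        robot.setAutoDelay(250);            // pause between replayed events

        // "Captured" script: click a button, then type a short chat message.
        robot.mouseMove(120, 340);          // move to the (hypothetical) Send button
        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);

        type(robot, KeyEvent.VK_H);         // type "hi"
        type(robot, KeyEvent.VK_I);
        type(robot, KeyEvent.VK_ENTER);
        // A real tool would now compare application state against
        // the state recorded during capture.
    }

    private static void type(Robot robot, int keyCode) {
        robot.keyPress(keyCode);
        robot.keyRelease(keyCode);
    }
}
```

The hard-coded coordinates also illustrate the fragility noted above: if the button moves or the dialog layout changes, the recorded script must be captured again.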

  9. CAMELOT: CSCW Application MEthodoLOgy for Testing

  10. General Computing • Implementation [Myers79] • Integration [Schach90] • System test [Myers79] Example guidelines (Camelot Code, Cycle, Description): GC.IM.1, Implementation, Functional Test; GC.ST.12, System Test, Procedure Test; GC.ST.13, System Test, Acceptance Test
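
As an illustration of how a general-computing guideline could become a ready-to-run automated test, here is a minimal JUnit sketch of a GC.IM.1-style functional test; ChatSession is a hypothetical stand-in class, since CAMELOT prescribes what to check rather than any particular harness.

```java
import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

/**
 * Functional (implementation-level) test in the spirit of GC.IM.1:
 * verify one unit of behavior in isolation, before integration.
 * ChatSession below is a hypothetical stand-in for real CSCW code.
 */
public class ChatSessionTest {

    /** Hypothetical minimal chat model, included only so the test compiles. */
    static class ChatSession {
        private final List<String> history = new ArrayList<>();
        ChatSession(String user) { }
        void send(String text) { history.add(text); }
        List<String> history() { return history; }
    }

    @Test
    public void sentMessageAppearsInLocalHistory() {
        ChatSession session = new ChatSession("alice");
        session.send("hello");
        assertEquals(1, session.history().size());
        assertEquals("hello", session.history().get(0));
    }
}
```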

  11. Human Computer Interaction • Intersection with General Computing [Yip91] • Usability criteria [Shneiderman97] • Golden rules [Shneiderman97] • User interface technology [Shneiderman97] Example guidelines (Camelot Code, Category, Description): HCI.UC.1, Usability Criteria, Time to learn system: how long does it take for a typical user to learn to use the system?; HCI.UITG.9, User Interface Technology Guidelines, Error Messages; HCI.UITG.10, User Interface Technology Guidelines, Color

  12. Distributed Computing • Scalability • Synchronization • Race conditions • Deadlock • Temporal consistency • GC & HCI intersections Example guidelines (Camelot Code, Category, Description): DC.RC.1, Race Condition, Centralized Architecture; DC.RC.2, Race Condition, Decentralized Architecture; DC.S.8 and DC.S.9, Scalability, Loosely Coupled
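
A hedged sketch of how a DC.RC-style race-condition guideline might be automated: many simulated users edit a shared object at the same instant and the probe checks that no update is lost. SharedDocument is a hypothetical stand-in, left deliberately unsynchronized so a lost-update failure is visible; a real test would drive the application's own shared model.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Race-condition probe in the spirit of the DC.RC guidelines: many
 * simulated users apply edits concurrently and the probe checks that
 * no edit is lost.  SharedDocument is a hypothetical stand-in for a
 * shared CSCW object (e.g. a multiuser editor's model), deliberately
 * unsynchronized so the probe usually reports lost updates.
 */
public class RaceConditionProbe {

    static class SharedDocument {
        private int edits = 0;                      // not thread-safe on purpose
        void applyEdit() { edits++; }
        int editCount()  { return edits; }
    }

    public static void main(String[] args) throws InterruptedException {
        final int users = 50, editsPerUser = 1_000;
        SharedDocument doc = new SharedDocument();
        CountDownLatch start = new CountDownLatch(1);
        ExecutorService pool = Executors.newFixedThreadPool(users);

        for (int u = 0; u < users; u++) {
            pool.execute(() -> {
                try { start.await(); } catch (InterruptedException ignored) { }
                for (int i = 0; i < editsPerUser; i++) doc.applyEdit();
            });
        }
        start.countDown();                          // release all simulated users at once
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);

        int expected = users * editsPerUser;
        System.out.printf("expected %d edits, observed %d -> %s%n",
                expected, doc.editCount(),
                expected == doc.editCount() ? "PASS" : "FAIL (lost updates)");
    }
}
```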

  13. Human-Human Interaction • Communication • Coordination • Coupling • Security • Awareness Example guidelines (Camelot Code, Description): HHI.CM.1, Communication: network bandwidth sufficient to support user communication; DC/HHI.5, Distributed computing scalability tests, derived from (DC.S ^ HHI.CP) -> DC/HHI.5; DC/HHI.6, Distributed computing temporal consistency tests, derived from (DC.TC ^ HHI.CP) -> DC/HHI.6
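
A minimal sketch of how HHI.CM.1 (is network bandwidth sufficient to support user communication?) could be probed automatically: chat-sized messages are echoed over a socket and the worst round-trip time is compared against a conversational threshold. The loopback echo server, the 200-byte message size, and the 100 ms threshold are illustrative assumptions, not values from the paper.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

/**
 * Communication probe in the spirit of HHI.CM.1: check whether the link
 * can echo chat-sized messages fast enough for conversation.  The local
 * echo server, message size, and 100 ms threshold are assumptions.
 */
public class BandwidthProbe {

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {         // loopback echo server
            new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) out.println(line);
                } catch (Exception ignored) { }
            }).start();

            String message = "x".repeat(200);                     // chat-sized payload
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {

                long worstMs = 0;
                for (int i = 0; i < 100; i++) {                   // 100 simulated chat messages
                    long t0 = System.nanoTime();
                    out.println(message);
                    in.readLine();
                    worstMs = Math.max(worstMs, (System.nanoTime() - t0) / 1_000_000);
                }
                System.out.println("worst round trip: " + worstMs + " ms -> "
                        + (worstMs < 100 ? "PASS" : "FAIL") + " for HHI.CM.1");
            }
        }
    }
}
```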

  14. Evaluation: RCN Architecture diagram showing an ISServer, an RCNPublicServer, and multiple rcnClient instances.

  15. Evaluation: RCN • Rensselaer Collaborative Network • Characteristic of CSCW Software • Face-to-face, Synchronous, Meeting Support • Group Management, Chat, Shared Windowing • Floor Control, Asynchronous Multiuser Editing • Mature Application • Unit, System, User Acceptance, Daily Use Tested • Development team considered it bug-free • Development offered to deliberately introduce bugs! • Amenable to Rebecca-J • Java • Source code available

  16. Evaluation Taxonomy applied: Single User (General Computing, Human Computer Interaction) and Multi-User (Distributed Computing, Human-Human Interaction)

  17. Evaluation “i don't think any tester would have ever discovered that. simply for discovering that, i consider rebecca a success.” - J.J. Johns, lead developer for RCN

  18. Limitations/Future Work • Single system evaluated • No formal testing methodology used by RCN team • Subset of CSCW technology • Large number of guidelines • Lack of ready-to-run test cases • Testing domains for specific technologies • Example: Chat Domain • Not a complete evaluation

  19. CAMELOT: Discussion • Comparison to existing methodologies • SSM [Checkland89] • PETRA [Ross et al. 95] • SESL [Ramage99] • ECW Methodology [Drury et al. 99] • Part of a complete evaluation • Correct ordering of the evaluation is important: technical aspects before social aspects

  20. Conclusion We defined CAMELOT, a methodology for testing CSCW applications. • Our methodology improves on prior art by providing a detailed focus on CSCW technology. • We have created a CSCW software taxonomy: single-user components (general computing and human-computer interaction) and multiuser components (distributed computing and human-human interaction). • For each component we have identified explicit validation techniques that can be used in both manual and automated testing. • Further, our techniques exploit intersections between the components to improve bug detection.
