
An Approach to Large-Scale Collection of Application Usage Data Over the Internet

Presentation Transcript


  1. An Approach to Large-Scale Collection of Application Usage Data Over the Internet David M. Hilbert, David F. Redmiles Information and Computer Science University of California, Irvine Irvine, California 92697-3425 {dhilbert,redmiles}@ics.uci.edu http://www.ics.uci.edu/pub/eden/

  2. Overview • Motivation • Problem • Solution • Approach - Expectation-driven event monitoring (EDEM) • Usage Scenario • Agent Authoring and Architecture • Conclusions

  3. Motivation • Successful use of an interactive application depends on the behavior of, and interactions between: • an application • its users • the environment in which it is embedded • Such factors are typically too complex, dynamic, and poorly understood to model effectively. • Thus, empirical evaluation of software systems in actual use is critical.

  4. Motivation (cont’d) • Developers and managers need answers to empirical questions such as: • how is this application being used? • does actual usage conform to expectations? • which features, if modified, added, or deleted, are most likely to impact application utility, usability, and productivity? • which features warrant more or less development and testing effort? • how thoroughly has beta testing exercised relevant features? • which beta testers are worth contacting for more information? • how can the design be improved to better match actual usage?

  5. Problem • Prototyping, beta testing, usability testing (and other iterative design techniques) help: • refine system requirements, detect erroneous or unexpected system or user behavior, and evaluate utility and usability • Unfortunately: • traditional usability testing is limited in size, scope, location, and duration • beta testers collect data manually and lack proper incentives, so data quality and quantity are sacrificed • Furthermore: • rapid and distributed deployment of systems (e.g., over the Internet) reduces opportunities for traditional user testing while increasing the variety of use situations and the number of end users.

  6. Solution • Expectation-Driven Event Monitoring (EDEM) enables developers to easily and cheaply benefit from usage information. • Developers identify usability expectations and create agents to monitor user interactions; agents are deployed with applications. • Agents detect mismatches between expected and actual system use. • Agents monitor use passively or allow users to provide feedback. • Agents support purposeful redesign of prototypes.

  7. Approach • Expectation-driven event monitoring (EDEM)

  8. Expectations

  9. Expectations in Development • Developers have usage expectations that strongly influence design decisions. • Developers' usage expectations are based on: • knowledge of requirements • knowledge of the application domain • knowledge of specific user tasks, practices, and work environments • past experience developing systems • past experience using applications themselves • Developers' usage expectations impact the appropriateness and usability of their designs: • accurate expectations => good designs • inaccurate expectations => poor designs

  10. Characteristics of Expectations • Some usage expectations are represented explicitly. • e.g. those specified as requirements or in "use cases" • Most usage expectations are implicit. • e.g. those encoded in window layout, toolbar and menu design, key assignments, and user interface libraries. • Example usage expectations: • users complete forms from left to right and top to bottom. • frequently used features are easy to access, recognize, and use. • Because most usage expectations are not represented explicitly, they often: • fail to be tested adequately • fail to be explicitly recognized by developers

  11. Resolving Mismatches • Detecting and resolving mismatches between developers' expectations and actual usage can help improve: • design, automation, on-line help, training, and use. • Once mismatches are detected, they may be resolved in one of two ways: • Developers may modify their expectations to better match actual use, thus refining system requirements and eventually improving the design. • Users may learn about developers' expectations, thus learning how to use the existing system more effectively.

  12. Usage Scenario

  13. Usage Scenario • Monitoring critical sequences of actions in a cargo query form

  14. Usage Scenario • A hypothetical phone service provisioning form

  15. Agent Notification (Optional) • Agents may post messages

  16. User Response (Optional) • Users may provide feedback

  17. Repository for Review • Agent-reported data and user feedback are stored for review

  18. Authoring Agents • This agent fires when a user edits the City or State fields while the Zip field is empty • (Screenshot callouts: Authored Agents, Trigger, Guard, Actions)

  19. Selecting Events • Developer expresses interest in detecting when the user begins editing the State field

  20. EDEM Configuration • Agent specifications downloaded from URL • Agent-collected data and user feedback reported via email
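
A minimal sketch of the first of these configuration steps, assuming a line-oriented agent-spec format and an illustrative URL; EDEM's actual spec format and loader are not shown, and the e-mail reporting path is omitted.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

// Sketch only: agent specifications are fetched from a URL at startup.
// The URL and the one-spec-per-line format are assumptions for illustration.
public class AgentSpecLoader {
    public static List<String> load(String specUrl) throws Exception {
        List<String> specs = new ArrayList<String>();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(specUrl).openStream()));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.trim().length() > 0) {
                    specs.add(line);   // one agent specification per line (assumed)
                }
            }
        } finally {
            in.close();
        }
        // Agent-collected data and user feedback would later be packaged and
        // reported back to developers by e-mail (not shown here).
        return specs;
    }
}
```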

  21. EDEM Architecture • (Architecture diagram: the User Computer runs the application and EDEM inside a Java Virtual Machine; EDEM active agents monitor top-level window & UI events from the application's UI components and exchange property queries and property values with them. The Development Computer runs an HTTP server and an EDEM server; agent specs are saved with and loaded via URL, and agent reports with collected data are sent back via e-mail.)

  22. Agent Representation

  23. Agent Representation • Agents are instances of a simple Java class w/ the following members: • Trigger: patterns of user interface (or agent) events • Guard: boolean expression involving user interface (or agent) state • Actions: pre-supplied actions or arbitrary code • Triggers are continually checked as users interact w/ the application. • Guards are checked if an agent trigger has been activated. • Actions are performed if the guard evaluates to true.
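
A minimal Java sketch of the structure described above, assuming illustrative type names; EdemEvent, Trigger, Guard, and Action are hypothetical stand-ins, not EDEM's actual classes.

```java
import java.util.List;

// Hypothetical stand-ins for EDEM's trigger/guard/action abstractions.
interface EdemEvent { String name(); }
interface Trigger { boolean matches(EdemEvent e); }   // pattern over UI (or agent) events
interface Guard   { boolean evaluate(); }             // boolean expression over UI (or agent) state
interface Action  { void perform(EdemEvent e); }      // pre-supplied action or arbitrary code

public class Agent {
    private final Trigger trigger;
    private final Guard guard;
    private final List<Action> actions;

    public Agent(Trigger trigger, Guard guard, List<Action> actions) {
        this.trigger = trigger;
        this.guard = guard;
        this.actions = actions;
    }

    // Called for every monitored event: the trigger is continually checked,
    // the guard only after the trigger fires, and actions only if the guard holds.
    public void onEvent(EdemEvent event) {
        if (trigger.matches(event) && guard.evaluate()) {
            for (Action a : actions) {
                a.perform(event);
            }
        }
    }
}
```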

  24. Triggers • Triggers specified in terms of the following patterns: (1) "A or B or . . . " (2) "A and B and . . . " (3) "A then B then . . . " (4) "(A and B) with no interleaving C" (5) "(A then B) with no interleaving C" • Where variables A,B,C are filled in by specifying: (1) a Component from the UI plus an AWT or EDEM event on that component (e.g. "TextField1:LOST_EDIT" which occurs when TextField1 is edited and then input focus shifts and editing begins in another component) (2) another Agent (e.g. "AddressDone" which occurs when another agent detects that the address section has been completed)
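
To make pattern (5) concrete, here is a small sketch of "(A then B) with no interleaving C" matched over named events such as "TextField1:LOST_EDIT"; the event names and matching logic are illustrative, not EDEM's actual pattern matcher.

```java
// Sketch of trigger pattern (5): "(A then B) with no interleaving C".
public class ThenWithoutInterleavingTrigger {
    private final String a, b, c;
    private boolean sawA = false;

    public ThenWithoutInterleavingTrigger(String a, String b, String c) {
        this.a = a;
        this.b = b;
        this.c = c;
    }

    // Returns true when the pattern completes on the given event.
    public boolean onEvent(String eventName) {
        if (eventName.equals(c)) {
            sawA = false;                  // an interleaving C cancels the partial match
        } else if (eventName.equals(a)) {
            sawA = true;                   // remember that A occurred
        } else if (sawA && eventName.equals(b)) {
            sawA = false;
            return true;                   // A then B, with no C in between: trigger fires
        }
        return false;
    }

    public static void main(String[] args) {
        ThenWithoutInterleavingTrigger t = new ThenWithoutInterleavingTrigger(
                "TextField1:LOST_EDIT", "Button1:ACTION_EVENT", "TextField1:GOT_FOCUS");
        System.out.println(t.onEvent("TextField1:LOST_EDIT"));  // false (partial match)
        System.out.println(t.onEvent("Button1:ACTION_EVENT"));  // true  (pattern complete)
    }
}
```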

  25. Guards • Guards specified in terms of the following patterns: (1) "A or B or . . . " (2) "A and B and . . . " • Where variables A,B are filled in by specifying: (1) a Component from the UI and some expression involving its properties (e.g. "TextField1:value='Married'" or "Button1:count>100") (2) another Agent and some expression involving its properties (e.g. "AddressStarted:enabled=true" or "AddressDone:count>1")
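
A sketch of an "A or B" guard corresponding to the example "TextField1:value='Married' or Button1:count>100"; the property map is an illustrative stand-in for querying live component state, not EDEM's guard evaluator.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: a guard is a boolean expression over component (or agent) properties.
public class GuardSketch {
    static boolean evaluate(Map<String, Object> properties) {
        boolean a = "Married".equals(properties.get("TextField1:value"));
        Object count = properties.get("Button1:count");
        boolean b = (count instanceof Integer) && ((Integer) count) > 100;
        return a || b;   // guards are only checked after the agent's trigger fires
    }

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<String, Object>();
        props.put("TextField1:value", "Married");
        System.out.println(evaluate(props));   // true
    }
}
```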

  26. Actions • Actions may include arbitrary code, but usually involve pre-supplied actions such as: • generating higher level events for further hierarchical event processing • interacting with users to provide suggestions and/or collect feedback, and • reporting data back to developers
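
A short sketch of the three pre-supplied action kinds named above; the method names, queues, and messages are placeholders, and EDEM's real actions differ in detail.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-ins for EDEM's pre-supplied actions.
public class ActionsSketch {
    static final List<String> higherLevelEvents = new ArrayList<String>();
    static final List<String> pendingReports = new ArrayList<String>();

    // (1) generate a higher-level event for further hierarchical event processing
    static void generateAgentEvent(String agentName) {
        higherLevelEvents.add(agentName);                    // e.g. "AddressDone"
    }

    // (2) interact with the user: post a suggestion and (optionally) collect feedback
    static void notifyUser(String message) {
        System.out.println("[EDEM agent] " + message);
    }

    // (3) report data back to developers (queued for later delivery by e-mail)
    static void reportToDevelopers(String summary) {
        pendingReports.add(summary);
    }

    public static void main(String[] args) {
        generateAgentEvent("AddressDone");
        notifyUser("Suggestion: consider filling in the Zip field first.");
        reportToDevelopers("City/State edited while Zip was empty (3 times this session).");
    }
}
```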

  27. Integrating with EDEM • void initialize() • load agents • void addMonitors(Object obj) • recursively add monitors to this component and all subcomponents • void setName(Object obj, String name) • name any component to be monitored that doesn't have a unique label • void processEvent(Event evt) • pass events to EDEM for processing • void finalize() • remove monitors and send log & summary
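
The following sketch shows how an application might call the hooks listed above, wrapped in a hypothetical Edem facade class; the hook names come from the slide, but their real home and signatures in EDEM may differ (the slide's finalize() is shown as shutdown() here to avoid clashing with Object.finalize()).

```java
import java.awt.AWTEvent;
import java.awt.Button;
import java.awt.Frame;

// Hypothetical facade exposing the integration hooks named above.
class Edem {
    void initialize() { /* load agents, e.g. from the configured URL */ }
    void addMonitors(Object root) { /* recursively monitor root and all subcomponents */ }
    void setName(Object component, String name) { /* label components lacking unique names */ }
    void processEvent(AWTEvent evt) { /* match the event against active agent triggers */ }
    void shutdown() { /* remove monitors and send log & summary */ }
}

public class IntegrationSketch {
    public static void main(String[] args) {
        final Edem edem = new Edem();
        edem.initialize();

        Frame form = new Frame("Cargo Query Form");
        Button submit = new Button("Submit");
        form.add(submit);

        edem.setName(submit, "SubmitButton");   // name components without unique labels
        edem.addMonitors(form);                 // monitor the form and its subcomponents

        // Somewhere in the application's event dispatch path, each UI event
        // would be handed to EDEM:  edem.processEvent(evt);

        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() { edem.shutdown(); }   // tear down and report on exit
        }));
    }
}
```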

  28. Conclusion

  29. Conclusions • Usage expectations: • focus data collection • raise awareness of implications of design decisions • Agent-based event monitoring architecture: • distributed event analysis and data reduction • independent evolution of instrumentation and application • Extensible event model: • data collected and analyzed at multiple levels of abstraction. • agents can be used to collect domain, task, and organizational knowledge not available at design time.

  30. Current and Future Work • Extend event model beyond AWT events to JavaBeans events and support input and output of external events • Default agents for standard analyses and Wizard support for agent authoring and reuse • More flexible analysis and reporting with database integration (JDBC) for storage, visualization, and post-hoc analysis • Better integration of expectations into development process, e.g. with Use Cases, Cognitive Walkthroughs, Task Analysis • Agent maintenance, configuration management, and versioning • Security and privacy • Evaluation

  31. Related Work

  32. Related/Supporting Technologies • Related • Collaborative remote usability testing techniques • Beta test data collection (e.g. Aqueduct Profiler) • API usage monitoring (e.g. HP/Tivoli ARM API) • Enterprise management (e.g. TIBCO Hawk) • Model-based distributed debugging (e.g. EBBA & TSL) • Supporting • Event notification systems (e.g. TIB/Rendezvous) • Mobile agent infrastructure (e.g. ObjectSpace Voyager)

  33. Collaborative Remote Usability • Collaborative video and electronic whiteboards allow traditional usability testing to be done remotely. • EDEM and collaborative remote usability techniques might be used independently or in concert depending on the application and evaluation goals. • URL for information on remote usability testing: • http://hci.ise.vt.edu/~josec/remote_eval/

  34. Collaborative Remote Usability (comparison) • EDEM: asynchronous; non-intrusive; quantitative behavioral & performance data plus user comments; potentially large numbers of concurrent subjects; ideal for large-scale, ongoing studies of usage • Remote Usability: synchronous; intrusive; video capture of behavior & performance that can be reviewed later, plus verbal protocols; single or small groups of subjects; ideal for small-scale, focused experiments

  35. Beta Test Data Collection • Aqueduct Profiler collects information over the Internet about the usage of applications in beta test. • Aqueduct provides an API for collecting application-specific information (e.g., feature usage) which is reported, via Email, along with other generic measures such as operating system, execution time, crashes, etc. • EDEM and Aqueduct collect information that is both related and complementary, using techniques that are complementary. • URL for Aqueduct Software: • http://www.aqueduct.com/

  36. Beta Test Data Collection (comparison) • EDEM: developers define agents which may be modified and delivered separately from code; captures information about feature usage; captures information about usability aspects more readily; Java only • Aqueduct: developers instrument code, requiring redelivery when instrumentation is modified; captures information about feature usage; captures information about crashes more readily; multiple platforms

  37. API Monitoring • An application response-time measurement (ARM) API allows data regarding usage of an API (as opposed to a UI) to be captured. • Instruments all important API calls to indicate the start of a call, characteristics of parameters, and the end of the call. • Information is used to identify performance bottlenecks and parameter and API usage. • EDEM and ARM could be used independently or in concert depending on application & evaluation goals. • URL for HP and Tivoli’s proposed standard: • http://www.hp.com/openview/rpm/arm/

  38. API Monitoring (comparison) • EDEM: collects information about UI usage; developers define agents which may be modified and delivered separately from code; general UI events • ARM API: collects information about API usage; developers instrument code, requiring redelivery when instrumentation is modified; specific API events

  39. Enterprise Management • Enterprise management tools help administrators manage nodes within a wide area network by monitoring processes, CPU utilization, applications, network statistics, log files, and file system activity. • Rule bases are (often) used to specify what to monitor and how to report and react to problems. • An API allows developers to instrument applications to be monitored & controlled. • URL for TIBCO’s HAWK Enterprise Monitor: • http://www.tibco.com/products/hawk_ds.html

  40. Enterprise Management (comparison) • EDEM: focuses on UI events; uses agents to collect information and take actions; exploits the existing event model • TIBCO’s Hawk: focuses on network monitoring & management; uses agents to collect information and take actions; comes with built-in agents to monitor specific operating systems and common applications, otherwise an API is used

  41. Distributed Debugging • Model-based distributed debugging techniques allow specification and monitoring of abstract models of, or formal constraints on, the behavior of event-based concurrent systems. • Techniques used to specify event patterns & computed properties are related. • References: • P.C. Bates. Debugging heterogeneous distributed systems using event-based models of behavior. ACM Transactions on Computer Systems. Vol. 13, No. 1, 1995. • D.S. Rosenblum, Specifying Concurrent Systems with TSL, IEEE Software, Vol. 8, No. 3, 1991.

  42. Supporting Technologies • Event Notification • EDEM uses SMTP to asynchronously report agent-collected data. • TIB/Rendezvous allows events to be synchronously reported based on a publish/subscribe paradigm. • URL for TIB/Rendezvous: • http://www.rv.tibco.com/ • Mobile Agent Technology • EDEM uses HTTP to transport agents. • ObjectSpace Voyager provides a more flexible and capable platform for agent mobility based on an agent-enhanced object request broker (ORB) paradigm. • URL for ObjectSpace Voyager: • http://www.objectspace.com/
