
Presentation Transcript


  1. Focusing Knowledge Work with Task Context • Mik Kersten • Thesis defense, December 15, 2006

  2. Problems • Information overload • Many knowledge work tasks cut across system structure • Browsing and query tools overload users with irrelevant information • Context loss • Tools burden users with finding artifacts relevant to the task • Context is lost whenever a task switch occurs • Users waste time repeatedly recreating their context

  3. Thesis • A model of task context that automatically weights the relevance of system information to a task by monitoring interaction can focus a programmer's work and improve productivity. • This task context model is robust to both structured and semi-structured information, and thus applies to other kinds of knowledge work.

  4. Approach • Memory • Episodic memory: one-shot, only single exposure required • Semantic memory: multiple exposures required • Our approach • Leverage episodic memory, offload semantic memory • Tasks: episodes • Context: weighting of relevant semantics to task

  5. Related Work • Memory • Episodic memory: one-shot, only single exposure required • Semantic memory: multiple exposures required • Our approach • Leverage episodic memory, offload semantic memory • Tasks: episodes • Explicit tasks (UMEA, TaskTracer): flat model, lack of fine-grained structure • Context: weighting of relevant semantics • Slices (Weiser) and searches (MasterScope): structure only • Interaction-based (Wear-based filtering): no explicit tasks

  6. Model & Operations

  7. Model • Interaction • Task context • Degree-of-interest (DOI) weighting • Frequency and recency of interaction with element • Both direct and indirect interaction interest
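
As a rough illustration of the weighting described above (not the Mylar implementation; class names, weights, and the decay constant are hypothetical), a DOI value can be accumulated per element so that repeated interaction raises interest, each new event slightly decays every other element's interest, and indirect interaction counts for less than direct interaction:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of per-element DOI bookkeeping: each interaction event
    // raises an element's interest, and interest decays as other events occur.
    class DoiModel {
        private final Map<String, Double> doi = new HashMap<>();
        private final double directWeight = 1.0;    // e.g. selections and edits
        private final double indirectWeight = 0.5;  // e.g. propagated or predicted interest
        private final double decayPerEvent = 0.017; // assumed scaling factor: decay applied per event

        void recordInteraction(String elementId, boolean direct) {
            // Recency: every new event decays all previously seen elements a little.
            doi.replaceAll((id, value) -> value - decayPerEvent);
            // Frequency: repeated interaction with the same element accumulates interest.
            doi.merge(elementId, direct ? directWeight : indirectWeight, Double::sum);
        }

        double interestOf(String elementId) {
            return doi.getOrDefault(elementId, 0.0);
        }
    }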

  8. Topology • Task context graph • Edges added for relations between elements • Scaling factors determine shape, e.g. decay rate • Thresholds define interest levels • [l, ∞) Landmark • (0, ∞) Interesting • (-∞, 0] Uninteresting
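
Continuing the sketch above, the interest levels can fall out of simple threshold comparisons on the DOI value; the landmark threshold shown here is an assumed, tunable scaling factor, not a value from the thesis:

    // Hypothetical classification of a DOI value into the interest levels listed above.
    enum InterestLevel { LANDMARK, INTERESTING, UNINTERESTING }

    class InterestThresholds {
        static final double LANDMARK_THRESHOLD = 30.0; // assumed value of l, a tunable scaling factor

        static InterestLevel classify(double doi) {
            if (doi >= LANDMARK_THRESHOLD) return InterestLevel.LANDMARK; // [l, ∞)
            if (doi > 0) return InterestLevel.INTERESTING;                // (0, ∞)
            return InterestLevel.UNINTERESTING;                           // (-∞, 0]
        }
    }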

  9. Operations • Once task context is explicit • Can treat subsets relevant to the task as a unit • Can project this subset onto the UI • Perform operations on these subsets • Composition • See context of two tasks simultaneously • Slicing • Unit test suite can be slow to run on large project • Find all interesting subtypes of TestCase
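
A hedged sketch of the slicing operation, assuming a task context is available as a list of weighted elements and that the structural predicate (e.g. "is a subtype of TestCase") is supplied externally; the types here are illustrative, not Mylar's API:

    import java.util.List;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    // Illustrative element record: an identifier plus its current DOI value.
    record ContextElement(String id, double doi) {}

    class TaskContextSlicer {
        // Slice: keep only the interesting elements that satisfy a structural predicate,
        // e.g. "is a subtype of TestCase" as determined by the Java structure support.
        static List<ContextElement> slice(List<ContextElement> context,
                                          Predicate<ContextElement> structuralPredicate) {
            return context.stream()
                    .filter(e -> e.doi() > 0)     // interesting elements only
                    .filter(structuralPredicate)  // structural condition, e.g. subtype of TestCase
                    .collect(Collectors.toList());
        }
    }

A test runner could then execute only the interesting TestCase subtypes returned by such a slice rather than the project's entire suite.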

  10. More operations • Propagation • Interacting with method propagates to containing elements • Prediction • Structurally related elements of potential interest automatically added to task context • Only interaction stored
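
A minimal sketch of propagation, assuming a simple child-to-parent containment lookup (method to type to file); a direct interaction with an element adds a smaller, indirect increment to each containing element. It reuses the hypothetical DoiModel sketch from the Model slide:

    import java.util.Map;

    // Hypothetical propagation: a direct interaction with an element also raises
    // the interest of its structural parents by a smaller, indirect amount.
    class InterestPropagator {
        private final DoiModel model;               // the sketch from the Model slide
        private final Map<String, String> parentOf; // child id -> containing element id

        InterestPropagator(DoiModel model, Map<String, String> parentOf) {
            this.model = model;
            this.parentOf = parentOf;
        }

        void interact(String elementId) {
            model.recordInteraction(elementId, true);   // direct interest for the element itself
            String parent = parentOf.get(elementId);
            while (parent != null) {                    // walk up the containment chain
                model.recordInteraction(parent, false); // indirect interest for containing elements
                parent = parentOf.get(parent);
            }
        }
    }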

  11. Implementation: programming

  12. Implementation: knowledge work

  13. Validation

  14. Validation • Questions • Does task context impact the productivity of programmers? • Does it generalize to other kinds of knowledge work? • Problems • Knowledge work environment hard to reproduce in the lab • No evidence that non-experts are a good approximation of experts • Measure long-term effects to account for diversity of tasks • Approach • Longitudinal field studies • Voluntary use of prototypes • Monitoring framework for observation

  15. Study 1: feasibility • Productivity metric • Approximate productivity with edit ratio (edits / selections) • Programmers are more productive when coding than when browsing, searching, scrolling, and navigating • Subjects • Six professional programmers at IBM • Results • Promising edit ratio improvement
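
A minimal sketch of the edit ratio computation over a stream of interaction events; the event kinds shown are assumptions, since the study's monitoring framework recorded richer interaction histories:

    import java.util.List;

    // Hypothetical event kinds; the edit ratio only distinguishes edits from selections.
    enum EventKind { EDIT, SELECTION }

    class EditRatio {
        // edit ratio = number of edit events / number of selection events
        static double of(List<EventKind> events) {
            long edits = events.stream().filter(e -> e == EventKind.EDIT).count();
            long selections = events.stream().filter(e -> e == EventKind.SELECTION).count();
            return selections == 0 ? Double.NaN : (double) edits / selections;
        }
    }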

  16. Study 2: programmers • Subjects • Advertised study at EclipseCon 2005 conference • 99 registered, 16 passed treatment threshold • Method and study framework • User study framework sent interaction histories to UBC server • Baseline period of 1500 events (approx 2 weeks) • Treatment period of 3000 events, to address learning curve

  17. Study 2: results • Statistically significant increase in edit ratio • Within-subjects paired t-test of edit ratio (p = 0.003) • Model accuracy • 84% of selections were of elements with a positive DOI • 5% predicted or propagated DOI • 2% negative DOI • Task activity • Most subjects switched tasks regularly • Surprises • Scaling factors were only roughly tuned for the study, yet were left unchanged

  18. Study 3: knowledge workers • Subjects • 8 total, ranging from CTO to administrative assistant • Method and study framework • Same framework as previous, monitor interaction with files and web • No reliable measure of productivity, gathered detailed usage data

  19. Study 3: results • Task activity • Users voluntarily activate tasks when provided with task context • Activations/day ranged from 1 to 11, with an average of 5.8 • Task context contents • Long paths are common • Density over system structure is low • Tagging did not provide a reliable mechanism for retrieval • Task context sizes • Non-trivial sizes, some large (hundreds of elements) • Many tasks had both file and web context • Model accuracy • Decay works, most elements get filtered

  20. Summary

  21. Contributions • Generic task context model • Weighted based on the frequency and recency of interaction • Supports structured and semi-structured data • Weighting is key to reducing information overload • Capturing per-task reduces loss of context when multi-tasking • Task context operations • Support growing and shrinking the model to tailor to activities • Integrate model with existing tools • Instantiation of the model • For Java, XML, generic file and web document structure • Can be extended to other kinds of domains and application platforms • Monitoring and framework • Reusable for studying knowledge work

  22. Conclusion • Tools’ point of view has not scaled • Complexity continues to grow, our ability to deal with it doesn’t • Task context takes users’ point of view • Offloads semantic memory, leverages episodic memory • Impact on researchers • University of Victoria group extended it to ontology browsing • Users • “Mylar is the next killer feature for all IDEs” (Willian Mitsuda) • Industry • “…it’ll ultimately become as common as tools like JUnit, if not more so.” (Carey Schwaber, Forrester analyst)
