
INCOSE Usability Working Group


Presentation Transcript


  1. INCOSE Usability Working Group. Scott Workinger and David Lempia, for the INCOSE Usability Working Group, 2011. To sign up for the MBSE usability team, go to: http://groups.google.com/group/mbse-usability. You will then start to receive meeting and discussion invitations.

  2. Agenda
  • Introductions
  • 8:00 – 9:00 – Create Hi-Value Use Cases
  • 9:00 – 9:30 – Usability Presentation by Jen Narkevicius
  • 9:30 – 10:00 – Create Hi-Value Use Cases
  • 10:00 – 10:30 – Rank Order Use Cases
  • 10:30 – 10:45 – Break

  3. Agenda
  • 10:45 – 11:00 – Next steps?
  • Summary presentation for group out-brief
  • Fill out survey

  4. High Value Use Case Dimensions
  • SysML Diagram Type: Requirement, Use Case, Activity, Sequence, State Chart, IBD, BDD, Parametric, Package, Cross Cutting Customizations (Profile)
  • Process Step: Op Con, Req / Agreement, Design / Trades, Analysis & Simulation, Prototype, Implementation, Integration, Verification, Validation, Collaboration, Trace & Impact Analysis, Generate (Artifacts / Documents), Configuration Management
  • Tool Type
  • Requirements: "50% of the problems are introduced in requirements" – SEI
  • "Designs are 50-70% correct going into testing" – Reynolds
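  The slide's matrix reduces to three dimensions along which each captured use case is later categorized. A minimal Python sketch of those vocabularies follows, assuming the grouping shown above (the slide is a diagram, so this grouping is the editor's reading); the `categorize` helper and the set names are illustrative, and Tool Type values are left as free text because the slide does not enumerate them.

```python
# Dimension vocabularies for categorizing high-value MBSE use cases, as read
# from the slide; Tool Type values are not enumerated there, so that dimension
# is left as a free-form string.

SYSML_DIAGRAM_TYPES = {
    "Requirement", "Use Case", "Activity", "Sequence", "State Chart",
    "IBD", "BDD", "Parametric", "Package",
    "Cross Cutting Customizations (Profile)",
}

PROCESS_STEPS = {
    "Op Con", "Req / Agreement", "Design / Trades", "Analysis & Simulation",
    "Prototype", "Implementation", "Integration", "Verification",
    "Validation", "Collaboration", "Trace & Impact Analysis",
    "Generate (Artifacts / Documents)", "Configuration Management",
}

def categorize(diagram_type: str, process_step: str, tool_type: str) -> dict:
    """Check a use-case categorization against the slide's dimension lists."""
    if diagram_type not in SYSML_DIAGRAM_TYPES:
        raise ValueError(f"unknown SysML diagram type: {diagram_type!r}")
    if process_step not in PROCESS_STEPS:
        raise ValueError(f"unknown process step: {process_step!r}")
    return {"diagram_type": diagram_type, "process_step": process_step,
            "tool_type": tool_type}
```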

  5. Create Hi-Value Use Cases
  • Break up into teams of 5-8; elect one spokesperson
  • Assign one use case dimension for each table to focus on
  • Goal of each table is to submit 3-5 hi-value use cases
  • Use case goal – Goal Title
    • For example – decide which candidate architecture best meets structural design objectives
  • People involved (who does, who uses, who provides inputs)
    • For example – a Lean Systems Engineer does the work, developers of the system need the results, and stakeholders provide the architecturally significant requirements
  • Describe the value of this use case
    • How often does an engineer do this?
    • Is this something that is hard to do without MBSE? Why?
  • Categorize the use case (dimension)
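  The bullets above amount to a capture template for each hi-value use case. A minimal sketch of that template as a record, assuming field names chosen by the editor (they are not part of the working group's materials); the example values in the comments are taken from the slide.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the use-case template on this slide.
@dataclass
class HiValueUseCase:
    goal_title: str           # e.g. "Decide which candidate architecture
                              #  best meets structural design objectives"
    who_does: str             # e.g. "Lean Systems Engineer"
    who_uses_results: str     # e.g. "Developers of the system"
    who_provides_inputs: str  # e.g. "Stakeholders (architecturally
                              #  significant requirements)"
    value: str                # why this use case matters
    frequency: str            # how often an engineer does this
    hard_without_mbse: str    # is it hard without MBSE, and why
    dimension: str            # category from the use case dimension sheet
```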

  6. Calibrate Use Case Abstraction
  • Review example use cases and the use case dimension sheet
  • By yourself, fill in one high-value use case sheet (* values only) (5 min)
  • Discuss use cases with people at your table. Select one candidate use case to discuss with the large group (5 min)
  • Each table shares one use case with the large group. Purpose – consistent level of abstraction (15 min)
  • Each group captures 5 use cases.

  7. Usability Presentation by Jen Narkevicius

  8. Philosophy
  • We want to measure usability in order to incentivize and reward developers for designing products that are easy to learn, efficient to use, tolerant of missteps, and that provide a satisfactory experience for the user
  • Benefits to the community and the developers:
    • Wider user populations
    • Broader and more consistent use
    • Better understanding of the models and their meaning
    • Increased sales

  9. Lund Usability Maxims (in descending order of importance)
  Lund, A. M. (1997). Expert ratings of usability maxims. Ergonomics in Design, 5(3), 15-20. A study of the heuristics design experts consider important for good design.
  • Know the user. YOU are not the user
  • Things that look the same should act the same / things that look different should act different
  • The information for the decision must be there when the decision is needed
  • Error messages should actually mean something to the user and tell the user how to fix the problem
  • Every action should have a reaction
  • Everyone makes mistakes, so every mistake should be fixable
  • Don't overwhelm the user
  • Consistency, consistency, consistency
  • Minimize the need for a mighty memory
  • Keep it simple
  • The user should always know what is happening
  • The more you do something, the easier it should be to do
  • The user should control the system; the system should not control the user. The user is the boss, and the system should show it
  • Eliminate unnecessary decisions and illuminate the rest
  • The best journey has the fewest steps; shorten the distance between the user and the goal
  • Users should be able to do what they want
  • Alert users to an error before things get worse
  • Users should always know how to find out what to do next
  • Strive to empower the user, not speed up the system

  10. Definitions
  • Usability = the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use; or a quality attribute that assesses how easy user interfaces are perceived to be to use. Also refers to methods for improving ease of use during the design process.
  • Task = an activity performed by a single person that has a distinct beginning and end
  • Function = a related set of tasks, some of which may be automated (i.e., performed by a computer). May consist of multiple people performing cooperative or collaborative tasks.

  11. Measurable Usability Dimensions
  • Ease of Learning
  • Efficiency of Use (routine)
  • Efficiency of Use (non-routine)
  • Error Tolerance
  • Subjective

  12. Ease of Learning
  • After providing some opportunity for a new user to learn how to perform a function (e.g., demonstration, exploration, documentation), assess how well the user can perform a related task
    • Near-transfer – very similar task
    • Far-transfer – different task that requires the user to draw inferences from what they learned
  • Motivation: reward products that support transparent mental models and that provide consistency in the interface
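  A minimal sketch of how such an assessment could be scored, assuming each assessed task is simply recorded as pass or fail; the split into near- and far-transfer rates follows the slide, while the pass/fail scoring itself is an added assumption.

```python
# Fraction of near- and far-transfer tasks performed correctly after the
# learning opportunity (demonstration, exploration, documentation).
def ease_of_learning_score(near_transfer, far_transfer):
    def rate(results):
        return sum(results) / len(results) if results else 0.0
    return {"near_transfer": rate(near_transfer),  # very similar tasks
            "far_transfer": rate(far_transfer)}    # tasks requiring inference

# Example: 4 of 5 near-transfer tasks and 2 of 4 far-transfer tasks succeed.
print(ease_of_learning_score([1, 1, 1, 1, 0], [1, 0, 1, 0]))
# {'near_transfer': 0.8, 'far_transfer': 0.5}
```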

  13. Efficiency of Use
  • Measured by number of steps (button clicks) or by the elapsed time to perform a well-defined function (set of tasks)
  • Measure for routine tasks and non-routine tasks
  • Motivation – reward products that limit unnecessary steps and provide convenient shortcuts for routine tasks
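  A small sketch of the two measures named on this slide, step count (button clicks) and elapsed time; comparing against a reference minimum step count to flag unnecessary steps is an added assumption, not something the slide specifies.

```python
def efficiency_of_use(steps_taken: int, elapsed_seconds: float,
                      minimum_steps: int) -> dict:
    # Step count and elapsed time for one well-defined function (set of tasks).
    return {"steps": steps_taken,
            "elapsed_seconds": elapsed_seconds,
            "excess_steps": steps_taken - minimum_steps}  # unnecessary steps

# Measure routine and non-routine tasks separately, as the slide suggests.
routine = efficiency_of_use(steps_taken=12, elapsed_seconds=95.0, minimum_steps=9)
non_routine = efficiency_of_use(steps_taken=31, elapsed_seconds=420.0, minimum_steps=22)
print(routine, non_routine)
```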

  14. Subjective
  • Qualitative measure of how much the user liked using the product. Typically measured by questionnaire on a ranked scale.
    • Was it compelling?
    • Was it satisfying?
    • Would you want to use this product again?
  • Motivation: rewards elegance, clarity, and user satisfaction
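  A sketch of aggregating the three questionnaire items above; the 1-5 rating scale and the per-question mean are assumptions, since the slide only specifies a questionnaire on a ranked scale.

```python
from statistics import mean

QUESTIONS = [
    "Was it compelling?",
    "Was it satisfying?",
    "Would you want to use this product again?",
]

def subjective_scores(responses):
    """responses: one dict per participant mapping question -> rating (1-5)."""
    return {q: mean(r[q] for r in responses) for q in QUESTIONS}

# Two hypothetical participants' ratings.
print(subjective_scores([
    {QUESTIONS[0]: 4, QUESTIONS[1]: 5, QUESTIONS[2]: 4},
    {QUESTIONS[0]: 3, QUESTIONS[1]: 4, QUESTIONS[2]: 5},
]))
```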

  15. Cautions
  • Complex products offer challenges for unbiased testing
    • You certainly need a combination of skilled users (perhaps even the product developer) and users who understand the intent of the tool but have no experience with the specific product
  • Beware of unintended consequences ("rats go after pellets")
  • You must select and carefully design the functions being tested to examine a reasonable breadth and depth of functionality
    • Resist the temptation to assess "Cadillac" functions before ensuring "Chevy" functions are covered

  16. Conclusion
  • History
  • Accomplishments
  • Concepts of Operation
  • Plan
  • Use Cases
