


  1. Data Analysis – Workshop Decision Making and Risk Spring 2006 Partha Krishnamurthy

  2. Outline • Introduction to Decision Analysis • Review Decision Trees • Terminology/Orientation • Introduction to DA software • Workshop (hands-on) • Basic decision tree • Sensitivity analysis • Incorporating costs and utilities

  3. Basic Decision Tree • Nodes: decision node, chance node, terminal node • Branches connect nodes • Outcomes: labels, probabilities (fixed or variable) • Values/Utilities

  4. Data Analysis Conventions • Distal/Downstream • Upstream/Proximal • What is the relevance of this distinction?

  5. Analysis – Basic Principle • Evaluation of trees typically proceeds from the terminal nodes back to the decision nodes, i.e., upstream. • At each chance node, the expected value is calculated in the usual fashion: $EV_j = \sum_{i=1}^{n} p_{ij} U_{ij}$ • The expected value serves as the “Utility” at the chance node as the analysis proceeds upstream. • This process is called average-out/fold-back.
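As a quick orientation before the hands-on segment, the average-out/fold-back principle can be sketched in a few lines of Python. This is a minimal illustration of the idea only; the nested-dict tree format and function names are assumptions made for the sketch, not the DATA software's representation.

```python
# Minimal sketch of average-out/fold-back on a decision tree.
# The nested-dict tree format is an assumption for illustration,
# not the representation used by the DATA software.

def fold_back(node):
    """Return the expected value of a node, rolling back from the terminal nodes."""
    if node["type"] == "terminal":
        return node["value"]
    if node["type"] == "chance":
        # Average out: EV_j = sum_i p_ij * U_ij
        return sum(p * fold_back(child) for p, child in node["branches"])
    if node["type"] == "decision":
        # Fold back: choose the branch with the best expected value
        return max(fold_back(child) for child in node["branches"])
    raise ValueError("unknown node type: %s" % node["type"])

# Tiny example: one decision between a risky chance branch and a safe payoff.
tree = {
    "type": "decision",
    "branches": [
        {"type": "chance", "branches": [
            (0.6, {"type": "terminal", "value": 100}),
            (0.4, {"type": "terminal", "value": -20}),
        ]},
        {"type": "terminal", "value": 40},
    ],
}
print(fold_back(tree))  # 0.6*100 + 0.4*(-20) = 52 > 40, so the risky branch wins
```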

  6. The Use of Variables • Both outcomes (payoffs) and probabilities can be specified as raw numbers or as variables. • Specifying them as variables facilitates sensitivity analysis. • You can also specify correlations among them. • Let us look at a simple decision tree.

  7. Market Intervention Example • Problem: Declining sales. • Possibilities: Product out of sync with the market, or nothing wrong (seasonality; things will get better). • Interventions: Launch a promotion or do nothing. • Data: • Outcomes • 22.5m if it succeeds, with or without promo. • 12.5m if it fails without promo, and 10m if it fails with promo. • Probabilities • p(out of sync) = 0.3 • p(promo effective | out of sync) = 0.86 • p(failure | out of sync) = 0.90
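Turning these numbers into a fold-back calculation requires filling in a couple of pieces the slide leaves implicit. The sketch below does so explicitly, assuming that a product that is not out of sync succeeds with or without the promotion and that the 0.90 failure probability applies to the do-nothing branch; the tree actually built in DATA may differ.

```python
# Hedged worked example for the market-intervention tree.
# Assumptions not spelled out on the slide: a product that is not out of sync
# succeeds either way, and p(failure | out of sync) = 0.90 applies to the
# "do nothing" branch.

p_out_of_sync = 0.30
p_promo_effective = 0.86      # given out of sync
p_fail_no_promo = 0.90        # given out of sync, no promo

success, fail_no_promo, fail_promo = 22.5, 12.5, 10.0  # payoffs in $m

ev_do_nothing = (1 - p_out_of_sync) * success + p_out_of_sync * (
    (1 - p_fail_no_promo) * success + p_fail_no_promo * fail_no_promo)

ev_promo = (1 - p_out_of_sync) * success + p_out_of_sync * (
    p_promo_effective * success + (1 - p_promo_effective) * fail_promo)

print(ev_do_nothing, ev_promo)  # roughly 19.8 vs 22.0 under these assumptions
```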

  8. Market Intervention Tree

  9. Marketing Intervention Example

  10. Remainder of the Session • We will do the following with the castle decision problem: • Structure the tree. • Perform sensitivity analysis. • We will all walk through the same decision problem, step by step.

  11. Reading Break • Take 10 minutes and read the first three pages of the castle decision problem.

  12. Application Break • Take 10 minutes to answer the questions in the hand-out.

  13. Structuring the Tree in DATA • In this segment, we are going to follow the steps in the handout (pages 4 through 8).

  14. Sensitivity Analysis • What if our assumptions about the probabilities and/or payoffs are different? How would the decision change? • Conceptually, what does sensitivity analysis help us accomplish?

  15. Mechanics • Specify how many different variables you want to analyze. • Specify the range you want analyzed. • Specify the number of intervals. • DATA computes expected values at the interval points only, and performs linear interpolation of expected values in between. • More intervals give a smoother curve. • More intervals also demand significant computational resources.
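A one-way sensitivity analysis of this kind is easy to mimic outside DATA. The sketch below sweeps one probability over a range, evaluates the expected values at a fixed number of interval points, and interpolates linearly in between, mirroring what the slide describes; the payoff numbers reuse the market-intervention example and the helper names are ours, not DATA's.

```python
# Sketch of a one-way sensitivity analysis: vary one probability over a range,
# evaluate expected values at the interval points, and interpolate in between.
import numpy as np

def ev_promo(p_out_of_sync):
    return (1 - p_out_of_sync) * 22.5 + p_out_of_sync * (0.86 * 22.5 + 0.14 * 10.0)

def ev_do_nothing(p_out_of_sync):
    return (1 - p_out_of_sync) * 22.5 + p_out_of_sync * (0.10 * 22.5 + 0.90 * 12.5)

# Range and number of intervals to analyze
grid = np.linspace(0.0, 1.0, num=11)          # 10 intervals
promo_vals = np.array([ev_promo(p) for p in grid])
nothing_vals = np.array([ev_do_nothing(p) for p in grid])

# Linear interpolation between the computed points, as DATA does for the plot
p_query = 0.37
print(np.interp(p_query, grid, promo_vals), np.interp(p_query, grid, nothing_vals))
```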

  16. Performing Sensitivity Analysis • In this segment, we will go through the steps in hand-out session 2.

  17. Insights from Sensitivity Analysis • Can you generate some insights about the castle decision by performing different types of sensitivity analyses?

  18. Modeling Costs Separately • Previously, we modeled the payoff for each outcome state as a single net revenue. • It is more reasonable to think of the payoffs as having two components, a revenue component and a cost component. • Your decision may be sensitive to your revenue assumptions as well as your cost assumptions. • Modeling your decision as a cost-benefit tree allows you to gauge the importance of revenue and cost assumptions separately.
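A minimal sketch of the benefit-cost idea follows, with revenue and cost kept as separate variables so each can be swept on its own; the numbers and names are illustrative assumptions, not values from the castle hand-out.

```python
# Benefit-cost terminal values: revenue and cost are separate variables so the
# decision's sensitivity to each can be examined independently.
# Numbers are illustrative assumptions, not the castle data.

def ev_introduce(p_success, revenue_success, revenue_failure, cost_intro):
    # Expected benefit minus the (certain) cost of introduction
    expected_revenue = p_success * revenue_success + (1 - p_success) * revenue_failure
    return expected_revenue - cost_intro

# Cost set to zero should recover the revenue-only expected value (sanity check)
print(ev_introduce(0.6, 30.0, 5.0, cost_intro=0.0))

# Sweep the cost assumption to see where the decision would flip against a
# do-nothing alternative worth, say, 12.0
for cost in (0.0, 4.0, 8.0, 12.0):
    print(cost, ev_introduce(0.6, 30.0, 5.0, cost) - 12.0)
```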

  19. Decision Context • Refer to the Castle product introduction decision. • The payoff for each outcome state has two components, a revenue component and a cost component. • We modeled only the revenue component. • Our goal is to assess what happens to the decision if the cost of introduction (now and later) and the development cost if the product fails in 6 months are modeled explicitly.

  20. Modeling Strategy • First, change the calculation method to “Benefit-Cost”. • Second, specify the costs of introduction now and later, and the cost of development if the product fails in 6 months. • Initially, set all costs to zero at the root node and recover the model's expected value (it should be the same as before). • Later, use sensitivity analysis to find out the impact of these three variables on the decision.

  21. Application Break • Go back to the castle decision tree (if you have saved the tree, good; if not, let me know and I will give you the file). • Follow the steps for cost-benefit modeling in hand-out session 3.

  22. Generalized Multi-Attribute Models • What if every outcome state had payoffs that are not necessarily like costs and benefits? • Notice that costs and benefits are traded off using a positive and a negative coefficient: equally important and opposite in effect. • This is where multi-attribute models come into play. • You can tell the DA program how to combine multiple payoffs and how to evaluate them.

  23. Warrant for Multi-Attribute Models • Each outcome has more than one attribute. For example: • Revenue • Market Share • Strategic Fit • Profit • Decisions have to tackle multiple attributes at the same time.

  24. Market Segmentation Decision - Example • Segments under consideration • Cash Cow • 10 on revenue, 5 on market share growth, 3 on strategic fit, 6 on profitability. • Star of the Future, Dog for now. • 3 on revenue, 7 on market share growth, 8 on strategic fit, 2 on profitability. • Multi-segment • 5 on revenue, 4 on market share growth, 7 on strategic fit, 6 on profitability.

  25. Modeling Strategy • Create three branches off the decision node, one for each segment. • Define each choice as a terminal node, and enter the four payoffs for each choice. • Change the model to generalized multi-attribute. • Tell DATA how to combine the attributes. • Set the importance of each attribute: • Specify each attribute's importance coefficient as a variable. • Set the value of each attribute's importance to, say, 0.25 (equally weighted). • Then perform sensitivity analyses to see how shifting decision criteria changes the decision.
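For a rough sense of what the weighted combination does, the sketch below scores the three segments with an additive multi-attribute rule; the additive form and the tilted weight set are assumptions for illustration, since DATA lets you specify the combination yourself.

```python
# Weighted additive multi-attribute scoring for the segmentation example.
# The additive combination rule and the second weight set are illustrative
# assumptions, not part of the hand-out.

segments = {
    "Cash Cow":           {"revenue": 10, "share_growth": 5, "strategic_fit": 3, "profit": 6},
    "Star of the Future": {"revenue": 3,  "share_growth": 7, "strategic_fit": 8, "profit": 2},
    "Multi-segment":      {"revenue": 5,  "share_growth": 4, "strategic_fit": 7, "profit": 6},
}

def score(attrs, weights):
    # Weighted additive value: importance coefficient times attribute score, summed
    return sum(weights[a] * v for a, v in attrs.items())

equal = {"revenue": 0.25, "share_growth": 0.25, "strategic_fit": 0.25, "profit": 0.25}
for name, attrs in segments.items():
    print(name, score(attrs, equal))

# Shift the decision criteria: weight strategic fit more heavily and rescore
tilted = {"revenue": 0.15, "share_growth": 0.15, "strategic_fit": 0.55, "profit": 0.15}
for name, attrs in segments.items():
    print(name, score(attrs, tilted))
```

With equal weights of 0.25 the Cash Cow segment scores highest (6.0 versus 5.5 and 5.0); shifting weight toward strategic fit flips the choice to the Star of the Future segment, which is exactly the kind of reversal the weight sensitivity analysis is meant to expose.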
