Presentation Transcript


  1. DSS Decision Support System Tutorial: An Instructional Tool for Using the DSS

  2. What is the DSS? A decision support system that provides guidance regarding: • Appropriate regulatory decisions for onsite technologies. • How much data (and of what quality) are needed before actual datasets are assessed. • A clearer understanding of the assumptions made about the value given to differing types of datasets and the relative weights of important data attributes, such as the number of samples, flow rates tested, etc.

  3. DSS Introduction • Regulatory agencies, along with the scientific community and other experts, ultimately determine the value of data characteristics. • Ensures there are no unstated assumptions, which can lead to error.

  4. How the DSS Works (1/8) • This approach lets you assess which types of studies will be most important for making the regulatory decision. • This assessment can be done even before the research studies are designed and implemented, if desired.

  5. How the DSS Works (2/8) • Once the studies have been completed, the data collected are scored. • These data scores are combined with the data characteristic (also called data attribute) weights and the rankings for that type of study or dataset.

  6. How the DSS Works (3/8) • Once the required score level is decided, the specific decision endpoint must be defined. • A decision endpoint might be that “the pretreatment technology reduces BOD5 and TSS to less than 30 mg/L, 85 percent of the time when used in a New England climate.”
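An endpoint phrased this way can be checked mechanically once effluent data exist. The sketch below is illustrative only; the sample values, limit, and function name are assumptions, not part of the DSS:

```python
# Check a decision endpoint of the form "effluent below 30 mg/L at least
# 85 percent of the time" against a list of samples (values in mg/L).
# All numbers here are hypothetical.
def meets_endpoint(samples_mg_per_l, limit=30.0, required_fraction=0.85):
    passing = sum(1 for s in samples_mg_per_l if s < limit)
    return passing / len(samples_mg_per_l) >= required_fraction

bod5 = [22, 18, 35, 27, 25, 29, 31, 24, 26, 28]  # 8 of 10 samples below 30
result = meets_endpoint(bod5)  # 0.80 < 0.85, so the endpoint is not met
```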

  7. How the DSS Works (4/8) • Different types of studies or datasets (journal articles, manufacturer datasets, regulatory compliance datasets, test center datasets) are ranked for that decision endpoint. • The value of each important data characteristic (number of samples, flow range tested, etc.) is evaluated and weighted. • This is done before the panel actually evaluates the real data submitted for consideration.

  8. How the DSS Works (5/8) • An expert panel approach is used. • Onsite wastewater scientists must be included on the panel. • The panel sets a quantitative score level that must be reached once the datasets are evaluated.

  9. How the DSS Works (6/8) • The quantitative score required for approval can vary (e.g., 8 pts, 16 pts, 24 pts) depending upon the type of approval. • A higher score (meaning more data of higher quality) would be required for unrestricted use than for piloting of 5-15 systems.

  10. How the DSS Works (7/8) • The numbers are crunched and a total score is calculated for each study completed by the researchers. • Each study is called a “measurement endpoint.” • The total scores for all of the measurement endpoints (research studies) are summed and then compared to the required score level for a particular type of approval (piloting, unrestricted use, etc.). • All of this is done by the expert panel.

  11. How the DSS Works (8/8) • The DSS uses 15 spreadsheets. • There are three basic types of spreadsheets. • The DSS is a Decision Support System, not a decision-making system, so it only provides guidance. The regulatory agency, technical advisory group, or other organization must still evaluate other intangibles before deciding whether to approve the technology.

  12. DSS Spreadsheets • (1-3) Records study ranks, data attribute weights, and data scores for each expert panel member. Multiple copies of these three spreadsheets will be needed. • (4-11) Takes the data weighting values from individual panel members and uses them to calculate weights for the entire panel for each of the eight data quality/quantity attributes. • (12-15) Summary sheets and calculations of scores for measurement endpoints and the ultimate decision endpoint.

  13. 7 Steps to the Overall Approach (Steps 1-4) 1. Setting a final score suggested as adequate for regulatory decision-making. 2. Determining and defining the decision endpoint (e.g., a certain pretreatment technology will provide BOD5 and TSS effluent levels < 30 mg/L 90% of the time when used at a single-family home). 3. Ranking numerically each of 10 different types of studies. 4. Assigning numerical weights to each of eight data quality/quantity attributes relative to that particular decision endpoint, prior to assessing data from any specific study. Note that the spreadsheets for these steps should be completed before any actual research results are evaluated.

  14. 7 Steps to the Overall Approach (contd.) 5. Evaluating data from specific studies of the technology in question. Here the actual research data are assessed to determine their value for each of the eight data attributes for the decision endpoint, and numerical scores are assigned. 6. Summing the results of the calculations for data value and weight for all studies of that technology. Each study is a measurement endpoint. 7. Comparing the calculated scores to the predetermined set of standard values suggested in Step 1 for regulatory decision-making.

  15. Step 1: Numerical Scores for Decision-making • Three basic types of approvals or regulatory decisions. • Scores are based on the expected point levels from various studies. These approval levels may be split further if desired. • The maximum possible point level for any one study is 8. • Example: Piloting/testing confirmation use – 6 points. Extended innovative product use – 12 points. General accepted use – 24 points.

  16. Step 2: Defining a Decision Endpoint A specific decision endpoint must be defined since the same data could be evaluated for different decision endpoints (e.g., performance in an Arizona climate vs. in New England, or 95% removal of BOD vs. 70% removal). The selection of the decision endpoint will influence the value assigned to different data quality attributes. Therefore, specific conditions need to be clarified as part of a decision endpoint. The decision endpoint must be defined before proceeding to Step 3 and beyond.

  17. Step 3: Ranking Each Study • Spreadsheet #1. • Each panelist ranks 10 study types from 0.1 to 1.0 for the decision endpoint developed in Step 2. • A value of 1.0 is given to the most appropriate study type for the decision being made, 0.1 to the least appropriate study type. • Rankings from each of the expert panel members are averaged and used to determine the value for that type of study or dataset.
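The averaging in this step is simple arithmetic. A minimal sketch, assuming three panelists and hypothetical rankings on the 0.1 to 1.0 scale (the study-type names are illustrative):

```python
# Average each study type's rank across the expert panel (Step 3; the
# averages end up on Spreadsheet #12). All rankings below are hypothetical.
panel_rankings = {
    "Test center dataset": [1.0, 0.9, 1.0],
    "Journal article": [0.8, 0.9, 0.7],
    "Manufacturer dataset": [0.5, 0.4, 0.6],
}

average_rank = {
    study: sum(ranks) / len(ranks) for study, ranks in panel_rankings.items()
}
```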

  18. Step 4: Weights of the Data Quality/Quantity Attributes • Each panel member fills out Spreadsheet #2. • Spreadsheet #2 allows each expert panel member to weight each of 8 different data attributes for each of the 10 study types. • Spreadsheets #4-#11 are where these panel assessments are averaged. • The expert panel assigns values based on the intrinsic value of different data quality characteristics or attributes, not the actual data collected for a study. • The scaling system ranges from 1.0 to 8.0, with a score of 8.0 being the most valuable data characteristic. • These weights will change depending upon the decision endpoint defined in Step 2.

  19. Step 5: Determining the Value for the Research Study Data • Before getting to this step, the predetermined score levels needed for approval and the decision endpoint have been carefully defined, the 10 study types ranked, and the 8 data attributes weighted. • Each panel member fills out Spreadsheet #3 for each research dataset that is submitted, assessing whether the data support the decision endpoint. • Support is scored on a scale of 0.0 to 1.0, with 1.0 being the most supportive of the decision endpoint. • The data score and data attribute weight (for each of the 8 data attributes) together give a good evaluation of whether the decision endpoint is supported by the data. • Spreadsheet #14 is where these assessments are brought together.
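The slides do not spell out the arithmetic, but one reading consistent with the 8-point-per-study maximum in Step 1 is: a study's total equals its study-type rank times the sum of (attribute weight × data score) over the eight attributes, with the weights summing to 8. The sketch below uses that assumed formula and hypothetical numbers:

```python
# Assumed per-study ("measurement endpoint") score:
#   total = study_rank * sum(weight_i * score_i) over the 8 data attributes.
# With weights summing to 8, rank <= 1.0, and scores <= 1.0, the maximum
# possible study score is 8, matching the cap stated in Step 1.
def study_score(rank, weights, scores):
    """rank: 0.1-1.0; weights: eight attribute weights; scores: 0.0-1.0 each."""
    assert len(weights) == len(scores) == 8
    return rank * sum(w * s for w, s in zip(weights, scores))

weights = [2.0, 1.5, 1.0, 1.0, 1.0, 0.5, 0.5, 0.5]  # hypothetical; sums to 8
scores = [0.9, 0.8, 0.7, 0.6, 1.0, 0.5, 1.0, 0.8]   # hypothetical data scores
total = study_score(0.9, weights, scores)
```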

  20. Step 6: Summing the Results of the Calculations • Spreadsheet #15 quantitatively calculates the results of the expert panel evaluations. • The averages from spreadsheets #12 and #14 are combined to get the overall total point level that is used to assess the decision endpoint.

  21. Step 7: Calculated Scores vs. Predetermined Standards Use the numbers below from Step 1 and compare them to the overall total point levels obtained from the spreadsheets and calculations in Steps 2 through 6. • Piloting/testing confirmation use – 6 points. • Extended innovative product use – 12 points. • General accepted use – 24 points. Note: If three studies resulted in totals of 5.1, 3.0, and 4.3 points, the total calculated point level would be 12.4. This would support “Extended innovative product use” (12 points required), but would not be enough for “General accepted use” (24 points required).
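The comparison in this step reduces to summing the per-study totals and finding the highest approval level whose required score is met. A sketch using the Step 1 thresholds and the three study totals from the note above:

```python
# Step 7: sum the measurement-endpoint totals and pick the highest approval
# level whose required score (from Step 1) is met.
APPROVAL_LEVELS = [
    ("General accepted use", 24.0),
    ("Extended innovative product use", 12.0),
    ("Piloting/testing confirmation use", 6.0),
]

def approval_level(study_totals):
    grand_total = sum(study_totals)
    for name, required in APPROVAL_LEVELS:
        if grand_total >= required:
            return name, grand_total
    return None, grand_total  # below even the piloting threshold

level, total = approval_level([5.1, 3.0, 4.3])  # the slide's example studies
```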

  22. Spreadsheet #1 Input Study Rankings • After a decision endpoint has been defined, each panelist gets their own spreadsheet. • Each study type is given a rank between 0.1 and 1.0, according to its applicability as judged by the panelist. • Rankings are decided before any data are evaluated.

  23. Spreadsheet #1

  24. Spreadsheet #2 Data Attribute Weights • Compare the eight data attributes within each of the 10 study types, weighing the 8 attributes within each study type on a scale of 0.0 to 8.0. • Calculate the relative weights for each type of study so they are on a relative scale of 0.0 to 8.0. • The overall importance of each of the eight data attributes (weighted from 0.0 to 8.0) is now determined for each of the 10 study types for that particular decision endpoint.
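The rescaling bullet does not give a formula. One plausible reading, consistent with an 8-point maximum per study, is to normalize each panelist's raw weights for a study type so the eight weights sum to 8.0; the sketch below makes that assumption with hypothetical inputs:

```python
# Assumed normalization: rescale a panelist's raw attribute weights so the
# eight weights for a study type sum to 8.0. The slides only say the result
# is on a "relative scale of 0.0 to 8.0"; this formula is an assumption.
def rescale_weights(raw_weights, target_sum=8.0):
    total = sum(raw_weights)
    return [w * target_sum / total for w in raw_weights]

raw = [8.0, 6.0, 4.0, 4.0, 4.0, 2.0, 2.0, 2.0]  # hypothetical panelist input
relative = rescale_weights(raw)  # scaled by 8/32 = 0.25
```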

  25. 8 Data Characteristics or Attributes • Performance data • Flow rates • Degree of replication • Experimental control • Environmental conditions • Level of O & M provided • Independence of the study • Degree of peer-review

  26. Spreadsheet #2

  27. Spreadsheet #3 Data Evaluation • Evaluate the data submitted. Each study that was conducted is a measurement endpoint. • Determine which of the 10 study types is appropriate for that dataset or study. • Data are evaluated by considering each of the 8 data attributes and scored specifically for that type of study; there is a different spreadsheet for each research study that was conducted. • The numeric scale ranges from 0.0 to 1.0. • An evaluation is made of the data submitted for each particular study or dataset for the decision endpoint at hand.

  28. Spreadsheet #3

  29. Spreadsheet #4: Performance Weight Summary • Each panelist’s ratings from spreadsheet #2 are recorded on spreadsheet #4. • Take the numbers from the first column on spreadsheet #2, Performance Data. • The average weight will then be automatically calculated by the program.

  30. Spreadsheet #4

  31. Spreadsheet #5: Flow Data Weight Summary • Each panelist’s ratings from each spreadsheet #2 are recorded on spreadsheet #5. • Take the numbers from the second column on spreadsheet #2, Flow Data. • The average weight will then be automatically calculated by the program.

  32. Spreadsheet #5

  33. Spreadsheet #6: Replication Weight Summary • Each panelist's ratings from each spreadsheet #2 are recorded on spreadsheet #6. • Take the numbers from the third column on spreadsheet #2, Replication. • The average weight will then be automatically calculated by the program.

  34. Spreadsheet #6

  35. Spreadsheet #7: Experimental Control Weight Summary • Each panelist's ratings from each spreadsheet #2 are recorded on spreadsheet #7. • Take the numbers from the fourth column on spreadsheet #2, Experimental Control. • The average weight will then be automatically calculated by the program.

  36. Spreadsheet #7

  37. Spreadsheet #8: Environmental Conditions Weight Summary • Each panelist's ratings from each spreadsheet #2 are recorded on spreadsheet #8. • Take the numbers from the fifth column on spreadsheet #2, Environmental Conditions. • The average weight will then be automatically calculated by the program.

  38. Spreadsheet #8

  39. Spreadsheet #9: O & M Weight Summary • Each panelist's ratings from each spreadsheet #2 are recorded on spreadsheet #9. • Take the numbers from the sixth column on spreadsheet #2, O&M Conducted. • The average weight will then be automatically calculated by the program.

  40. Spreadsheet #9

  41. Spreadsheet #10: Third Party Weight Summary • Each panelist's ratings from each spreadsheet #2 are recorded on spreadsheet #10. • Take the numbers from the seventh column on spreadsheet #2, Third Party Data. • The average weight will then be automatically calculated by the program.

  42. Spreadsheet #10

  43. Spreadsheet #11: Peer Review Weight Summary • Each panelist's ratings from each spreadsheet #2 are recorded on spreadsheet #11. • Take the numbers from the eighth column on spreadsheet #2, Peer Review. • The average weight will then be automatically calculated by the program.

  44. Spreadsheet #11

  45. Spreadsheet #12: Ranking Compilation • Take each panelist’s Spreadsheet #1 rankings and record them on Spreadsheet #12. • The average expert panel ranking score for that study type will be calculated automatically by the program.

  46. Spreadsheet #12

  47. Spreadsheet #13: Data Weight Compilation • Spreadsheet #13 records all of the averages from Spreadsheet #4 to Spreadsheet #11. • After all the averages for each data attribute are recorded, the program calculates the total of the averages for each study.

  48. Spreadsheet #13

  49. Spreadsheet #14: Data Score Compilation • For each data attribute, enter the data score in each row from Spreadsheet #3 for each expert panel member. • The program will then average the data score for each data attribute.

  50. Spreadsheet #14: Data Score Compilation • Each study type has its own spreadsheet, so each panelist should have 10 Spreadsheet #14s if each of the 10 study or dataset types is conducted. • Rankings are assigned by each panelist for all 10 types of studies, and weights are given for all eight data attributes. • Actual data scores will only be assigned for the research studies that are submitted.
