Panel Presentation Accuracy : A Trial Judge’s Perspective - PowerPoint PPT Presentation

Presentation Transcript

  1. Panel Presentation. Accuracy: A Trial Judge’s Perspective. Hon. Elizabeth A. Jenkins, September 13, 2005. Any views expressed in this presentation are solely those of the author and do not represent the opinions of any court or the judiciary as a whole. The cases cited are for illustrative purposes only.

  2. “[T]here are important differences between the quest for truth in the courtroom and the quest for truth in the laboratory. Scientific conclusions are subject to perpetual revision. Law, on the other hand, must resolve disputes finally and quickly.” Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 596-97 (1993).

  3. Daubert & the Judge’s Role • Under Daubert, the trial judge acts as a gatekeeper for the evidence permitted at trial. Daubert, 509 U.S. at 597. • “The trial judge [has] the task of ensuring that an expert’s testimony both rests on a reliable foundation and is relevant to the task at hand.” Daubert, 509 U.S. at 597.

  4. “Vigorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence.” Daubert, 509 U.S. at 596.

  5. “Weight and credibility are the province of the jury.” U.S. v. Davis, 103 F.3d 660, 674 (8th Cir. 1996).

  6. To determine whether a theory or technique is scientifically valid and will assist the trier of fact, the trial judge should ordinarily consider: 1. Whether the theory or technique can be (and has been) tested; 2. Whether it has been subjected to peer review and publication; 3. The theory or technique’s known or potential rate of error and the existence and maintenance of standards controlling the technique’s operation; 4. The theory or technique’s general acceptance within the relevant scientific community. Daubert, 509 U.S. at 593-94.

  7. “The frequency with which a technique leads to erroneous results will be [an] important component of reliability.” U.S. v. Downing, 753 F.2d 1224, 1239 (3rd Cir. 1985).
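The error-rate figures that courts weigh can be made concrete with a small, purely hypothetical calculation (the study numbers below are invented for illustration and are not drawn from any cited case): an observed error rate is simply errors divided by trials, and a confidence interval conveys how much uncertainty a small validation study leaves behind.

```python
import math

def error_rate_ci(errors, trials, z=1.96):
    """Observed error rate and an approximate 95% confidence interval,
    using the normal approximation to the binomial (a rough sketch)."""
    p = errors / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical validation study: 12 erroneous results in 200 trials.
rate, lo, hi = error_rate_ci(errors=12, trials=200)
print(f"observed error rate: {rate:.1%} (approx. 95% CI: {lo:.1%} to {hi:.1%})")
```

Even this toy example shows why a bare point estimate can mislead: a 6% observed rate from a 200-trial study is consistent with a true rate anywhere from roughly 3% to 9%, which is the kind of gap cross-examination can probe.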

  8. What is an “unknown” error rate? • A technique’s error rate could be “unknown” if the expert does not know what the technique’s error rate is. • A technique’s error rate could be “unknown” if there have been no studies of the error rate.

  9. What are the consequences of an unknown error rate? • Evidence from a linguist that the defendant’s voice was not on tape recordings was excluded for lack of error rate information (among other things). U.S. v. Salimonu, 182 F.3d 63, 73-74 (1st Cir. 1999). • Evidence of the “oncogene theory” that cancer was caused by toxic emissions was excluded in part because the theory’s error rates were unknown. Wills v. Amerada Hess Corp., 379 F.3d 32, 39 (2nd Cir. 2004). • But see U.S. v. Bonds, 12 F.3d 540, 560 (6th Cir. 1993) (evidence as to the error rate of DNA matches of bloodstains was “troubling,” but the evidence was admitted due to other indicia of reliability).

  10. What about error rates for the same technique? • Various studies of spectrographic voice identifications found widely differing error rates, some very high. U.S. v. Smith, 869 F.2d 348, 353 (7th Cir. 1989) (however, other indicia of reliability supported admission of the expert testimony).

  11. Inquiries into error rates will inevitably be case-specific and technique-specific.

  12. Expert testimony that the individual in a surveillance film was not the defendant was excluded because it had a “potentially very high” rate of error. U.S. v. Dorsey, 45 F.3d 809, 815 (4th Cir. 1995). • Testimony on an ink analysis technique used to determine the age of a document was excluded for being only “moderately sensitive to error” and having a 32% error rate. Equal Employment Opportunity Commission v. Ethan Allen, Inc., 259 F.Supp.2d 625, 635 (2003).

  13. Extrapolating causation in humans from experimental animal studies has “an extraordinarily high rate of error” and the testimony was excluded. Wade-Greaux v. Whitehall Laboratories, Inc., 874 F.Supp. 1441, 1480 (D. V.I. 1994). • Much of the expert testimony on dosage of radiation received by populations close to Three Mile Island had a “potentially high rate of error” and was excluded. In re TMI Litigation Cases Consolidated II, 911 F.Supp. 775, 796 (M.D. Pa. 1996).

  14. Are error rates always applicable? • A known rate of error was not applicable because the expert testimony involved theories, not any particular technique. Sorensen By and Through Dunbar v. Shaklee Corp., 31 F.3d 638, 649 (8th Cir. 1994). • Experts’ assumptions that are not supported by the record have a great potential for error. Id.

  15. Maximizing effectiveness in a Daubert inquiry • Careful preparation by the parties and their experts for a Daubert hearing is critical. • Some courts will enter detailed orders prior to a Daubert hearing. See, e.g., In re TMI Consolidated Proceedings, 1995 WL 848519 (M.D. Pa. November 9, 1995). • Under Federal Rule of Evidence 706, a federal court may appoint an expert in any civil or criminal case.

  16. The consequences of a lack of preparation can be dire. • In one case, six of the eight studies presented were supported by insufficient or no evidence of their error rates. In re TMI Litigation Cases Consolidated II, 911 F.Supp. 775 (M.D. Pa. 1996). Seven of the studies were excluded. Id. at 829-30.

  17. Questions for Further Thought • Should there be a relationship between the different burdens of proof in civil and criminal cases and an acceptable error rate? • Should there be a relationship between the error rate of a scientific method and the purpose for which it is offered (e.g., identification, causation, mitigation, intent, or credibility)?