
Tutorial on AI & Law

Explore the use of artificial intelligence in predicting judicial decisions, its limitations, and the history of AI in the field of law.


Presentation Transcript


  1. https://people.dsv.su.se/~jpalme/reports/right-wrong.html Tutorial on AI & Law Henry Prakken SIKS course Trends and Topics in Multi-Agent Systems Utrecht, The Netherlands June 6th, 2019

  2. Some press on AI & Law Lawyers could be the next profession to be replaced by computers (cnbc.com) AI is doing legal work. But it won’t replace lawyers, yet. (nytimes.com) The robot lawyers are here – and they’re winning (bbc.com) ”Artificially Intelligent ‘judge’ developed which can predict court verdicts with 79% accuracy” (…) “Computer scientists … developed an algorithm which can not only weigh up legal evidence, but also moral considerations.” (Daily Telegraph 24 Oct 2016)

  3. N. Aletras, D. Tsarapatsanis, D. Preoţiuc-Pietro & V. Lampos (2016). Predicting judicial decisions of the European Court of Human Rights: a natural language processing perspective. PeerJ Computer Science 2:e93, DOI 10.7717/peerj-cs.93 The ECHR ‘predictor’ • Trained on full text of decisions concerning three articles in the European Convention on Human Rights. • TASK: did the court rule that article X was violated? • Results: System’s answer correct in 79% of the cases. • But: • Chance gives 50% • “Violation” scores 84% • The system does not predict outcomes • It needs most of the decision to be predicted • The system cannot explain its answers • Shallow statistical NLP
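
As a rough illustration of the kind of system described on this slide, here is a minimal sketch of a bag-of-words text classifier (TF-IDF n-grams plus a linear SVM, broadly in the spirit of the paper); the texts, labels and resulting accuracy are invented placeholders, not real case data.

```python
# Minimal sketch of an ECtHR-style 'violation' classifier: TF-IDF n-gram
# features fed into a linear SVM. All texts and labels below are invented
# placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

texts = [
    "the applicant was detained without judicial review",      # placeholder
    "the domestic courts examined the complaint thoroughly",   # placeholder
    "the public prosecutor ordered the search of the home",    # placeholder
    "the applicant received a fair hearing before the court",  # placeholder
] * 10  # repeated so that cross-validation has enough samples
labels = ["violation", "no violation", "violation", "no violation"] * 10

# Word 1- and 2-grams weighted by TF-IDF, classified with a linear SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())

# Any accuracy figure has to be read against the class balance: with a
# 50/50 dataset, always guessing one label already scores 50%.
print("mean accuracy:", cross_val_score(model, texts, labels, cv=5).mean())
```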

  4. M. Medvedeva, M. Vols & M. Wieling (2018). Judicial decisions of the European Court of Human Rights: looking into the crystal ball. In Proceedings of the Conference on Empirical Legal Studies in Europe, 2018. Follow-up research • Most relevant word combinations: • For ‘violation’: • Death penalty, that the applicant, the public prosecutor, … • For ‘No violation’: • The first applicant, district prosecutor office, the urus martan, …

  5. D.M. Katz, M.J. Bommarito & J. Blackman (2017), ‘A general approach for predicting the behavior of the Supreme Court of the United States’. PLoS ONE 12(4): e0174698 Predicting outcomes of SCOTUS decisions • Predicting whether SCOTUS overturns decision of lower court or not • On the basis of a database on SCOTUS: • Which presidents appointed the judges? • Personal data about judges • Trends in SCOTUS decisions • … • 70% of decisions correctly predicted • But chance is 50% correct • Most data are not about the merits of the case
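
For contrast with the text-based approach, a hedged sketch of the metadata-driven style of prediction described here: a tree ensemble trained on case and judge features rather than on the merits. Feature names, values and labels are invented, so the score only illustrates the chance baseline.

```python
# Sketch of outcome prediction from case/judge metadata with a tree ensemble.
# All features and labels are randomly generated placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.integers(0, 2, n),   # party of the appointing president (toy)
    rng.integers(0, 13, n),  # circuit of origin (toy)
    rng.integers(0, 2, n),   # direction of the lower-court decision (toy)
    rng.normal(size=n),      # recent reversal-rate trend (toy)
])
y = rng.integers(0, 2, n)    # 1 = lower-court decision reversed (random labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# With random labels this hovers around the 50% chance baseline, the
# reference point against which a reported 70% must be judged.
print("mean accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```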

  6. Judicial decision making • Judges don’t predict but decide • on the basis of authoritative sources • justifying their decisions • Having heard the parties • Can this be modelled with AI systems?

  7. 1. History of AI & Law

  8. Institutional history • Florence conferences 1982, 1985, 1989 • ICAIL conferences since 1987 • JURIX conferences since 1988 • AI & Law journal since 1992 • … • Two landmark papers: • Taxman (Thorne McCarty, 1977): precedents • British Nationality Act (Marek Sergot et al., 1985): legislation Thorne McCarty Marek Sergot

  9. History of AI: symbolic vs. data centric • Symbolic/cognitive AI: programming explicit models of aspects of human cognition • Advantage: transparent • Disadvantage: ‘symbolic’ input needed • Statistical, data-centric AI: • Automatic learning of patterns from data

  10. Some history on cognitive, symbolic AI • 1950 - 1970: modelling general intelligence • Newell & Simon’s General Problem Solver • 1970 - 1990: modelling expertise in limited domains • ‘Expert systems’, later ‘knowledge-based systems’ • Knowledge about a problem domain • Reasoning mechanisms for solving decision problems • E.g. MYCIN (diagnosis and treatment of infectious diseases) • Since 1980: optimism about legal applications: • Model the rules in logic, reason logically

  11. Three levels in legal decision making • Determining the facts of the case (legal proof) • Classifying the facts under the conditions of a statutory rule (legal interpretation) • Applying the statutory rule

  12. “Vehicles are not allowed in the park” • Facts: evidence problems • Legal conditions: general terms • Legal rules: exceptions, rule conflicts, … Legal reasoning is argumentation

  13. 2a. Modelling legal reasoning – ‘deductive’ rule-based systems

  14. Legal knowledge-based systems in practice • Quite a few ‘deductive’ rule-based systems in public administration • Don’t automate legal reasoning, but automate the logic of regulations • Early days: proof of facts, classification and interpretation largely left to user • Currently: • facts often taken from case files and government databases • Policy or interpretation rules added • (Almost?) non-existent in court and advocacy
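
To make the contrast concrete, here is a minimal sketch (with an invented benefit rule, not an actual regulation) of this ‘deductive’ style: the logic of the regulation is coded directly, and establishing the facts is left to the case file or the user.

```python
# Sketch of a 'deductive' rule-based system: the regulation's logic is
# encoded directly; the facts come from a case file. The rule below is
# invented for illustration.

def entitled_to_benefit(facts: dict) -> bool:
    """Hypothetical art. X: a resident of pension age with a yearly income
    below the threshold is entitled to the benefit."""
    return (
        facts["is_resident"]
        and facts["age"] >= 67
        and facts["yearly_income"] < 15_000
    )

case_file = {"is_resident": True, "age": 70, "yearly_income": 12_000}
print(entitled_to_benefit(case_file))  # True
```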

  15. Rules: Oracle Policy Automation Slide by Giovanni Sartor

  16. AI & Law research on rule-based reasoning (1) • Much research on the logical structure of rules and regulations • Deontic, temporal and action logic • Rules and exceptions, conflicting rules (nonmonotonic logic) • … • Very few practical applications Guido Governatori H. Prakken & G. Sartor, Law and logic: a review from an argumentation perspective. Artificial Intelligence 227 (2015): 214-245.

  17. AI & Law research on rule-based reasoning (2) Trevor Bench-Capon 1987 • Research on development methods • Trevor Bench-Capon et al. (1987-1994) • Use of ontologies • Intermediate KR formats • ‘Isomorphic’ representation (mirror structure of regulations, keep track of relations between source text and computational model) • Benefits validation, maintenance and explanation • Tom van Engers et al. (2001 -) • … Tom van Engers T.J.M. Bench-Capon & F.P. Coenen, Isomorphism and legal knowledge-based systems. Artificial Intelligence and Law, 1 (1992): 65-86.
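
A tiny sketch of what an ‘isomorphic’ representation can look like: each rule in the model carries an explicit pointer to the provision it encodes, which is what supports validation, maintenance and explanation. The act name, article numbers and rule content are hypothetical.

```python
# Isomorphic representation in miniature: rules mirror the structure of the
# source text and record where they come from. All content is hypothetical.
RULES = [
    {"source": "Hypothetical Act, art. 3(1)",
     "if": ["resident", "pension_age"],
     "then": "entitled_to_benefit"},
    {"source": "Hypothetical Act, art. 3(2)",
     "if": ["income_above_threshold"],
     "then": "not_entitled_to_benefit"},
]

def explain(conclusion: str) -> list:
    """Cite the provisions behind a conclusion via the recorded sources."""
    return [r["source"] for r in RULES if r["then"] == conclusion]

print(explain("entitled_to_benefit"))  # ['Hypothetical Act, art. 3(1)']
```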

  18. 2b. Modelling legal reasoning – Formal models of argumentation

  19. Attack on conclusion [argument diagram; node labels: Employer is liable; Employer is not liable; Employee was careless; Rule 2; Employer breached duty of care; Employee had work-related injury; Rule 1; Employee did not secure stairs; No safety instructions]

  20. Attack on premise [same diagram, extended with: Employee had no work-related injury; Injury caused by poor physical condition]

  21. Attack on inference [same diagram, extended with: Colleague says so; Colleague is not credible; C is friend of claimant]

  22. Indirect defence [same diagram, extended with: Employee secured the stairs; Camera evidence]

  23. [The complete argument graph of the preceding slides]

  24. 1. An argument is In iff all arguments defeating it are Out. 2. An argument is Out iff it is defeated by an argument that is In. Grounded semantics minimises the In labelling. Preferred semantics maximises the In labelling. Dung 1995 [example graph with arguments A, B, E, D, C] P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n-person games. Artificial Intelligence, 77:321–357, 1995.
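
The two labelling clauses above can be computed directly. Below is a small sketch of a grounded labelling for an abstract argumentation framework with five arguments; the defeat edges are chosen for illustration and are not taken from the slide.

```python
# Grounded labelling of a Dung-style abstract argumentation framework:
# repeatedly label an argument In when all its defeaters are Out, and Out
# when some defeater is In; whatever remains is Undecided.
arguments = {"A", "B", "C", "D", "E"}
attacks = {"B": {"A"}, "C": {"B"}, "D": {"B"}, "E": {"D"}}  # attacker -> targets (illustrative)

def grounded_labelling(arguments, attacks):
    attackers = {a: {b for b in arguments if a in attacks.get(b, set())}
                 for a in arguments}
    label = {a: "UNDEC" for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if label[a] != "UNDEC":
                continue
            if all(label[b] == "OUT" for b in attackers[a]):   # In iff all defeaters Out
                label[a], changed = "IN", True
            elif any(label[b] == "IN" for b in attackers[a]):  # Out iff some defeater In
                label[a], changed = "OUT", True
    return label

print(grounded_labelling(arguments, attacks))
# {'A': 'IN', 'B': 'OUT', 'C': 'IN', 'D': 'OUT', 'E': 'IN'} (key order may vary)
```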

  25. [The employer-liability argument graph of the preceding slides, with its five arguments labelled A, B, C, D and E]

  26. [The same argument graph, annotated with the underlying rules] • IF employer breached duty of care & employee had work-related injury THEN employer is liable, UNLESS employee was careless • IF no safety instructions THEN employer breached duty of care • IF employee did not secure stairs THEN employee was careless • IF employee did not secure a normal stairs THEN employee was NOT careless • IF an employee’s injury is caused by poor physical condition THEN employee had NO work-related injury • IF witness W says P THEN P, UNLESS W is not credible • IF witness W is friend of claimant THEN W is not credible • Further claims in the graph: Employee had no work-related injury; Employee was not careless; Colleague is not credible; Colleague says so; It was a normal stairs; Injury caused by poor physical condition; C is friend of claimant

  27. Structured argumentation: ASPIC+ Giovanni Sartor Sanjay Modgil & me • Chain inferences into arguments with • Deductive inference rules • Premises guarantee conclusion • Defeasible inference rules • Premises create presumption for conclusion • Attack arguments with counterarguments • Use preferences to see which attacks result in defeats • Apply Dung (1995) to arguments + defeat S. Modgil & H. Prakken, A general account of argumentation with preferences. Artificial Intelligence 195 (2013): 361-397. A.J. Hunter (ed.), Tutorials on Structured Argumentation. Special issue of Argument and Computation 5 (2014).
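
A drastically simplified sketch of that pipeline, assuming only defeasible rules, no strict rules, no preferences and no undercutting: rules are chained into derived conclusions, arguments with contradictory conclusions rebut each other, and the result is evaluated with a grounded labelling. The rule content loosely follows the employer-liability example; this illustrates the workflow, not ASPIC+ itself.

```python
# Compressed sketch: chain defeasible rules into arguments, let contradictory
# conclusions rebut each other, evaluate with grounded semantics.
# Preferences, strict rules, undercutting and sub-argument structure are omitted.
facts = {"no safety instructions", "work-related injury", "did not secure stairs"}
rules = [  # (premises, conclusion); "not " marks negation
    ({"no safety instructions"}, "breached duty of care"),
    ({"breached duty of care", "work-related injury"}, "liable"),
    ({"did not secure stairs"}, "careless"),
    ({"careless"}, "not liable"),
]

def negates(a, b):
    return a == "not " + b or b == "not " + a

# Forward-chain the rules; every derivable conclusion stands for an argument.
concls = set(facts)
changed = True
while changed:
    changed = False
    for prem, concl in rules:
        if prem <= concls and concl not in concls:
            concls.add(concl)
            changed = True

attacks = {(a, b) for a in concls for b in concls if negates(a, b)}

# Grounded labelling over the rebuttal graph.
label = {a: "UNDEC" for a in concls}
changed = True
while changed:
    changed = False
    for a in concls:
        if label[a] != "UNDEC":
            continue
        attackers = [x for (x, y) in attacks if y == a]
        if all(label[x] == "OUT" for x in attackers):
            label[a], changed = "IN", True
        elif any(label[x] == "IN" for x in attackers):
            label[a], changed = "OUT", True

print(label)  # 'liable' and 'not liable' rebut each other and stay UNDEC
```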

  28. 2c. Modelling legal reasoning – Reasoning about evidence

  29. Models of proof that leave room for uncertainty • Argumentation • Construct an argument for the statement to be proven from the available evidence • Defend this argument against all possible counterarguments • Story-based models • Construct stories that explain the evidence • Choose the best • The most coherent, giving the best explanation of the evidence • Bayesian models • Apply Bayesian probability theory to determine how probable the charge is given the evidence
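
For the third family, a toy worked example of a Bayesian update: the probability of the charge given one piece of evidence, computed with Bayes’ rule. All numbers are invented.

```python
# Toy Bayesian update: P(guilt | evidence) from a prior and two likelihoods.
# All numbers are invented for illustration.
prior_guilt = 0.30           # P(guilt) before seeing the evidence
p_e_given_guilt = 0.80       # P(evidence | guilt)
p_e_given_innocent = 0.05    # P(evidence | innocence)

p_evidence = (p_e_given_guilt * prior_guilt
              + p_e_given_innocent * (1 - prior_guilt))
posterior = p_e_given_guilt * prior_guilt / p_evidence
print(f"P(guilt | evidence) = {posterior:.2f}")  # ≈ 0.87
```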

  30. My favourite approach • There is no single general model of rational legal proof • A toolbox or hybrid approach is needed • Start with constructing alternative scenarios • Then zoom in on details with argumentation or Bayes F.J. Bex, A.R. Mackor & H. Prakken (eds), Models of Rational Proof in Criminal Law. Special issue of Topics in Cognitive Science, to appear 2019.

  31. AI support for legal proof: the KA bottleneck • Required knowledge (common sense!) is hard to manually acquire and code • And too diverse to automatically learn from natural-language sources • Some limited applications (e.g. BNs for frequent types of forensic evidence) • Most AI applications are ‘pedagogic’ or ‘mind mapping’

  32. 2d. Modelling legal reasoning – Case-based reasoning in factor-based domains

  33. Factor-based reasoning • In legal classification and interpretation there are often no clear rules • Often there are only factors: tentative reasons pro or con a conclusion • Often to different degrees • Factors are weighed in cases, which become precedents • But how do judges weigh factors? • And what if a new case does not perfectly match a precedent?

  34. Example from US law: misuse of trade secrets • Some factors pro misuse of trade secrets: • F4 Agreed-Not-To-Disclose • F6 Security-Measures • F18 Identical-Products • F21 Knew-Info-Confidential • … • Some factors con misuse of trade secrets: • F1 Disclosure-In-Negotiations • F16 Info-Reverse-Engineerable • … HYPO Ashley & Rissland 1985-1990 CATO Aleven & Ashley 1991-1997 Vincent Aleven

  35. HYPO Ashley & Rissland 1987-1990 • Representation language: • Cases: decision (p or d) + p-factors and d-factors • Current Fact Situation: factors • Arguments: • Citing (for its decision) a case on its similarities with CFS • Distinguishing a case on its differences with CFS • Taking into account which side is favoured by a factor
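
A minimal sketch of this representation and of the two argument moves, with factor sets matching the schematic example on the next slides; the helper functions and their names are my own simplification of the HYPO idea, not Ashley & Rissland’s actual system.

```python
# HYPO-style sketch: cases are decisions plus pro-plaintiff ("p...") and
# pro-defendant ("d...") factors; the current fact situation (CFS) is a set
# of factors. Citing points at shared factors; distinguishing points at
# differences that favour the precedent's winner there but not here.
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    winner: str          # "p" or "d"
    factors: frozenset   # e.g. frozenset({"p1", "d2"})

def cite(precedent: Case, cfs: set) -> set:
    """Similarities with the CFS: the basis for citing the precedent."""
    return set(precedent.factors & cfs)

def distinguish(precedent: Case, cfs: set) -> set:
    """Winner-favouring factors only in the precedent, plus factors
    favouring the other side only in the CFS."""
    other = "d" if precedent.winner == "p" else "p"
    return ({f for f in precedent.factors - cfs if f.startswith(precedent.winner)}
            | {f for f in cfs - precedent.factors if f.startswith(other)})

c1 = Case("C1", "p", frozenset({"p1", "p2", "p3", "d1", "d2"}))
c2 = Case("C2", "d", frozenset({"p1", "p4", "d2"}))
cfs = {"p2", "p3", "p4", "d2", "d3"}

print("cite C1:", cite(c1, cfs))                # contains p2, p3, d2
print("distinguish C1:", distinguish(c1, cfs))  # contains p1, d3
print("distinguish C2:", distinguish(c2, cfs))  # contains p2, p3
```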

  36. HYPO Example [diagram] C1 (p): p1 p2 p3 d1 d2 • C2 (d): p1 p4 d2 • CFS: p2 p3 p4 d2 d3

  37. HYPO Example [same diagram, with two ‘Distinguish!’ annotations]

  38. HYPO Example [same diagram]

  39. HYPO Example [same diagram, with a ‘Distinguish!’ annotation]

  40. K.D. Ashley. Modeling Legal Argument: Reasoning with Cases and Hypotheticals. MIT Press, Cambridge, MA, 1990. Plaintiff: I should win because as in Bryce, which was won by plaintiff, I took security measures and defendant knew the info was confidential. Defendant: I should win because as in Robinson, which was won by defendant, plaintiff made disclosures during the negotiations. Defendant: Unlike in the present case, in Bryce defendant had agreed not to disclose and the products were identical. Defendant: Unlike Bryce, in the present case the info is reverse engineerable. Plaintiff: Unlike in Robinson, I took security measures, and defendant knew the info was confidential.

  41. Ownership of wild animals Pierson v Post: Plaintiff is hunting a fox on open land. Defendant kills the fox. Keeble v Hickeringill: Plaintiff is a professional hunter. Lures ducks to his pond. Defendant scares the ducks away. Young v Hitchens: Plaintiff is a professional fisherman. Spreads his nets. Defendant gets inside the nets and catches the fish. Slide by Trevor Bench-Capon

  42. Arguing Young Defendant (analogy): I should win because as in Pierson, which was won by defendant, plaintiff was not hunting on his own land and had not caught the animal. Plaintiff (counterexample): I should win since my case is similar to Keeble, which was won by plaintiff because he was pursuing his livelihood even though he had not caught the animal. Plaintiff (distinguishing Pierson): Unlike the plaintiff in Pierson, I was pursuing my livelihood. Defendant (distinguishing Keeble): Unlike defendant in Keeble, I am also pursuing my livelihood, and unlike in Keeble plaintiff is not hunting on his own land. K.D. Ashley. Modeling Legal Argument: Reasoning with Cases and Hypotheticals. MIT Press, Cambridge, MA, 1990.

  43. H. Prakken & G. Sartor, Modelling reasoning with preferences in a formal dialogue game. Artificial Intelligence and Law 6 (1998): 231-287. K.D. Atkinson, T.J.M. Bench-Capon, and P. McBurney. Arguing about cases as practical reasoning. In Proceedings of the Tenth International Conference on Artificial Intelligence and Law, 35-44, New York, 2005. ACM Press. Factor preferences from cases • Pierson – won by defendant {p1} < {d1,d2,d3} • Defendant not pursuing livelihood (p1) • Plaintiff not pursuing livelihood (d1) • Plaintiff not on own land (d2) • Plaintiff had not caught animal (d3) • Keeble – won by plaintiff {p1,p2,p3} > {d3} • Defendant not pursuing livelihood (p1) • Plaintiff pursuing livelihood (p2) • Plaintiff on own land (p3) • Plaintiff had not caught animal (d3) • Young – (won by defendant) {p2} ? {d2,d3,d4} • Defendant pursuing livelihood (d4) • Plaintiff pursuing livelihood (p2) • Plaintiff not on own land (d2) • Plaintiff had not caught animal (d3)
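
A small sketch of how such preferences can be read off mechanically (in the spirit of the cited work): for a case won by one side, the set of factors favouring the losing side is taken to be outweighed by the set favouring the winner. Factor labels follow the slide; the code itself is my own illustration.

```python
# Extracting factor-set preferences from precedents: the losing side's
# factors are outweighed by the winning side's factors in that case.
cases = {
    "Pierson": ("d", {"p1"}, {"d1", "d2", "d3"}),       # winner, p-factors, d-factors
    "Keeble":  ("p", {"p1", "p2", "p3"}, {"d3"}),
}

def preference(winner, p_factors, d_factors):
    winning, losing = (p_factors, d_factors) if winner == "p" else (d_factors, p_factors)
    return losing, winning  # losing set < winning set

for name, (winner, p_f, d_f) in cases.items():
    weaker, stronger = preference(winner, p_f, d_f)
    print(f"{name}: {sorted(weaker)} < {sorted(stronger)}")
# Pierson: ['p1'] < ['d1', 'd2', 'd3']
# Keeble:  ['d3'] < ['p1', 'p2', 'p3']
```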

  44. Trevor Bench-Capon & Katie Atkinson 2009(?) Values promoted in the wild animals cases • Pierson – won by defendant • Defendant not pursuing livelihood (p1) • Plaintiff not pursuing livelihood (d1) • Plaintiff not on own land (d2) • Plaintiff had not caught animal (d3) promotes certainty • Keeble – won by plaintiff • Defendant not pursuing livelihood (p1) • Plaintiff pursuing livelihood (p2) • Plaintiff on own land (p3) • Plaintiff had not caught animal (d3) promotes certainty • Young – (won by defendant) • Defendant pursuing livelihood (d4) • Plaintiff pursuing livelihood (p2) • Plaintiff not on own land (d2) • Plaintiff had not caught animal (d3) promotes certainty

  45. Values promoted in the wild animals cases • Pierson – won by defendant • Defendant not pursuing livelihood (p1) • Plaintiff not pursuing livelihood (d1) • Plaintiff not on own land (d2) • Plaintiff had not caught animal (d3) promotes certainty • Keeble – won by plaintiff • Defendant not pursuing livelihood (p1) • Plaintiff pursuing livelihood (p2) promotes economy • Plaintiff on own land (p3) • Plaintiff had not caught animal (d3) promotes certainty • Young – (won by defendant) • Defendant pursuing livelihood (d4) promotes economy • Plaintiff pursuing livelihood (p2) promotes economy • Plaintiff not on own land (d2) • Plaintiff had not caught animal (d3) promotes certainty

  46. Values promoted in the wild animals cases • Pierson – won by defendant • Defendant not pursuing livelihood (p1) • Plaintiff not pursuing livelihood (d1) • Plaintiff not on own land (d2) • Plaintiff had not caught animal (d3) promotes certainty • Keeble – won by plaintiff • Defendant not pursuing livelihood (p1) • Plaintiff pursuing livelihood (p2) promotes economy • Plaintiff on own land (p3) promotes property • Plaintiff had not caught animal (d3) promotes certainty • Young – (won by defendant) • Defendant pursuing livelihood (d4) promotes economy • Plaintiff pursuing livelihood (p2) promotes economy • Plaintiff not on own land (d2) • Plaintiff had not caught animal (d3) promotes certainty

  47. Values promoted in the wild animals cases • Pierson – won by defendant {} < {certainty} • Defendant not pursuing livelihood (p1) • Plaintiff not pursuing livelihood (d1) • Plaintiff not on own land (d2) • Plaintiff had not caught animal (d3) promotes certainty • Keeble – won by plaintiff {economy, property} > {certainty} • Defendant not pursuing livelihood (p1) • Plaintiff pursuing livelihood (p2) promotes economy • Plaintiff on own land (p3) promotes property • Plaintiff had not caught animal (d3) promotes certainty • Young – (won by defendant) {economy} < {economy, certainty} • Defendant pursuing livelihood (d4) promotes economy • Plaintiff pursuing livelihood (p2) promotes economy • Plaintiff not on own land (d2) • Plaintiff had not caught animal (d3) promotes certainty
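
The same extraction can be lifted from factors to the values they promote, as in the value-based reading on these slides; the factor-to-value mapping follows the slides, while the code itself is only an illustration.

```python
# Lift factor-set preferences to value-set preferences: map each factor to
# the value it promotes (mapping as on the slides), then read preferences
# off the case outcomes as before.
FACTOR_VALUE = {"p2": "economy", "p3": "property", "d3": "certainty", "d4": "economy"}

cases = {
    "Pierson": ("d", {"p1"}, {"d1", "d2", "d3"}),
    "Keeble":  ("p", {"p1", "p2", "p3"}, {"d3"}),
    "Young":   ("d", {"p2"}, {"d2", "d3", "d4"}),
}

def values(factors):
    return {FACTOR_VALUE[f] for f in factors if f in FACTOR_VALUE}

for name, (winner, p_f, d_f) in cases.items():
    winning, losing = (p_f, d_f) if winner == "p" else (d_f, p_f)
    print(f"{name}: {sorted(values(losing))} < {sorted(values(winning))}")
# Pierson: [] < ['certainty']
# Keeble:  ['certainty'] < ['economy', 'property']
# Young:   ['economy'] < ['certainty', 'economy']
```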

  48. Other work • Multi-valued factors (Rissland & Ashley) • Relations between factors (Vincent Aleven) • Precedential constraint (John Horty) • Learning value preferences from cases (Matthias Grabmair) • … John Horty Matthias Grabmair

  49. Legal argumentation systems: the KA bottleneck • Realistic models of legal reasoning • argumentation with precedents, balancing reasons or values, … • But hardly applied in practice: • Required knowledge is hard to manually acquire and code • Is NLP the solution? • Learn everything from case law and law journals

  50. 3a. Data-centric approaches – Extracting information from texts
