
Henry Prakken July 9, 2019



Presentation Transcript


  1. AI & Law Summerschool 2019. Session 2.1.1: Models of legal argument: Logical and integrated approaches. Henry Prakken, July 9, 2019

  2. History of AI: symbolic vs. data-centric • Symbolic/cognitive AI: programming explicit models of aspects of human cognition • Advantage: transparent • Disadvantage: ‘symbolic’ input needed • Statistical, data-centric AI: automatic learning of patterns from data • Advantage: no ‘symbolic’ input needed • Disadvantage: not transparent

  3. 1. Abstract argumentation frameworks

  4. Attack on conclusion [argument diagram; labels: Employer is liable; Employer is not liable; Employee was careless; Rule 2; Employer breached duty of care; Employee had work-related injury; Rule 1; Employee did not secure stairs; No safety instructions]

  5. Attack on premise [argument diagram; labels: Employer is liable; Employer is not liable; Employee was careless; Rule 2; Employer breached duty of care; Employee had work-related injury; Rule 1; Employee did not secure stairs; No safety instructions; Employee had no work-related injury; Injury caused by poor physical condition]

  6. Attack on inference [argument diagram; labels: Employer is liable; Employer is not liable; Employee was careless; Rule 2; Employer breached duty of care; Employee had work-related injury; Rule 1; Employee did not secure stairs; No safety instructions; Employee had no work-related injury; Colleague is not credible; Colleague says so; Injury caused by poor physical condition; C is friend of claimant]

  7. Indirect defence [argument diagram; labels: Employer is liable; Employer is not liable; Employee was careless; Rule 2; Employer breached duty of care; Employee had work-related injury; Rule 1; Employee did not secure stairs; Employee secured the stairs; No safety instructions; Employee had no work-related injury; Colleague is not credible; Colleague says so; Camera evidence; Injury caused by poor physical condition; C is friend of claimant]

  8. [Argument diagram: the same graph as slide 7, without the ‘Indirect defence’ annotation]

  9. 1. An argument is In iff all arguments defeating it are Out. 2. An argument is Out iff it is defeated by an argument that is In. Grounded semantics minimises the In labelling; preferred semantics maximises the In labelling (Dung 1995). [Example framework with arguments A, B, C, D, E] P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n-person games. Artificial Intelligence, 77:321-357, 1995.

  10. Justification status of arguments: A is justified if A is In in all labellings; A is overruled if A is Out in all labellings; A is defensible otherwise. Cf. Gardner (1987): easy v. hard questions

  11. 1. An argument is In iff all arguments defeating it are Out. 2. An argument is Out iff it is defeated by an argument that is In. Grounded semantics minimises the In labelling; preferred semantics maximises the In labelling (Dung 1995). [Example framework with arguments A, B, C, D, E] P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n-person games. Artificial Intelligence, 77:321-357, 1995.
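The labelling conditions above can be brute-forced for small frameworks. The sketch below is illustrative only: the slide's actual defeat graph for A-E is shown in a figure, so the defeat relation used here is an assumption. It enumerates all legal In/Out/Undec labellings, picks the grounded (minimal In) and preferred (maximal In) ones, and classifies arguments as justified, overruled, or defensible.

```python
from itertools import product

# Assumed defeat graph (the slide's real A-E graph is only in the figure):
args = ['A', 'B', 'C', 'D', 'E']
defeats = {('A', 'B'), ('B', 'A'), ('B', 'C'), ('D', 'E')}

def attackers(x):
    return [a for (a, b) in defeats if b == x]

def legal(lab):
    """A labelling is legal iff: In iff all defeaters are Out;
    Out iff some defeater is In; Undec otherwise."""
    for x in args:
        atts = [lab[a] for a in attackers(x)]
        all_out = all(l == 'Out' for l in atts)  # vacuously true if unattacked
        some_in = any(l == 'In' for l in atts)
        if lab[x] == 'In' and not all_out: return False
        if lab[x] == 'Out' and not some_in: return False
        if lab[x] == 'Undec' and (all_out or some_in): return False
    return True

labellings = [dict(zip(args, ls))
              for ls in product(['In', 'Out', 'Undec'], repeat=len(args))
              if legal(dict(zip(args, ls)))]

in_sets = [frozenset(x for x in args if lab[x] == 'In') for lab in labellings]
grounded = min(in_sets, key=len)                  # minimises In
preferred = [s for s in in_sets
             if not any(s < t for t in in_sets)]  # maximises In

def status(x):
    if all(lab[x] == 'In' for lab in labellings):  return 'justified'
    if all(lab[x] == 'Out' for lab in labellings): return 'overruled'
    return 'defensible'
```

Here the unattacked D is justified, E (defeated by D) is overruled, and the mutually attacking A and B (and C, which depends on them) are defensible.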

  12. [The argument graph of slide 8, with its arguments labelled A, B, C, D, E for evaluation as an abstract framework; labels: Employer is liable; Employer is not liable; Employee was careless; Rule 2; Employer breached duty of care; Employee had work-related injury; Rule 1; Employee did not secure stairs; Employee secured the stairs; No safety instructions; Employee had no work-related injury; Colleague is not credible; Colleague says so; Camera evidence; Injury caused by poor physical condition; C is friend of claimant]

  13. [The argument diagram annotated with its rules:] • IF employer breached duty of care & employee had work-related injury THEN employer is liable, UNLESS employee was careless • IF NO safety instructions THEN employer breached duty of care • IF employee did not secure stairs THEN employee was careless • IF employee did not secure a normal staircase THEN employee was NOT careless • IF an employee’s injury is caused by poor physical condition THEN employee had NO work-related injury • IF witness W says P THEN P, UNLESS W is not credible • IF witness W is friend of claimant THEN W is not credible [Further labels: Employer is liable; Employer is not liable; Employee was careless; Rule 2; Employer breached duty of care; Employee had work-related injury; Rule 1; Employee did not secure stairs; Employee was not careless; No safety instructions; Employee had no work-related injury; Colleague is not credible; Colleague says so; It was a normal staircase; Injury caused by poor physical condition; C is friend of claimant]

  14. 2. Structured argumentation frameworks

  15. Structured argumentation: ASPIC+ (John Pollock; Sanjay Modgil & me) • Chain inferences into arguments with • Deductive inference rules • Premises guarantee conclusion • Defeasible inference rules • Premises create presumption for conclusion • Attack arguments with counterarguments • Undermining, rebutting, undercutting • Use preferences to see which attacks succeed as defeats • Apply Dung (1995) to arguments + defeat S. Modgil & H. Prakken, A general account of argumentation with preferences. Artificial Intelligence 195 (2013): 361-397. A.J. Hunter (ed.), Tutorials on Structured Argumentation. Special issue of Argument and Computation 5 (2014).
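A minimal sketch of the first two ASPIC+ steps, chaining defeasible rules into arguments and detecting rebuttals, using the employer-liability rules from the earlier slides. All identifiers and the '-' negation convention are my own encoding; the sketch deliberately omits premises vs. rules distinctions, undercutting, and the preference-based defeat step.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    premises: frozenset
    conclusion: str

def neg(p):
    """Contradictory-pair convention: 'x' vs '-x'."""
    return p[1:] if p.startswith('-') else '-' + p

# Defeasible rules adapted from the employer-liability example:
rules = [
    Rule(frozenset({'no_safety_instructions'}), 'breached_duty'),
    Rule(frozenset({'breached_duty', 'work_related_injury'}), 'liable'),
    Rule(frozenset({'did_not_secure_stairs'}), 'careless'),
    Rule(frozenset({'stairs_were_normal'}), '-careless'),
]
premises = {'no_safety_instructions', 'work_related_injury',
            'did_not_secure_stairs', 'stairs_were_normal'}

# Forward-chain: every derivable statement is the conclusion of some argument.
derived = set(premises)
changed = True
while changed:
    changed = False
    for r in rules:
        if r.premises <= derived and r.conclusion not in derived:
            derived.add(r.conclusion)
            changed = True

# Rebutting attacks: arguments with contradictory conclusions.
rebuttals = {(p, neg(p)) for p in derived if neg(p) in derived}
```

Here both 'careless' and '-careless' are derivable, so the two arguments rebut each other; which attack succeeds as a defeat would be settled by preferences in full ASPIC+.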

  16. Application 1: online intake of crime reports (This and next slides by Daphne Odekerken) Floris Bex M. Schraagen, B. Testerink, D. Odekerken & F. Bex, Argumentation-driven information extraction for online crime reports. In Proceedings of International Workshop on Legal Data Analytics and Mining (LeDAM 2018).

  17. The problem • Improve the intake of criminal reports on online trade fraud • fake webshops • malicious second-hand dealers on trading platforms • 40,000 reports are filed each year • Legal background: article 326 of Dutch Criminal Code: “misleading through false contact details, deceptive tricks or an accumulation of lies”

  18. Knowledge model • ASPIC+ argumentation theory consisting of: • 26 observable facts • 46 inference rules • Based on legislation, case law and expert knowledge

  19. Information extraction • Extract observations from free text Fictitious example report I would like to report fraud. I recently saw a bicycle for sale at Marktplaats and contacted advertiser John Doe. Because he said that he lived in North Groningen, we agreed that he would send the bicycle to my home address in Maastricht. I paid him in good faith but still have not received the bike. Mr. Doe does not respond to my e-mails any more. I did some research and saw on his Facebook profile that he lives in Roermond. False location Paid Not delivered

  20. Argumentation graph: simplified example [nodes: False website; Not delivered; Waited; False location; Not sent; Product paid; Deception; Delivery failure; Presumably fraud]

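The simplified graph can be read as a small rule base that chains observations to "presumably fraud". The slide shows only node names, so the rule dependencies below are guesses made for illustration; the observations are the ones extracted from the fictitious report on the earlier slide.

```python
# Guessed rule structure for the simplified fraud graph (illustrative only):
# each conclusion has one or more alternative premise sets.
rules = {
    'deception': [{'false_website'}, {'false_location'}],
    'not_sent': [{'not_delivered', 'waited'}],
    'delivery_failure': [{'not_sent', 'product_paid'}],
    'presumably_fraud': [{'deception', 'delivery_failure'}],
}

def forward_chain(observations):
    """Derive everything reachable from the observed facts."""
    facts = set(observations)
    changed = True
    while changed:
        changed = False
        for concl, bodies in rules.items():
            if concl not in facts and any(b <= facts for b in bodies):
                facts.add(concl)
                changed = True
    return facts

# Observations from the fictitious bicycle report:
facts = forward_chain({'false_location', 'product_paid',
                       'not_delivered', 'waited'})
```

With these observations the chain reaches 'presumably_fraud'; in the real system, counterarguments (e.g. the seller responding after all) could still defeat intermediate conclusions.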

  24. Application 2: Noise-induced hearing loss • Weightmans act for employers and their insurance companies and advise them when they face claims for Noise Induced Hearing Loss (NIHL) • Legal issue: is the hearing loss attributable to negligence on the part of the employer(s), or former employer(s), during the period of the claimant's employment? • Expertise-based knowledge model in ADFs (similar to ASPIC+) L. Al-Abdulkarim, K. Atkinson, T. Bench-Capon, S. Whittle, R. Williams & C. Wolfenden (2019): Noise induced hearing loss: Building an application using the ANGELIC methodology. Argument and Computation, 10(1): 5-22. 

  25. Argument(ation) schemes: general form (Douglas Walton): Premise 1, …, Premise n. Therefore (presumably), conclusion. But also: critical questions

  26. Argument schemes in ASPIC+ • Argument schemes are defeasible inference rules • Critical questions are pointers to counterarguments • Some point to undermining attacks • Some point to rebutting attacks • Some point to undercutting attacks
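The mapping from critical questions to attack types can be made concrete with the witness-testimony scheme used earlier in the deck ("IF witness W says P THEN P, UNLESS W is not credible"). The question wordings below are my own paraphrases; the three attack kinds are the ones named on the slide.

```python
# A scheme as a defeasible rule plus critical questions, each tagged with
# the kind of counterargument it points to (question wordings assumed).
witness_scheme = {
    'premises': ['Witness W says P'],
    'conclusion': 'P (presumably)',
    'critical_questions': [
        ('Did W really say P?', 'undermining'),          # attacks a premise
        ('Is there evidence that not-P?', 'rebutting'),  # attacks the conclusion
        ('Is W credible?', 'undercutting'),              # attacks the inference
    ],
}

def attack_types(scheme):
    """Map each critical question to the attack type it points to."""
    return {cq: kind for cq, kind in scheme['critical_questions']}
```

So "C is friend of claimant" from the running example answers the credibility question negatively and yields an undercutter, not a rebuttal.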

  27. 3. Applications to case-based reasoning in factor-based domains

  28. Factor-based reasoning • In legal classification and interpretation there are often no clear rules • Often there are only factors: tentative reasons pro or con a conclusion • Often to different degrees • Factors are weighed in cases, which become precedents • But how do judges weigh factors? • And what if a new case does not perfectly match a precedent?

  29. Running example factors: misuse of trade secrets • Some factors pro misuse of trade secrets: • F2 Bribe-Employee • F4 Agreed-Not-To-Disclose • F6 Security-Measures • F15 Unique-Product • F18 Identical-Products • F21 Knew-Info-Confidential • Some factors con misuse of trade secrets: • F1 Disclosure-In-Negotiations • F16 Info-Reverse-Engineerable • F23 Waiver-of-Confidentiality • F25 Info-Reverse-Engineered HYPO Ashley & Rissland 1985-1990 CATO Aleven & Ashley 1991-1997

  30. HYPO Ashley & Rissland 1987-1990 • Representation language: • Cases: decision (p or d) + p-factors and d-factors • Current Fact Situation: factors • Arguments: • Citing (for its decision) a case on its similarities with CFS • Distinguishing a case on its differences with CFS • Taking into account which side is favoured by a factor

  31. Citing precedent • Mason v Jack Daniels Distillery (Mason) – undecided. • F1 Disclosure-In-Negotiations (d) • F6 Security-Measures (p) • F15 Unique-Product (p) • F16 Info-Reverse-Engineerable (d) • F21 Knew-Info-Confidential (p) • Bryce and Associates v Gladstone (Bryce) – plaintiff • F1 Disclosure-In-Negotiations (d) • F4 Agreed-Not-To-Disclose (p) • F6 Security-Measures (p) • F18 Identical-Products (p) • F21 Knew-Info-Confidential (p) Plaintiff cites Bryce because of F6,F21

  32. Distinguishing precedent • Mason v Jack Daniels Distillery (Mason) – undecided. • F1 Disclosure-In-Negotiations (d) • F6 Security-Measures (p) • F15 Unique-Product (p) • F16 Info-Reverse-Engineerable (d) • F21 Knew-Info-Confidential (p) • Bryce and Associates v Gladstone (Bryce) – plaintiff • F1 Disclosure-In-Negotiations (d) • F4 Agreed-Not-To-Disclose (p) • F6 Security-Measures (p) • F18 Identical-Products (p) • F21 Knew-Info-Confidential (p) Plaintiff cites Bryce because of F6,F21 Defendant distinguishes Bryce because of F4,F18 and F16

  33. Counterexample • Mason v Jack Daniels Distillery – undecided. • F1 Disclosure-In-Negotiations (d) • F6 Security-Measures (p) • F15 Unique-Product (p) • F16 Info-Reverse-Engineerable (d) • F21 Knew-Info-Confidential (p) • Robinson v State of New Jersey – defendant. • F1 Disclosure-In-Negotiations (d) • F10 Secrets-Disclosed-Outsiders (d) • F18 Identical-Products (p) • F19 No-Security Measures (d) • F26 Deception (p) Defendant cites Robinson because of F1

  34. Distinguishing counterexample • Mason v Jack Daniels Distillery – undecided. • F1 Disclosure-In-Negotiations (d) • F6 Security-Measures (p) • F15 Unique-Product (p) • F16 Info-Reverse-Engineerable (d) • F21 Knew-Info-Confidential (p) • Robinson v State of New Jersey – defendant. • F1 Disclosure-In-Negotiations (d) • F10 Secrets-Disclosed-Outsiders (d) • F18 Identical-Products (p) • F19 No-Security Measures (d) • F26 Deception (p) Defendant cites Robinson because of F1 Plaintiff distinguishes Robinson because of F6,F15,F21 and F10,F19
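The citing and distinguishing moves of slides 31-34 are simple set operations over factor sets. This sketch (my own encoding; cases as dicts of pro-plaintiff 'p' and pro-defendant 'd' factors) reproduces the moves in the Mason example.

```python
# HYPO-style citing and distinguishing with two-valued factors.
mason = {'p': {'F6', 'F15', 'F21'}, 'd': {'F1', 'F16'}}        # current, undecided
bryce = {'p': {'F4', 'F6', 'F18', 'F21'}, 'd': {'F1'}}         # decided for plaintiff
robinson = {'p': {'F18', 'F26'}, 'd': {'F1', 'F10', 'F19'}}    # decided for defendant

def cite(cfs, precedent, side):
    """Shared factors favouring `side`: the similarities grounding a citation."""
    return cfs[side] & precedent[side]

def distinguish(cfs, precedent, winner):
    """Differences the loser can point to: winner-favouring factors only in
    the precedent, and loser-favouring factors only in the current case."""
    loser = 'd' if winner == 'p' else 'p'
    return precedent[winner] - cfs[winner], cfs[loser] - precedent[loser]
```

Running this reproduces the slides: plaintiff cites Bryce on F6, F21; defendant distinguishes Bryce on F4, F18 (and F16); defendant cites Robinson on F1; plaintiff distinguishes Robinson on F6, F15, F21 (and F10, F19).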

  35. K.D. Ashley. Modeling Legal Argument: Reasoning with Cases and Hypotheticals. MIT Press, Cambridge, MA, 1990. Plaintiff: I should win because as in Bryce, which was won by plaintiff, I took security measures and defendant knew the info was confidential. Defendant: I should win because as in Robinson, which was won by defendant, plaintiff made disclosures during the negotiations. Defendant: Unlike in the present case, in Bryce defendant had agreed not to disclose and the products were identical. Defendant: Unlike Bryce, in the present case the info is reverse engineerable. Plaintiff: Unlike in Robinson, I took security measures, and defendant knew the info was confidential.

  36. Basic scheme for reasoning with two-valued factors AS2: The Pro-factors of current are P. The Con-factors of current are C. P are preferred over C. Therefore, current should be decided Pro. H. Prakken & G. Sartor, Modelling reasoning with preferences in a formal dialogue game. Artificial Intelligence and Law 6 (1998): 231-287. H. Prakken, A. Wyner, T. Bench-Capon & K. Atkinson, A formalisation of argumentation schemes for legal case-based reasoning in ASPIC+. Journal of Logic and Computation 25 (2015): 1141-1166.

  37. Preferences from precedents (1) AS2: The Pro-factors of precedent are P. The Con-factors of precedent are C. Precedent was decided Pro. Therefore, P are preferred over C. Limitation 1: the current case will often not exactly match a precedent

  38. A fortiori reasoning with two-valued factors AS3: P are preferred over C. Therefore, P+ are preferred over C-. Limitation 2: not all differences with a precedent will make a current case stronger. P+ = P plus zero or more additional pro-factors; C- = C minus zero or more con-factors
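AS3 amounts to a subset check: a preference P over C extracted from a precedent (via AS2) extends a fortiori to any case whose pro-factors include P and whose con-factors are contained in C. A sketch under my own encoding, with the Bryce preference from the running example:

```python
def a_fortiori(pref, pro, con):
    """pref = (P, C): a precedent showed P preferred over C (AS2).
    Returns True if (pro, con) is covered a fortiori (AS3):
    pro is P plus extra pro-factors, con is C minus some con-factors."""
    P, C = pref
    return pro >= P and con <= C

# Preference extracted from Bryce, decided for plaintiff:
bryce_pref = (frozenset({'F4', 'F6', 'F18', 'F21'}), frozenset({'F1'}))
```

Mason is not covered: its pro-factors miss F4 and F18 and it adds the con-factor F16, which is exactly why defendant can distinguish Bryce (limitation 1 and 2 on the slides).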

  39. Vincent Aleven 1991-1997: (snapshot of) CATO Factor Hierarchy [diagram; labels: Misuse of Trade Secret (p); F120: Info legitimately obtained elsewhere (d); F101: Info Trade Secret (p); F104: Info valuable (p); F102: Efforts to maintain secrecy (p); F4: Agreed not to disclose (p); F1: Disclosures in negotiations (d); F6: Security measures (p); F15: Unique product (p)]

  40. Distinguishing [the CATO factor hierarchy of slide 39] V. Aleven. Using background knowledge in case-based legal reasoning: a computational model and an intelligent learning environment. Artificial Intelligence 150:183-237, 2003.

  41. Emphasising distinctions [the CATO factor hierarchy of slide 39]

  42. Downplaying distinctions [the CATO factor hierarchy of slide 39]

  43. Exploiting factor hierarchies (1): current misses pro factor AS4: P1 are preferred over C. P2 substitutes P1. Therefore, P2 are preferred over C. • Def1: Factor set P2 substitutes factor set P1 iff for all factors p1 in P1 that are not in P2 there exists a factor p2 in P2 that substitutes p1 • Def2: Factor p2 substitutes factor p1 iff p1 instantiates abstract factor p3 and p2 instantiates abstract factor p3
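Def1 and Def2 translate directly into code. The 'instantiates' mapping below covers just the fragment of the CATO hierarchy shown on the earlier slides (F4 and F6 both instantiate F102, F15 instantiates F104); the function names are mine.

```python
# Fragment of the CATO factor hierarchy from the slides:
instantiates = {'F4': 'F102', 'F6': 'F102', 'F15': 'F104'}

def factor_substitutes(p2, p1):
    """Def2: p2 substitutes p1 iff both instantiate the same abstract factor."""
    return (p1 in instantiates and p2 in instantiates
            and instantiates[p1] == instantiates[p2])

def set_substitutes(P2, P1):
    """Def1: P2 substitutes P1 iff every p1 in P1 - P2 has a substitute in P2."""
    return all(any(factor_substitutes(p2, p1) for p2 in P2)
               for p1 in P1 - P2)
```

This licenses the step on the next slide: {F6,F21} substitutes {F4,F21} because F6 substitutes F4 via F102, so the preference {F4,F21} > {F1} from the precedent carries over to {F6,F21} > {F1}.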

  44. [Derivation with AS2 and AS4:] The Pro-factors of Precedent are {F4,F21}; the Con-factors of Precedent are {F1}; Precedent was decided Pro; so {F4,F21} > {F1}. F4 instantiates F102 and F6 instantiates F102, so F6 substitutes F4; so {F6,F21} > {F1}. The Pro-factors of Current are {F6,F21}; the Con-factors of Current are {F1}; so Current should be decided Pro.

  45. [The derivation of slide 44 with the precedent premises removed:] The Pro-factors of Current are {F6,F21}; the Con-factors of Current are {F1}; {F6,F21} > {F1}; {F4,F21} > {F1}; F6 substitutes F4; F4 instantiates F102; F6 instantiates F102. What if we cannot derive the required preferences? Then argue for them!

  46. John Horty’s model of precedential constraint J. Horty, Rules and reasons in the theory of precedent. Legal Theory 17 (2011): 1-33. J. Horty & T.J.M. Bench-Capon, A factor-based definition of precedential constraint. Artificial Intelligence and Law 20 (2012): 181-214.

  47. Refining and reshaping rules through precedents • Case 1: Daughter has finished dinner, may watch TV • Rule: If you finished dinner, you may watch TV • Case 2: Son has finished dinner, has not finished homework, may not watch TV • New rule: If you finished dinner, you may watch TV, unless you did not finish your homework • (But father could have allowed son to watch TV: decision allowed but not forced) • Case 3: Daughter has finished dinner, has not finished homework, may not watch TV • (Father could not have allowed daughter to watch TV: decision forced)

  48. Precedential constraint:consistency of preferences • A preference relation < on factor sets is consistent if and only if there are no factor sets X and Y such that both X < Y and Y < X.
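Both halves of Horty's model can be sketched with the same set machinery: consistency of the extracted preference relation, and the a fortiori test for when a precedent forces a decision. The encoding (cases as 'pro'/'con' factor-set dicts) is mine; the test data is the dinner/TV example from the previous slide, with 'pro' read as the reasons for allowing TV.

```python
def other(side):
    return 'con' if side == 'pro' else 'pro'

def consistent(prefs):
    """prefs: pairs (X, Y) of frozensets, meaning X is preferred over Y.
    Consistent iff no X and Y with both X < Y and Y < X."""
    return not any((y, x) in prefs for (x, y) in prefs)

def forces(precedent, case):
    """precedent = (case_dict, winning_side). The precedent forces the same
    outcome when `case` is at least as strong for the winner (a fortiori):
    at least the winner's factors, at most the loser's factors."""
    prec, winner = precedent
    return (case[winner] >= prec[winner]
            and case[other(winner)] <= prec[other(winner)])

# The dinner/TV example:
case1 = {'pro': frozenset({'finished_dinner'}), 'con': frozenset()}
case2 = {'pro': frozenset({'finished_dinner'}),
         'con': frozenset({'unfinished_homework'})}
case3 = dict(case2)  # the daughter's third case has the same factors
```

Case 1 does not force a pro outcome in case 2 (the new con-factor blocks the a fortiori step), so deciding against the son was allowed; but case 2, once decided, forces the same outcome in the factor-identical case 3.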

  49. Preferences from precedents AS2: The Pro-factors of precedent are P. The Con-factors of precedent are C. Precedent was decided Pro. Therefore, P are preferred over C.

  50. A fortiori reasoning with two-valued factors AS3: P are preferred over C. Therefore, P+ are preferred over C-. P+ = P plus zero or more additional pro-factors; C- = C minus zero or more con-factors
