
Fighting Knowledge Acquisition Bottleneck with Argument Based Machine Learning

ECAI 2008. Fighting Knowledge Acquisition Bottleneck with Argument Based Machine Learning. Martin Mozina, Matej Guid, Jana Krivec, Aleksander Sadikov and Ivan Bratko. Faculty of Computer and Information Science, University of Ljubljana, Slovenia.


Presentation Transcript


  1. ECAI 2008 Fighting Knowledge Acquisition Bottleneck with Argument Based Machine Learning Martin Mozina, Matej Guid, Jana Krivec, Aleksander Sadikov and Ivan Bratko Faculty of Computer and Information Science, University of Ljubljana, Slovenia

  2. Motivation for Knowledge Acquisition with Argument Based Machine Learning
• Knowledge acquisition is a major bottleneck in building knowledge bases: domain experts find it hard to articulate their knowledge.
• Machine learning is a potential solution, but it has weaknesses: the induced models are often not comprehensible to domain experts (most learning is statistical, not symbolic), and learning may induce spurious concepts (e.g. through overfitting).
• A combination of a domain expert and machine learning would yield the best results: learn symbolic models and exploit the expert's knowledge during learning.

  3. Combining Machine Learning and Expert Knowledge
• The expert provides background knowledge for ML.
• The expert validates and revises the induced theory (a set of IF ... THEN ... rules).
• Iterative procedure: the expert and ML improve the model in turns. This combination is ABML.

  6. Definition of Argument Based Machine Learning
• Learning with background knowledge:
  INPUT: learning examples E, background knowledge BK
  OUTPUT: theory T, such that T and BK together explain every example ei from E
• Argument Based Machine Learning:
  INPUT: learning examples E, arguments ai given to (some) examples ei from E
  OUTPUT: theory T, such that T explains each ei using its arguments ai

  7. Argument Based Rule Learning
• Classic rule learning: IF HairColor = Blond THEN CreditApproved = YES
• Possible argument: Miss White received credit (CreditApproved = YES) because she has a regular job (RegularJob = YES).
• AB rule learning (possible rule): IF RegularJob = YES AND AccountStatus = Positive THEN CreditApproved = YES

  8. Formal Definition of Argumented Examples
• Argumented example (A, C, Arguments):
  A: attribute-value vector [e.g. RegularJob=YES, Rich=NO, ...]
  C: class value [e.g. CreditApproved=YES]
  Arguments: a set of arguments Arg1, ..., Argn for this example
• Argument Argi:
  positive argument: C because Reasons
  negative argument: C despite Reasons
  Reasons: a conjunction of reasons r1, ..., rm
A minimal data-structure sketch follows below.
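To make the definition concrete, here is a minimal Python sketch of an argumented example (our illustration, not the authors' implementation); the attribute and class names follow the credit example from slide 7, and Miss White's AccountStatus value is our assumption.

from dataclasses import dataclass, field

@dataclass
class Argument:
    positive: bool   # True: "C because Reasons"; False: "C despite Reasons"
    reasons: dict    # conjunction of reasons, e.g. {"RegularJob": "YES"}

@dataclass
class ArgumentedExample:
    attributes: dict          # A, e.g. {"RegularJob": "YES", "Rich": "NO"}
    class_value: str          # C, e.g. "YES" for CreditApproved=YES
    arguments: list = field(default_factory=list)

# Miss White's example from slide 7:
miss_white = ArgumentedExample(
    attributes={"RegularJob": "YES", "Rich": "NO", "AccountStatus": "Positive"},
    class_value="YES",
    arguments=[Argument(positive=True, reasons={"RegularJob": "YES"})],
)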

  9. ABCN2
• ABCN2 = an argument-based extension of the CN2 rule learning algorithm (Clark & Niblett, 1989).
• Extensions:
  • Argument based covering: a rule R covers an argumented example E iff
    • all conditions in R are true for E,
    • R is consistent with at least one positive argument of E, and
    • R is not consistent with any negative argument of E.
  • Rule evaluation: extreme value correction (Mozina et al., 2006).
  • Probabilistic covering (required for extreme value correction).
A sketch of the covering test follows below.
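A sketch of the argument based covering test, reusing the ArgumentedExample class above. "R is consistent with an argument" is read here as: the rule's conditions include all of the argument's reasons; this is a simplification of the full ABCN2 definition.

def conditions_hold(rule_conditions: dict, example: ArgumentedExample) -> bool:
    # All conditions of the rule are true for the example.
    return all(example.attributes.get(a) == v for a, v in rule_conditions.items())

def consistent(rule_conditions: dict, argument: Argument) -> bool:
    # The rule's conditions subsume the argument's reasons.
    return all(rule_conditions.get(a) == v for a, v in argument.reasons.items())

def ab_covers(rule_conditions: dict, example: ArgumentedExample) -> bool:
    if not conditions_hold(rule_conditions, example):
        return False                                   # condition 1
    positive = [a for a in example.arguments if a.positive]
    negative = [a for a in example.arguments if not a.positive]
    if positive and not any(consistent(rule_conditions, a) for a in positive):
        return False                                   # condition 2
    if any(consistent(rule_conditions, a) for a in negative):
        return False                                   # condition 3
    return True

With Miss White's argument, the rule IF RegularJob = YES AND AccountStatus = Positive THEN YES AB-covers her example, while IF HairColor = Blond THEN YES does not.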

  10. Interactions between Expert and ABML
1. Learn a hypothesis with ABML from the data set.
2. Find the most critical example (if none is found, stop the procedure).
3. The expert explains the example; the argument is added to the example.
4. Return to step 1.
But what if the expert's explanation is not good enough?

  11. Interactions between Expert and ABML
When the expert's explanation is not good enough, step 3 is refined into an inner loop (sketched in code below):
3a. The expert explains the critical example; the argument is added to the example.
3b. Discover counter examples (if there are none, return to step 1).
3c. The expert improves the argument with respect to the counter example.
3d. Return to step 3b.
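A high-level Python sketch of this interaction loop. The callables passed in are hypothetical stand-ins: learn wraps the ABML learner (e.g. ABCN2), ask_explain and ask_improve query the human expert, and most_critical and find_counter stand for the example-selection machinery.

def abml_loop(examples, learn, most_critical, find_counter, ask_explain, ask_improve):
    while True:
        hypothesis = learn(examples)                    # 1. learn a hypothesis with ABML
        critical = most_critical(hypothesis, examples)  # 2. find the most critical example
        if critical is None:
            return hypothesis                           #    none found: stop
        argument = ask_explain(critical)                # 3a. expert explains the example
        critical.arguments.append(argument)
        while True:
            counter = find_counter(examples, critical)  # 3b. discover counter examples
            if counter is None:
                break                                   #     none left: back to step 1
            argument = ask_improve(argument, counter)   # 3c. expert improves the argument
            critical.arguments[-1] = argument           # 3d. replace it and try again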

  12. Knowledge Acquisition of Chess Concepts Used in a Chess Tutoring Application. Case Study: Bad Bishop

  13. The Concept of the Bad Bishop
• Chess experts in general understand the concept of the bad bishop, but a precise formalisation of this concept is difficult.
• Traditional definition (John Watson, Secrets of Modern Chess Strategy, 1999): a bishop that is on the same colour of squares as its own pawns is bad, since
  • its mobility is restricted by its own pawns, and
  • it does not defend the squares in front of these pawns.
• Moreover, the centralisation of these pawns is the main factor in deciding whether the bishop is bad or not.

  14. Data Set
• Data set: 200 middlegame positions from real chess games.
• Chess experts' evaluation of the bishops: bad: 78 bishops; not bad: 122 bishops.
  [Pictured: GM Garry Kasparov, FM Matej Guid, WGM Jana Krivec]
• We randomly selected 100 positions for learning and 100 positions for testing.
• CRAFTY's positional feature values served as attribute values for learning.

  15. Standard Machine Learning Methods' Performance with CRAFTY's Features Only
• Machine learning methods' performance on the initial data set; the results were obtained on the test data set.
• The results obtained with CRAFTY's positional features only are too inaccurate for commenting purposes: additional information for describing bad bishops is necessary.

  16. First Critical Example
• The rules obtained by the ABML method ABCN2 failed to classify this example as "not bad".
• The following question was put to the experts: "Why is the black bishop not bad?"
• The experts used their domain knowledge: "The black bishop is not bad, since its mobility is not seriously restricted by the pawns of both players."

  17. Introducing New Attributes into the Domain and Adding Arguments to an Example
• The experts' explanation could not be described with the current domain attributes.
• A new attribute, IMPROVED_BISHOP_MOBILITY, was included into the domain: the number of squares accessible to the bishop, taking into account only its own and the opponent's pawn structure (see the sketch below).
• The argument "BISHOP = 'not bad' because IMPROVED_BISHOP_MOBILITY is high" was added to the example.
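One possible reading of this attribute, sketched with the python-chess library (our illustration; the authors computed their features inside CRAFTY): keep only the bishop and the two pawn structures on the board, then count the squares the bishop can reach.

import chess

def improved_bishop_mobility(board: chess.Board, bishop_sq: chess.Square) -> int:
    """Squares accessible to the bishop, considering only the pawn structure."""
    color = board.color_at(bishop_sq)
    pruned = chess.Board(None)                       # start from an empty board
    pruned.set_piece_at(bishop_sq, chess.Piece(chess.BISHOP, color))
    for sq, piece in board.piece_map().items():      # keep the pawns of both players
        if piece.piece_type == chess.PAWN:
            pruned.set_piece_at(sq, piece)
    # Count attacked squares not occupied by the bishop's own pawns (whether
    # squares holding capturable opponent pawns count is our assumption).
    return sum(1 for sq in pruned.attacks(bishop_sq) if pruned.color_at(sq) != color)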

  18. Counter Example
• The method failed to explain the critical example with the given argument, so a counter example was presented to the experts:
  critical example: "not bad", IMPROVED_BISHOP_MOBILITY is high;
  counter example: "bad", although IMPROVED_BISHOP_MOBILITY is high.
• "Why is the 'red' bishop bad, compared to the 'green' one?"
• Experts' explanation: "There are many pawns on the same colour of squares as the black bishop, and some of these pawns occupy the central squares."

  19. Improving Arguments with Counter Examples
• The attribute BAD_PAWNS was included into the domain. This attribute evaluates pawns that are on the colour of the square of the bishop ("bad" pawns in this sense).
• The argument given to the critical example was extended to "BISHOP = 'not bad' because IMPROVED_BISHOP_MOBILITY is high and BAD_PAWNS is low".
• With this argument the method could no longer find any counter examples.
• New rule: IF IMPROVED_BISHOP_MOBILITY ≥ 4 AND BAD_PAWNS ≤ 32 THEN BISHOP = "not bad"; class distribution [0, 39].
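The induced rule, transcribed directly as a Python predicate over the two new attributes:

def not_bad_bishop(improved_bishop_mobility: int, bad_pawns: int) -> bool:
    # IF IMPROVED_BISHOP_MOBILITY >= 4 AND BAD_PAWNS <= 32 THEN BISHOP = "not bad"
    return improved_bishop_mobility >= 4 and bad_pawns <= 32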

  20. Assessing "Bad" Pawns
• The experts designed a look-up table (shown on the slide) with predefined values for the pawns that are on the colour of the square of the bishop, in order to assign weights to such pawns.
• Example: BAD_PAWNS_AHEAD = 16 + 24 + 2 = 42.
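A sketch of such a weighted sum. The weight function here is a hypothetical placeholder (the experts' actual look-up table appears only in the slide's figure); it merely reproduces the idea that central "bad" pawns weigh more.

import chess

def pawn_weight(sq: chess.Square) -> int:
    # Hypothetical centralisation weight: pawns nearer the centre cost more.
    file_dist = min(chess.square_file(sq), 7 - chess.square_file(sq))
    rank_dist = min(chess.square_rank(sq), 7 - chess.square_rank(sq))
    return 8 * min(file_dist, rank_dist) + 2

def bad_pawns(board: chess.Board, bishop_sq: chess.Square) -> int:
    """Weighted sum of own pawns on the colour of the bishop's square."""
    color = board.color_at(bishop_sq)
    bishop_colour = (chess.square_file(bishop_sq) + chess.square_rank(bishop_sq)) % 2
    total = 0
    for sq, piece in board.piece_map().items():
        on_bishop_colour = (chess.square_file(sq) + chess.square_rank(sq)) % 2 == bishop_colour
        if piece.piece_type == chess.PAWN and piece.color == color and on_bishop_colour:
            total += pawn_weight(sq)
    return total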

  21. After the Final Iteration...
• The whole process consisted of 8 iterations.
• 7 arguments were attached to automatically selected critical examples.
• 5 new attributes were included into the domain.

  22. Classification Accuracy Through Iterations
• Results on the final data set.

  23. Classification Accuracy Through Iterations
• The accuracies of all methods improved by adding the new attributes.
• ABCN2 (which also used the arguments) outperformed all the others.
• Arguments suggested useful attributes AND led to even more accurate models.

  24. Advantages of ABML for Knowledge Acquisition
• Explaining single examples makes it easier for experts to articulate their knowledge, yielding more knowledge from the experts.
• Critical examples ensure that the expert provides only relevant knowledge, so the time of the experts' involvement is decreased.

  25. Advantages of ABML for Knowledge Acquisition
• Counter examples detect deficiencies in the expert's explanations, eliciting even more knowledge from the experts.
• Arguments constrain learning, so the hypotheses are consistent with expert knowledge, comprehensible to the expert, and more accurate.

  26. Conclusions
The ABML-based knowledge acquisition process provides:
• more knowledge from experts,
• decreased time of the experts' involvement,
• hypotheses comprehensible to the expert,
• more accurate hypotheses.
Argument Based Machine Learning enables better knowledge acquisition.
