Cognitive Modelling – An exemplar-based context model

Presentation Transcript


  1. Cognitive Modelling – An exemplar-based context model Benjamin Moloney Student No: 09131175

  2. Context theory When classifying an item in a category C, its degree of membership is equal to the sum of its similarity to all examples of that category, divided by its summed similarity to all examples of all categories (U is the set of all examples of all categories).
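Written out, with sim(x, y) denoting the pairwise similarity defined on the next slide (the symbols are my notation, not the slide's), the rule above is:

```latex
\mathrm{Membership}(x, C) \;=\; \frac{\sum_{y \in C} \mathrm{sim}(x, y)}{\sum_{y \in U} \mathrm{sim}(x, y)}
```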

  3. Context theory – Exemplar Model The exemplar model uses a multiplicative similarity computation: compare the item’s values on each dimension. If the values on a given dimension are the same, mark a 1 for that dimension. If the values on a given dimension are different, assign a parameter s for that dimension. Multiply the marked values for all dimensions to compute the overall similarity of the two items. The following example shows how the exemplar model could be applied to an experiment on how people classified artificial items (described on three dimensions) into 3 previously-learned artificial categories. The participants were given a number of training items from which they learned to identify diseases, and then had their knowledge tested by being asked to classify new items into 6 different categories (A, B, C, and their conjunctions).
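A minimal Python sketch of this similarity computation (the function name, the tuple representation of items, and the parameter vector s are my own choices, not from the slides):

```python
# A minimal sketch of the multiplicative similarity rule described above.
# The tuple representation of items and the parameter names are illustrative.

def similarity(item_a, item_b, s):
    """Multiply a 1 for each matching dimension and s[d] for each mismatching one."""
    score = 1.0
    for d, (a, b) in enumerate(zip(item_a, item_b)):
        score *= 1.0 if a == b else s[d]
    return score

# Two items that differ only on dimension 1 (with s1 = 0.2, s2 = 0.5, s3 = 0.3):
print(similarity(("C", "A", "B"), ("A", "A", "B"), s=(0.2, 0.5, 0.3)))  # 0.2
```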

  4. Context theory – An example Attention parameters: s1 = 0.2, s2 = 0.5, s3 = 0.3. Set of category items:
< D1:A, D2:A, D3:B > Disease A
< D1:A, D2:B, D3:A > Disease A
< D1:B, D2:A, D3:A > Disease A
< D1:C, D2:A, D3:B > Disease B
< D1:B, D2:C, D3:C > Disease B
Classifying new item < D1:C, D2:A, D3:B > in A:
< A, A, B > vs < C, A, B > = 0.2 * 1.0 * 1.0 = 0.20
< A, B, A > vs < C, A, B > = 0.2 * 0.5 * 0.3 = 0.03
< B, A, A > vs < C, A, B > = 0.2 * 1.0 * 0.3 = 0.06
< C, A, B > vs < C, A, B > = 1.0 * 1.0 * 1.0 = 1.00
< B, C, C > vs < C, A, B > = 0.2 * 0.5 * 1.0 = 0.10
Membership(< C, A, B >, A) = (0.20 + 0.03 + 0.06) / (0.20 + 0.03 + 0.06 + 1.00 + 0.10) = 0.21
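A self-contained Python sketch of this calculation under the multiplicative rule from slide 3 (the data structures and function names are my own):

```python
# A self-contained sketch of the slide-4 calculation, applying the multiplicative
# rule from slide 3. Data structures and names are illustrative.
from math import prod

S = (0.2, 0.5, 0.3)  # attention parameters s1, s2, s3

TRAINING = [
    (("A", "A", "B"), "A"),
    (("A", "B", "A"), "A"),
    (("B", "A", "A"), "A"),
    (("C", "A", "B"), "B"),
    (("B", "C", "C"), "B"),
]

def similarity(x, y, s=S):
    # 1.0 for each matching dimension, s[d] for each mismatching one, multiplied.
    return prod(1.0 if a == b else s[d] for d, (a, b) in enumerate(zip(x, y)))

def membership(item, category):
    in_category = sum(similarity(item, y) for y, c in TRAINING if c == category)
    overall = sum(similarity(item, y) for y, _ in TRAINING)
    return in_category / overall

# Note: applied strictly, the < B, C, C > exemplar mismatches the test item on all
# three dimensions (0.2 * 0.5 * 0.3 = 0.03), so this prints 0.22 rather than the
# 0.21 shown on the slide, where that term was taken as 0.10.
print(round(membership(("C", "A", "B"), "A"), 2))
```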

  5. My Model It is this exemplar-based model that formed the basis of my own model. I attempted to model the results of the previous experiment, in which participants learned from 16 training items and were then asked to classify 5 new test items. The participants rated each new test item’s membership in each category (A, B, and C) and each conjunction (A&B, A&C, and B&C). Before constructing the model based on the exemplar approach, I identified several characteristics that could potentially be important to its structure. They were: • The high frequency of certain features in several disease categories • The basis for the classification of an item in conjunctive categories • The attention parameters (s1, s2, s3)

  6. Feature frequency per category I wanted to create a model based partially on the identification of certain features having a notably high frequency per dimension in several disease categories. For example, of the four instances of feature A appearing in dimension 1 of the training items, three appear in category A and one appears in the conjunctive category A & B. It was decided that any test item with a feature A in dimension 1 would be compensated when being identified as category A, despite the feature not matching that particular training item. That is, despite the dimension 1 feature of training item 4 being Y, a test item with the dimension 1 feature A would be weighted using a new measure (3.5/4 = 0.875) instead of the lower attention parameter.
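As a rough illustration of this weighting (the function and argument names below are hypothetical; only the counting scheme of three category-A occurrences plus half credit for the conjunctive one, out of four in total, comes from the slides):

```python
# Illustrative sketch of the feature-frequency weight described above.
# Function and argument names are assumptions, not taken from the slides.

def frequency_weight(count_in_category, count_in_conjunctions, total_count):
    """Full credit for occurrences in the single category, half credit for
    occurrences in a conjunctive category, divided by all occurrences."""
    return (count_in_category + 0.5 * count_in_conjunctions) / total_count

# Feature A in dimension 1: three occurrences in category A, one in A & B.
print(frequency_weight(3, 1, 4))  # 0.875
```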

  7. Justification The reasoning behind this decision was to mimic how the participants identified patterns in the training item features and how they are distributed from category to category. The weight value 3.5/4 = 0.875 was based on the three category A instances plus the instance in the conjunctive category (assigned a relative value of 0.5). Similarly, high feature frequency was identified in category B (feature B is given weight values of 2.5/3 = 0.833 and 4.5/6 = 0.75 for dimensions 2 and 3 respectively) and category C (feature C appears only in dimension 3 for this category and so is given a weight value of 1 to reflect its importance, and is designated 4/5 = 0.8 for dimension 3). The impact this has on the model varies from almost irrelevant to subtly influential; changing the first weight value from 0.875 to 0.1 has no tangible effect, while changing the last weight value from 0.8 to 0.5 (still higher than the attention parameter) changes the predictions for two of the test items.

  8. Classification of an item in conjunctive categories My next major decision was how to give each test item a score as a member of each conjunction of categories. It did not seem adequate to compute an item’s membership in a conjunction by simply combining that item’s computed membership in the first category with its computed membership in the second, as this treats the two computed memberships as independent of each other. I wanted to create a method of calculating a conjunctive membership score that also took into account a test item’s non-membership of the remaining category. To do this I used the formula shown on the slide.

  9. Justification This formula awards a high conjunctive category score to test items that score highly, and almost equally, in the two singular categories. That, combined with a low score in the remaining singular category, determines how a test item’s membership of a conjunctive category is calculated. By doing this I hoped to adequately reflect how people reason about conjunctions and how they classify a test item as a member of such a conjunctive category.
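The formula itself is not reproduced in the transcript. Purely as an illustration of the behaviour described above (a high score when the two single-category memberships are both high and nearly equal, pulled down when the item also scores well in the remaining category), one possible form, which is my guess rather than the author's actual formula, is:

```python
# A hypothetical conjunctive score with the properties described on slide 9.
# This is an illustrative guess, not the formula actually used in the model.

def conjunction_score(m_first, m_second, m_other):
    """High when m_first and m_second are both high and close together,
    reduced when the item also scores well in the remaining category."""
    agreement = 1.0 - abs(m_first - m_second)  # rewards near-equivalence
    return min(m_first, m_second) * agreement * (1.0 - m_other)

# An item strong and balanced in A and B, and weak in C, scores high for A & B:
print(round(conjunction_score(0.8, 0.75, 0.1), 3))  # 0.641
```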

  10. Attention Parameters Finally, suitable attention parameters had to be chosen. I reasoned that relatively low values should be chosen, since this would balance out the ‘reward’ a test item received for possessing a highly frequent feature. Initially these values were set trivially (s1 = 0.2, s2 = 0.2, s3 = 0.2). I decided that a useful way of gauging optimal values for these parameters was to find the maximum difference between the participants’ mean membership scores and the model’s equivalent computed membership scores (the average difference could also be used). Then, through a process of trial and error, I altered the attention parameters so that this difference was minimised. I eventually arrived at the values s1 = 0.2, s2 = 0.4, s3 = 0.35. The final table of scores obtained using the model is shown on the next slide.
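A sketch of this tuning step, written as a simple grid search (the grid search itself, the function names, and the placeholder inputs are my assumptions; the slide only describes trial and error that minimises the maximum difference):

```python
# Illustrative grid search over the attention parameters. The real model and the
# participants' mean membership scores are not reproduced here.
import itertools

def max_difference(model_scores, participant_scores):
    """Largest absolute gap between model and participant membership scores."""
    return max(abs(m - p) for m, p in zip(model_scores, participant_scores))

def fit_attention_parameters(compute_model_scores, participant_scores, step=0.05):
    """Try every (s1, s2, s3) combination on a coarse grid and keep the one that
    minimises the maximum difference (the average difference could be used)."""
    grid = [round(i * step, 2) for i in range(1, int(1 / step))]
    best = None
    for s1, s2, s3 in itertools.product(grid, repeat=3):
        gap = max_difference(compute_model_scores(s1, s2, s3), participant_scores)
        if best is None or gap < best[0]:
            best = (gap, (s1, s2, s3))
    return best  # (smallest maximum difference, best (s1, s2, s3))

# Call shape, with a dummy model standing in for the real one:
# best_gap, (s1, s2, s3) = fit_attention_parameters(lambda a, b, c: [a, b, c], [0.2, 0.4, 0.35])
```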

  11. Table of Results

  12. Correlation Correlation Score = 0.91

  13. Performance and Assessment As can be seen, the model’s predictions correlate well with the ratings given by the participants in the experiment. However, the model fails to show suitable robustness, and flaws can be identified when it is held up to any moderate level of scrutiny. For example, the predicted category scores are extremely sensitive to changes in the attention parameters (changing s3 from 0.35 to 0.36, for instance, changes test item 4’s designation from C to B). It can be argued from this that the model is tailored to fit the data, and may not produce similarly accurate results when given a new set of test items to predict.
