
Configurable, Incremental and Re-structurable Contributive Learning Environments






Presentation Transcript


  1. Configurable, Incremental and Re-structurable Contributive Learning Environments Dr Kinshuk Information Systems Department Massey University, Private Bag 11-222 Palmerston North, New Zealand Tel: +64 6 350 5799 Ext 2090 Fax: +64 6 350 5725 Email: kinshuk@massey.ac.nz URL: http://fims-www.massey.ac.nz/~kinshuk/

  2. Reusability • Benefits are widely known • However, early promises of time and cost savings have not materialised • In software reuse, only trivial pieces of code can be reused in another context without much effort

  3. CIRCLE Architecture • The only way to increase usability, and in the process automatically increase reusability, is to allow the implementing teacher to contribute through: • configuring the learning space • incrementally adding to and re-structuring the scope and functionality of IES components • Early adoption: HyperITS

  4. HyperITS • No pre-defined sequence of operations • Concepts linked in an interrelationship network • Inconsistency results in graded feedback, leading the learner gradually back to the starting point • Misconceptions and missing conceptions are identified.

  5. HyperITS • Emphasis on cognitive skills development • Uses the cognitive apprenticeship approach to develop cognitive skills: • Observation • Imitation • Dynamic feedback while learning • Interpretation of data • Static feedback from testing

  6. HyperITS • Granular design • Domain concepts are acquired in the context of their inter-related concepts • Interfaces are brought up to give: • Another perspective on the data set • A fine-grained interface giving the details of a coarse-grained presentation • A fine-grained basic application to revise steps at a more advanced level

  7. HyperITS • Process modelling • Overcomes the shortcomings of the overlay model • Supports understanding of the learner’s mental processes • Allows finding optimal and sub-optimal paths in the learning process

  8. HyperITS Architecture

  9. Knowledge representation

  10. Knowledge repres. framework

  11. Domain Layer • Static domain content provided by the designing teacher: • Concepts, the smallest learning units • Relationships among concepts • Priorities associated with the relationships • Custom operator definitions • Constraints on backward chaining, if desired
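The static content the designing teacher supplies to the domain layer (concepts, relationships, priorities, custom operators) could be represented along these lines. This is a hypothetical sketch; all class names, field names, and the Ohm's-law example domain are illustrative assumptions, not part of the system described.

```python
# Hypothetical sketch of the domain layer's static content: concepts as the
# smallest learning units, relationships among them, teacher-assigned
# priorities, and custom operator definitions. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Relationship:
    target: str                 # concept derived by this relationship
    sources: tuple              # concepts the derivation depends on
    priority: int               # lower value = preferred relationship
    derive: callable = None     # custom operator supplied by the teacher

@dataclass
class DomainLayer:
    concepts: set = field(default_factory=set)
    relationships: list = field(default_factory=list)

    def add_concept(self, name):
        self.concepts.add(name)

    def add_relationship(self, rel):
        self.relationships.append(rel)

# Example: Ohm's law as a teacher-defined operator (illustrative domain only)
domain = DomainLayer()
for c in ("voltage", "current", "resistance"):
    domain.add_concept(c)
domain.add_relationship(Relationship(
    target="voltage", sources=("current", "resistance"),
    priority=1, derive=lambda i, r: i * r))
```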

  12. Teacher Model Layer • Consists of the pedagogy base reflecting various tutoring strategies and scaffolding provided by the implementing teacher • Optional problem bank created by the implementing teacher to situate the concepts in a particular context • The teacher can also provide additional diverse contexts

  13. Contextual Layer • Contains the current goals and structural information of current tasks: • system’s solution to current problem; • system’s problem solving approach; • immediate goals. • This information is dynamically updated along with the learner’s progress in problem solving.

  14. Initialization functionality • Domain representation initialiser • initialises the system according to the current learning goal for all types of problems. • Random problem generator • randomly selects concepts to treat as independents and creates their instances by randomly generating values within specified boundaries.

  15. Initialization functionality • Prediction boundary initialiser • initialises the boundaries for the overlay model (how far a student’s solution may deviate from the expert solution). • These boundaries are used later to evaluate a learner’s action.
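One simple way to realise such prediction boundaries is a tolerance band around each value in the expert solution; a learner's answer is accepted if it lands inside the band. The relative-tolerance scheme below is an assumption for illustration, not the system's actual policy.

```python
# Sketch of initializing prediction boundaries: a tolerance band around each
# value in the expert solution. The relative-tolerance scheme is assumed.
def init_boundaries(expert_solution, rel_tol=0.05):
    """Return {concept: (low, high)} bands around the expert values."""
    return {c: (v * (1 - rel_tol), v * (1 + rel_tol))
            for c, v in expert_solution.items()}

# e.g. a 5% band around an expert value of 10.0
boundaries = init_boundaries({"voltage": 10.0}, rel_tol=0.05)
```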

  16. If independent variable introduced • Contextual dependency finder • identifies the dependent concepts that can be derived within the current state of the problem space. • Dependency activator (client side) • activates the instances of the contextually dependent concepts and invokes the dependency calculator at the server to update their current status in the expert solution.

  17. If independent variable introduced • Dependency calculator (server side) • provides values for the dependent concepts based on domain layer and pedagogy base to update the expert solution. • This functionality allows a learner to adopt a different route to the solution than the one currently adopted by the system.
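The dependency calculation can be pictured as a fixed-point pass over the teacher-defined relationships: any concept whose source concepts are all known gets derived, until nothing new can be added. This is a sketch under assumed data structures; the tuple layout and example relationships are illustrative.

```python
# Illustrative dependency calculator: given the values introduced so far,
# repeatedly derive any concept whose source concepts are all known, using
# teacher-defined operators. Structure and names are assumptions.
def calculate_dependents(known, relationships):
    """known: {concept: value}; relationships: iterable of
    (target, sources, derive) tuples. Returns the updated solution."""
    solution = dict(known)
    changed = True
    while changed:               # fixed-point iteration over derivable concepts
        changed = False
        for target, sources, derive in relationships:
            if target not in solution and all(s in solution for s in sources):
                solution[target] = derive(*(solution[s] for s in sources))
                changed = True
    return solution

rels = [("voltage", ("current", "resistance"), lambda i, r: i * r),
        ("power", ("voltage", "current"), lambda v, i: v * i)]
solution = calculate_dependents({"current": 2.0, "resistance": 5.0}, rels)
```

Because the loop runs to a fixed point, it does not matter in which order the learner introduces independents, which is what lets the learner take a different route to the solution than the one currently adopted by the system.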

  18. Setting validation bounds for dependent variables • Prediction boundary updater • updates the prediction boundaries used in comparing a learner’s solution with the expert solution. The updater fine-tunes the system’s initial prediction boundaries to match the route to solution adopted by a learner.

  19. Validation of learner’s input to dependent variables • Discrepancy evaluator • evaluates the validity of a learner’s attempt by matching it with the expert solution within the prediction boundaries.
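Given prediction boundaries of the kind set up during initialization, the discrepancy check itself reduces to an interval test per concept. A minimal sketch, assuming boundaries are stored as `(low, high)` pairs:

```python
# Sketch of the discrepancy evaluator: a learner's value for a dependent
# concept is accepted if it falls inside the prediction boundaries around
# the expert value. The boundary representation is an assumption.
def evaluate_attempt(concept, learner_value, boundaries):
    low, high = boundaries[concept]
    return low <= learner_value <= high

boundaries = {"voltage": (9.5, 10.5)}
ok = evaluate_attempt("voltage", 10.2, boundaries)    # inside the band
bad = evaluate_attempt("voltage", 12.0, boundaries)   # outside the band
```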

  20. Validation of learner’s input to dependent variables • Dynamic feedback generator • provides context-based feedback to the learner. The messages are generated dynamically to improve semantics and to prevent monotony. • Granular approach is used in identifying the source of error and for providing feedback.

  21. Validation of learner’s input to dependent variables • Dynamic feedback generator • i. Basic misconceptions, where the learner fails to derive a variable due to misconceptions about the critical concepts. In such cases, graded scaffolding is used: • ask the learner to try again; • suggest the relationship to be used; • provide the calculation data; • show the full calculation, and allow the learner to proceed.
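The four-step graded scaffolding above can be sketched as an escalation ladder keyed on how many times the learner has failed at the same concept. The four levels mirror the slide; the message wording, function name, and example values are assumptions.

```python
# Illustrative graded scaffolding: each failed attempt at the same concept
# escalates the feedback, from a bare retry prompt up to the full worked
# calculation. The four levels mirror the slide; the wording is assumed.
SCAFFOLD_LEVELS = [
    "Please try again.",
    "Hint: consider the relationship {relation}.",
    "Use these values in {relation}: {data}.",
    "Full calculation: {relation} with {data} gives {answer}. You may proceed.",
]

def graded_feedback(attempt_number, relation, data, answer):
    # clamp to the last level after repeated failures
    level = min(attempt_number, len(SCAFFOLD_LEVELS)) - 1
    return SCAFFOLD_LEVELS[level].format(relation=relation,
                                         data=data, answer=answer)

msg1 = graded_feedback(1, "V = I * R", "I=2, R=5", 10)
msg4 = graded_feedback(4, "V = I * R", "I=2, R=5", 10)
```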

  22. Validation of learner’s input to dependent variables • Dynamic feedback generator • ii. Missing conceptions, when the learner unsuccessfully tries to derive a variable that requires the derivation of intermediate variables, the error arising from missing knowledge about intermediate relationships. • The system suggests that the learner derive the intermediate concept first.

  23. Validation of learner’s input to dependent variables • Dynamic feedback generator • iii. If the learner unsuccessfully tries to derive certain complex concepts, the system advises the learner to use a finer-grained interface. • The finer-grained interface deconstructs the complex concept into its components, capturing the misconceptions at a fine-grained level.

  24. Evaluating learner’s process of deriving solution • Local optimiser • identifies the possible relationships and determines the best relationship to use based on the priorities specified in the domain layer. • It allows the system to identify any sub-optimal approach adopted by the learner.
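Selecting the best relationship by the priorities specified in the domain layer can be pictured as a filter-then-minimise step over the currently applicable relationships. This is a sketch under assumed data structures; the tuple layout and example priorities are illustrative.

```python
# Sketch of the local optimiser: among the relationships that can currently
# derive the target concept, pick the one with the best (lowest) priority as
# specified in the domain layer; any other relationship the learner used can
# then be flagged as sub-optimal. Tuple layout is an assumption.
def best_relationship(target, known, relationships):
    """relationships: iterable of (target, sources, priority) tuples;
    known: the set of concepts with values in the current problem state."""
    applicable = [r for r in relationships
                  if r[0] == target and all(s in known for s in r[1])]
    return min(applicable, key=lambda r: r[2]) if applicable else None

rels = [("voltage", ("current", "resistance"), 1),
        ("voltage", ("power", "current"), 2)]
best = best_relationship("voltage", {"current", "resistance", "power"}, rels)
```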

  25. Finally... Adequate technologies are rapidly emerging that can be harnessed for deploying the CIRCLE architecture, for example the Distributed Component Object Model (DCOM) for Microsoft development tools, or Remote Method Invocation (RMI) for Java.
