AI – Week 23 – TERM 2



AI – Week 23 – TERM 2 Machine Learning and Natural Language Processing

Lee McCluskey, room 3/10

Email [email protected]

http://scom.hud.ac.uk/scomtlm/cha2555/



Term 2: Draft Schedule for Semester 2

13 - Introduction to Machine Learning

14 – Machine Learning - Knowledge Discovery / Data Mining 1

15 - Machine Learning - Knowledge Discovery / Data Mining 2

16 - Machine Learning of Planning Knowledge - 1

17 - Machine Learning of Planning Knowledge - 2

Reading Week

19 - Machine Learning - Reinforcement Learning

20 – Machine Learning – Neural Networks

21 – Natural Language Processing 1

22 - Natural Language Processing 2

Easter Break

23 - Natural Language Processing 3

24 - REVISION

School of Computing and Engineering



Learning - DEFINITIONS

Learning is fundamental to intelligent behaviour

  • Learning is loosely defined as a “change in behaviour”.

  • Wikipedia has it as “acquiring new, or modifying existing, knowledge, behaviours, skills, values, or preferences and may involve synthesizing different types of information.”

  • There are taxonomies of learning, and various ways that learning has been utilised by machines.

  • Question: How could “Machine Learning” be applied to Computer Games?


Types of Learning

Learning by ROTE - the simplest type of learning?

This is purely storing and remembering "facts", e.g. memorising a telephone directory or arithmetic tables ("times tables")

Store and retrieve –

  • No processing needed on the inputs

  • No recognition of the “meaning” of inputs

  • No integration of learned knowledge with other knowledge.

    AI Example: A program that stores a game board and the next best move. When an identical game board is seen in the future, the best move can be retrieved.

    A program that increases a database of facts could be considered to be learning by rote.
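As a minimal sketch (illustrative names, not from the lecture), rote learning for the game-board example is just a lookup table:

```python
# Rote learning as pure store-and-retrieve: the "learner" memorises
# (board, best_move) pairs verbatim, with no processing of the inputs.

memory = {}

def learn(board, best_move):
    memory[board] = best_move      # store the fact as-is

def recall(board):
    return memory.get(board)       # retrieve only for an identical board

learn(("X", "O", "X"), "play-centre")
print(recall(("X", "O", "X")))     # play-centre
print(recall(("O", "X", "O")))     # None: no generalisation to unseen boards
```

Note there is no recognition of the board's "meaning": a board differing in one square misses the table entirely.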

    Learning by BEING TOLD (programmed)

    This is storing and remembering, but implies some kind of understanding / integration of what is being told with previous knowledge.

  • Not just facts, but procedures or plans


Types of Learning

Learning by EXAMPLE (trained/taught)

This involves a benevolent teacher who gives classified examples to the learner. The learner performs some generalisation of the examples to infer new knowledge. Previous knowledge may be used to steer the generalisations.

Learning by ANALOGY

Here the learner performs a generalisation based on some previously learnt situation, transferring knowledge from a familiar problem to a similar new one, rather than from teacher-supplied examples.


Types of Learning

Learning by OBSERVATION (self-taught)

This is similar to Learning by Example but without classification by a teacher: the learner uses pre-learned information to help classify observations (e.g. "conceptual clustering")

Learning by DISCOVERY

This is the highest level of learning, covering invention etc., and is composed of some of the other types above


Types of Learning

Our CLASSIFICATION is based loosely on the "amount of autonomy" or processing required by the learner to effect a change of behaviour in an agent. In order of increasing autonomy or processing required by the learner:

by rote

by being told

by example

by analogy

by observation

by discovery


Another Way to Categorise Learning

TWO ASPECTS OF LEARNING:

A: KNOWLEDGE/SKILL ACQUISITION

  • Inputting NEW knowledge or procedures

  • For example, learning the rules of a new game

    B: KNOWLEDGE/SKILL REFINEMENT

  • Changing/integrating old knowledge to create better (operational) knowledge (little or no new knowledge is input)

  • Learning heuristics (improve search)

  • For example, getting skillful at a new game

    Acquisition and Refinement combine in obvious ways, e.g. using examples (new knowledge) to bias the refinement of skills (old knowledge) in the areas covered by the examples


“Concrete” Example of Machine Learning: Learning Macros

  • Imagine you have just solved a problem; how or what can you *learn* from the process?

  • One way: learn the (minimal) characteristics of the Situation when you can apply the solution again.

  • For example, the solution may be a plan. Under what conditions can we use this plan again?

    PROCESS: A planner solves a problem and induces one or more macros from the solution sequence by “compiling” (part of) the operator sequence into one macro.

    1. Find a solution T = (o(1),..,o(N)) to a goal G from initial state I

    2. Form a Macro- Operator (macro) based on:

    Pre-condition: WP = Weakest Precondition(T, G)

    Post-condition: G

    3. In the future, if G is to be achieved and WP is true in the current state, apply T.


Learning Macros – Rocket Example

Operator Schema:

move(R,A,B)

pre: at(R,A), A \= B

eff+: at(R,B)

eff-: at(R,A)

load(R,C,L)

pre: at(R,L), at(C,L)

eff+: in(C,R)

eff-: at(C,L)

unload(R,C,L)

pre: at(R,L), in(C,R)

eff+: at(C,L)

eff-: in(C,R)

NB the fact that the operators are represented DECLARATIVELY makes it easier for processes to reason with them (not just Planning but Learning also).
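To illustrate that point, here is one way (a sketch, not the lecturer's code) to hold the schemas as plain data that both a planner and a learner can inspect:

```python
# The rocket operator schemas as data: pre, eff+ and eff- are just lists
# of literal strings, so other processes can reason over them directly
# instead of having the behaviour buried in procedural code.
operators = {
    "move":   {"params": ["R", "A", "B"],
               "pre":  ["at(R,A)"],              # plus the constraint A \= B
               "eff+": ["at(R,B)"],
               "eff-": ["at(R,A)"]},
    "load":   {"params": ["R", "C", "L"],
               "pre":  ["at(R,L)", "at(C,L)"],
               "eff+": ["in(C,R)"],
               "eff-": ["at(C,L)"]},
    "unload": {"params": ["R", "C", "L"],
               "pre":  ["at(R,L)", "in(C,R)"],
               "eff+": ["at(C,L)"],
               "eff-": ["in(C,R)"]},
}
print(operators["unload"]["pre"])   # ['at(R,L)', 'in(C,R)']
```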


Learning Macros

  • WP(action, goal) = pre(action) ∪ (goal \ eff+(action))

    e.g. WP(unload(r,c1,paris), {at(c1,paris), at(c2,paris)} ) =

    {at(r,paris), in(c1,r), at(c2,paris) }

  • EXAMPLE: Now try WP(T,G) when…

    Initial State: at(r,london), at(c1,london), at(c2,london)

    Goal: at(c1,paris)

    Solution: load(r,c1,london), move(r,london,paris), unload(r,c1,paris)

    The Macro is an operator with precondition WP(T,G) and eff+ containing G
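The whole calculation can be sketched in Python by regressing the goal backwards through the solution sequence; ground literals are plain strings, and the hand-written operator instances below are an assumption of the sketch:

```python
# Weakest-precondition regression over a plan, using the rule
# WP(action, goal) = pre(action) ∪ (goal \ eff+(action)).

def wp(pre, eff_plus, goal):
    """Weakest precondition of one action with respect to a goal set."""
    return set(pre) | (set(goal) - set(eff_plus))

def wp_of_plan(plan, goal):
    """Regress the goal backwards through the whole operator sequence T."""
    current = set(goal)
    for pre, eff_plus in reversed(plan):
        current = wp(pre, eff_plus, current)
    return current

# Rocket example: each step is (pre, eff+) of a ground operator instance.
plan = [
    ({"at(r,london)", "at(c1,london)"}, {"in(c1,r)"}),     # load(r,c1,london)
    ({"at(r,london)"},                  {"at(r,paris)"}),  # move(r,london,paris)
    ({"at(r,paris)", "in(c1,r)"},       {"at(c1,paris)"}), # unload(r,c1,paris)
]
goal = {"at(c1,paris)"}
print(sorted(wp_of_plan(plan, goal)))  # ['at(c1,london)', 'at(r,london)']
```

The result says the learned macro can be replayed from any state where the rocket and c1 are both in London, regardless of where c2 happens to be.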


Applying Macros

  • Learned Macro is (WP(T,G), G, T)

  • We can further GENERALISE the Macro (WP(T,G), G, T) by changing constants to variables, because operators are NOT dependent on particular instances of variables.

    APPLICATION:

  • In state space search: if a state is encountered which contains WP(T,G), and we want to achieve a G’ which contains G, then APPLY T.

  • NB: Macros can speed up solutions, but can also cause MORE SEARCH if not used wisely. So we have to be careful what we learn!
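A minimal applicability check (a sketch with hypothetical names) follows directly from the triple (WP(T,G), G, T):

```python
# A learned macro applies when the current state satisfies WP and the
# goal being pursued contains the goal G the macro was learned for.
def macro_applies(macro, state, goal):
    wp_set, g_set, _plan = macro
    return wp_set <= state and g_set <= goal   # subset tests on literal sets

macro = ({"at(r,london)", "at(c1,london)"},    # WP(T,G)
         {"at(c1,paris)"},                     # G
         ["load(r,c1,london)", "move(r,london,paris)", "unload(r,c1,paris)"])

state = {"at(r,london)", "at(c1,london)", "at(c2,london)"}
goal  = {"at(c1,paris)", "at(c2,paris)"}
print(macro_applies(macro, state, goal))   # True: T can be replayed here
```

In a real planner this test would sit alongside ordinary operator selection, which is exactly where an ill-chosen macro can enlarge rather than shrink the search space.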


Conclusion

  • There are various types of Learning manifest in nature and in AI

  • Two important roles for Learning are in Knowledge Acquisition and Knowledge Refinement

  • Macro acquisition is a form of Knowledge Refinement for Problem Solving / Planning, where procedures are learned to make plan generation more efficient. This can be done by working out the weakest precondition of a “solution” and storing it with the goal achieved.

School of Computing and Engineering