KNOWLEDGE REPRESENTATION. Classical cognitive science and Artificial Intelligence relied on the idea of “knowledge representation”, the Representational-Computational theory of mind: knowledge consists of mental representations (mental symbols).
1) Symbols or data structures:
2) Algorithms (step-by-step procedures to operate on those structures).
For instance, a procedure may “reverse” the order of elements in a list.
Data structures correspond to mental representations.
“The fundamental working hypothesis of AI is that intelligent behavior can be precisely described as symbol manipulation and can be modeled with the symbol processing capabilities of the computer.”
Robert S. Engelmore and Edward Feigenbaum
Different theories have different views about how the mind represents knowledge.
The “symbols” could include:
CONCEPTUAL SYMBOLS (frames, scripts).
P —> Q
PQE —> (QQF <—> P)))) ((
Not all expressions are well-formed.
Many expressions in the previous page were not well-formed.
A well-formed formula (wff) is an expression that follows certain rules:
1. Any sentence letter by itself is well-formed.
2. If we add the negation symbol ~ to any well-formed expression, the result is also well-formed.
Note: Since ~P is well-formed, it follows from rule 2 that ~~P is also well-formed.
Note: ~ goes together with only one expression.
~PQ is not well-formed.
3. Given two well-formed expressions, the result of connecting them by means of &, —>, <—>, or V is also a well-formed formula.
Example: P, Q, and ~Q are all well formed, so the following are also well-formed:
P & ~Q
P <—> Q
P —> Q
Note: &, —>, <—>, and V must always go together with two wffs. Otherwise, the expression is not well-formed.
4. No other expression is well-formed.
More examples of well-formed formulas: A & B, ~P —> Q, ~(A & B), ~(A V B), (~A & ~B).
Sample expression that is not well-formed: A V —> (here V does not join two wffs).
Exercise--Translate the following into the propositional calculus:
Maggie is smiling but Zoe is not smiling
If Zoe does not smile, then Janice will not be happy
Maggie’s smiling is necessary to make Janice happy.
If Maggie smiles although Janice is not happy, then Zoe will smile.
Use the following translation scheme:
A: Maggie is smiling
B: Zoe is smiling
C: Janice is happy
A & ~B
~B —> ~C
A —> C
If Maggie smiles although Janice is not happy, then Zoe will smile.
(A & ~C) —> B
If A is true, then ~A is false.
If A is false, then ~A is true.
A & B is only true if both A and B are true. Otherwise, it is false.
A  B | A V B
T  T |   T
T  F |   T
F  T |   T
F  F |   F
A V B is false if both A and B are false. Otherwise, it is true.
A  B | A —> B
T  T |   T
T  F |   F
F  T |   T
F  F |   T
A —> B is true, except when A is true and B is false.
A  B | A <—> B
T  T |   T
T  F |   F
F  T |   F
F  F |   T
A <—> B is only true if A and B are both true or both false.
Construct a TT for the expression
(P —> Q) V (~Q & R)
First write down all the possible combinations for P, Q, and R.
Construct first the table for P —> Q, then for ~Q, then for ~Q & R.
Now you can do a TT for the whole formula!
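The construction above can be checked mechanically. Here is a minimal Python sketch (not part of the original slides) that enumerates all eight rows of the truth table for (P —> Q) V (~Q & R):

```python
from itertools import product

def implies(a, b):
    # A -> B is false only when A is true and B is false.
    return (not a) or b

# Step 1: write down all possible combinations for P, Q, and R.
# Step 2: evaluate the whole formula (P -> Q) V (~Q & R) for each row.
rows = []
for p, q, r in product([True, False], repeat=3):
    value = implies(p, q) or ((not q) and r)
    rows.append(((p, q, r), value))

for (p, q, r), value in rows:
    print(f"{p!s:5} {q!s:5} {r!s:5} -> {value}")
```

Running it shows that the only row on which the whole formula comes out false is P true, Q false, R false.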
To prove this, please construct a truth table, and you will see that for every value of P and Q the whole formula comes out true!
P & ~P
(P <—> Q) —> (P V ~R)
…the philosopher Ludwig Wittgenstein in his book Tractatus Logico-Philosophicus.
The Modus Ponens
P → Q
P
Therefore Q
The Modus Tollens
P → Q
~Q
Therefore ~P
P & Q
Given the proposition P, we can conclude all of the following:
P V Q
(R –> T) V P
(R & F) V P
P –> Q
(P –> Q) V (~P & F)
(P –> Q) V T
P V Q
~P V Q
An interpretation of this example:
If it is raining, I will bring an umbrella.
It is raining.
I will bring an umbrella.
P ↔ Q
Q ↔ P
P V ~R
There are many very simple and important examples of logical reasoning that cannot be expressed in the propositional calculus:
All human beings are animals;
Bryan is a human being,
therefore Bryan is an animal.
All positive integers are divisible by themselves.
2 is an integer,
therefore 2 is divisible by itself.
Every sentence in this type of argument is different from every other sentence.
Different sentences are represented by different sentence letters (for instance, A, B, and C).
We cannot represent the similarities between the various sentences, because we take the whole sentence as a unit.
Propositional logic cannot show how it is possible to conclude the last sentence from the first.
…show how different sentences share the same parts.
How to express the similarity between:
We can represent them as follows:
The sentence “Hector is Spanish” can be written as Fa,
where a = Hector and F = is Spanish.
The letter “a” represents “Hector”.
The letter is a name for the subject.
F represents the predicate.
Hector is Spanish
Picasso is Spanish
One sentence can be represented as Fa (Hector is Spanish) and the other as Fb (Picasso is Spanish).
“are animals”, “is Spanish”, and “is smiling” are incomplete expressions.
All people are rational animals.
All people will die some day.
a, b, c, d, a1, b1, c1, d1, a2, …
Names represent individuals, like “Tim” or “this chair”
u, v, w, x, y, z, u1, v1, w1, …
3. Predicate letters:
A, B, C, …, Z, A1, …, Z1, A2, …
4. The identity symbol ‘=‘
a) The universal quantifier ∀x
The universal quantifier corresponds to “every” or “all”.
b) The existential quantifier ∃x
The existential quantifier corresponds to “some”, which means “at least one”.
A quantifier must always be followed by a variable (never a name).
6. All the elements of the propositional calculus: sentence letters, connectives, and parentheses.
The Predicate Calculus is an extension of the propositional calculus.
It includes the same elements plus several new ones (names, variables, predicate letters, and the identity symbol).
For convenience, we can also introduce the symbol ≠ as an abbreviation.
a ≠ b really means ~(a = b).
The symbol ≠ is not really part of the basic vocabulary of the Predicate Calculus.
1. All the rules of the propositional calculus also apply to the Predicate Calculus.
2. A predicate letter followed by one or more names is well-formed (for instance, Fa).
3. Expressions of the form a = b (identity of names) are wff.
Strictly speaking, identity is a kind of predicate.
The proper way of writing this should be =ab (a predicate letter followed by two names).
For historical reasons, however, it is written a = b.
4. The result of replacing a name in a wff by a variable, and prefixing a quantifier (∀ or ∃) with that variable, is also well-formed.
Note: A variable must be bound by a quantifier. Fx by itself is not well-formed.
5. Nothing else is well-formed.
Are the following wff?
(Hint: There is only one wff here!)
Translate the following sentences:
∀x(Sx —> Cx)
2. Some Chinese people live in Hong Kong
∃x(Cx & Hx)
3. Not all Chinese people live in Hong Kong
~∀x(Cx —> Hx)
Alternative form: ∃x(Cx & ~Hx)
4. Only Spanish people live in Madrid
(M = lives in Madrid; S = is Spanish)
∀x(Mx —> Sx)
The formula ∀x(Sx —> Mx) is an incorrect translation!
Why is it incorrect?
5. No people study in City University unless they are stupid.
(P = are people; U =study in CityU; S = are stupid)
∀x(Px —> (~Ux V Sx))
Another form: ∀x((Px & Ux) —> Sx)
How to indicate that there are exactly n things?
For instance, exactly one, or exactly two, etc.
∃x∀y x=y (Exactly one)
∃x∃y (x≠y & ∀z (z=x V z=y)) (Exactly two)
∃x∃y∃z (((x≠y & x≠z) & y≠z) & ∀w ((w=x V w=y) V w=z)) (Exactly three)
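The numerical quantifiers above can be verified by brute force over a finite domain. A Python sketch (the function names are mine, not from the slides), where `any` plays the role of ∃ and `all` the role of ∀:

```python
def exactly_one(domain):
    # ∃x ∀y (x = y): some element equals everything in the domain.
    return any(all(x == y for y in domain) for x in domain)

def exactly_two(domain):
    # ∃x ∃y (x ≠ y & ∀z (z = x V z = y))
    return any(
        x != y and all(z == x or z == y for z in domain)
        for x in domain
        for y in domain
    )

print(exactly_two({"Hector", "Picasso"}))  # True
print(exactly_two({"Hector"}))             # False
```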
The identity symbol = ,
Take the formula Fa & Ga. By existentialization, we can conclude ∃x (Fx & Gx).
Take the formula Fa & Ga. By universalization (treating a as arbitrary), we can conclude ∀x (Fx & Gx).
The reverse of existentialization and universalization is instantiation.
The formula Fa & Ga is an instance of the formula ∃x (Fx & Gx) and of the formula ∀x (Fx & Gx).
Here is a simple chain of reasoning:
Fa –> Ga
∃x (Fx & Gx)
∃x (Gx & ~Fx)
∀x (Gx –> Hx)
∃x (Hx & ~Fx)
∃x (Gx & ~Fx)
∀x (Gx –> Hx)
Ga –> Ha
Ga & ~Fa
Ha & ~Fa
∃x (Hx & ~Fx)
∀x ~Fx can be turned into: ~∃x Fx
~∀x Fx can be turned into: ∃x ~Fx
Another example of Leibniz’s Law. Given Fa & Ga and a = b, we can conclude any of the following:
Fb & Ga
Fb & Gb
Fa & Gb
Take this as an introductory overview…
Four cards appear on the table.
A rule states:
If one side has a word, then the other has an even number
(In short: if word, then even)
The rule doesn't say anything about odd numbers.
But if the 7 card has a word on the other side, then the rule would be refuted.
So it is necessary to turn over the 7.
To understand this, people need to know the modus tollens.
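The selection task can be stated as code. In this sketch the visible card faces are hypothetical (the slides do not list them); the point is that a card can refute “if word, then even” only if it pairs a word with an odd number:

```python
def must_turn(visible):
    # A card must be turned over if its hidden side could refute the rule.
    if isinstance(visible, str):   # a word: the hidden number might be odd
        return True
    return visible % 2 == 1        # an odd number: the hidden side might be a word

cards = ["dog", 2, 8, 7]           # hypothetical visible sides
print([c for c in cards if must_turn(c)])  # ['dog', 7]
```

The even cards never need to be turned: the rule says nothing about what may appear behind an even number, just as it says nothing about odd numbers appearing behind non-words.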
The problem of consistency.
Most logicians insist that systems should be consistent.
This requirement is too strong.
Human thinking is often inconsistent.
A father and his son are involved in a car accident, as a result of which the son (but not the father) is rushed to hospital for emergency surgery.
The surgeon looks at him and says "I can't operate on him, he's my son".
Suppose that we define a world.
A state of this world will include:
This is known as the frame problem.
The frame problem arises especially when a goal requires a sequence of events or actions.
The following problems can occur:
Example: The concept of a “Western”.
A kind of fiction
Normally takes place in the US
Set around or after the US Civil War
Characters: sheriff, cavalry officer, farmer, cattle driver, bounty hunter.
Locations: small town, fort, saloon, stagecoach
Objects: guns, horses, roulettes, etc.
Typical situations: gunfights, burning a farm or a fort, stampede, driving cattle, etc.
Film Examples: Unforgiven, Rio Bravo, Fort Apache, My Darling Clementine.
Cognition does not consist of step-by-step logical deductions.
It is the application of a whole frame to a particular situation.
Information is often structured into “parts” and “wholes”.
The act of looking at a cube, for instance, involves a structure like this:
When we move around a cube, one or more faces may go out of view, the whole shape of the cube may change, etc.
Thus we have a series of view-frames.
Minsky hypothesized that frames are stored in long-term memory with default terminal values.
For instance, if I say “Peter is on the chair” you probably do not think of an abstract chair. You perhaps imagine a particular chair with a shape, color, etc.
These characteristics are default assignments to the terminals of frame systems.
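Default assignments are easy to sketch as a data structure. The class and slot names below are illustrative, not from any particular implementation of Minsky's frames:

```python
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent   # more general frame to inherit defaults from
        self.slots = slots     # terminal assignments

    def get(self, slot):
        # Specific values override defaults inherited from the parent frame.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

# The generic chair frame carries default terminal values...
chair = Frame("chair", legs=4, material="wood", has_back=True)
# ...which a particular chair inherits unless it overrides them.
peters_chair = Frame("Peter's chair", parent=chair, material="plastic")

print(peters_chair.get("material"))  # plastic (overrides the default)
print(peters_chair.get("legs"))      # 4 (default assignment)
```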
1. We can replace our original frame choice with another frame.
2. We can find an excuse or an explanation.
“It is an experimental movie”.
“It is broken or poorly designed”.
“It is not finished”.
“It is not a real door but a toy”.
3. When trying to replace a frame, we can use advice from a similarity network.
This network represents similarities and differences between related concepts.
A box, unlike a table, has no room for knees; the box is similar to a chair because one can sit on it, etc.
If something is not a chair, perhaps it is a box!
In this way, similarity networks can help us to replace our original frame with a more appropriate frame.
Frames are thus open to change.
Frame modification or replacement resembles the scientific process:
The basic ingredients of intelligence are typically structured into chunks of some sort.
These structures are open to revision.
The use of changeable structures accounts for the power and efficiency of human thinking.
Films can encourage us:
A schema is roughly the same as a frame:
“A schema is a knowledge structure characteristic of a concept or category. “ (David Bordwell)
Rules can represent many sorts of knowledge about (for example):
Concepts and their relations.
“If x has four legs, wags its tail, and barks, then x is a dog.”
IF x is a dog, THEN x is a mammal.
Causes and effects in the world
If x is kicked, then x will move.
Goals or tasks
IF you want to obtain a better job, THEN you should get a degree from a good university.
IF you do not attend the exam, THEN you will fail the subject.
….to find a sequence of rules that gives a path from the starting state to the goal state.
These rules are a plan or strategy for action.
Work in classical AI often described intelligence in terms of planning.
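The idea of planning as rule search can be sketched as a breadth-first search for a rule sequence leading from the starting state to the goal state. The states and rules below are toy illustrations loosely based on the degree/job example:

```python
from collections import deque

# Each rule maps a state to an (action, successor state) pair.
rules = {
    "no_degree": [("get a degree", "degree")],
    "degree": [("apply for jobs", "better_job")],
}

def plan(start, goal):
    # Breadth-first search for a sequence of rules (a plan).
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for action, nxt in rules.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None  # no sequence of rules reaches the goal

print(plan("no_degree", "better_job"))  # ['get a degree', 'apply for jobs']
```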
A complete interaction with the car repair system might be:
System: Is it true that there's petrol in the fuel tank?
System: Is it true that the engine turns over?
System: Is it true that the lights come on?
System: I conclude that there is a problem with the battery.
The following is an English version of one of Mycin's rules:
PROBLEMS WITH EXPERT SYSTEMS
Cognitive science and AI researchers often believe that:
(This list may involve rules, frames, and/or scripts)
Do these two beliefs make sense?
Some critics complain that classical AI overemphasizes rules and planning.
What is planning?
“No amount of anticipation, planning, and programming can ever enumerate, a priori, all the variants of even a routine situation that may occur in daily life.”
George N. Reeke and Gerald Edelman
“In the real world any system of rules has to be incomplete.”
…but plans do not control every aspect of an action.
…But only because the action has somehow run into problems.
AI without knowledge representation
Think of an ant colony.
SUBSUMPTION ARCHITECTURE (developed by Rodney Brooks, MIT)
Example of: distributed cognition, cognition-in-practice, importance of the ecological niche, emergence and self-organization.
How to design a cognitive structure (“mind”) that would control the behavior of the robot.
Brooks decided to get rid of symbolic cognition.
No symbolic representation of the world inside the robot.
Nothing but sense and action.
“Seeing, walking, navigating, and aesthetically judging do not usually take explicit thought, or chains of thought… They just happen.” (Brooks)
In insects and other lower animals, sensation and actuation are closely linked, without the intermediary of some internal symbolic representation of the world (by rules, etc.)
Basic skills are based mainly on the unthinking coordination of perception and action.
For instance, a robot might have three layers:
1. One layer might ensure that robots avoid colliding with objects.
2. Another layer would make the robot move around without a fixed goal.
3. A third layer would make it move towards some object sensed in the distance.
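The three layers above might be sketched as follows. The sensor fields and the simple fixed priority (collision avoidance first, then target approach, then wandering) are my simplification for illustration, not Brooks's actual architecture:

```python
import random

def avoid(sensors):
    # Layer 1: avoid colliding with objects.
    if sensors.get("obstacle_distance", 10.0) < 1.0:
        return "turn_away"
    return None

def approach(sensors):
    # Layer 3: move towards some object sensed in the distance.
    if sensors.get("target_bearing") is not None:
        return "move_toward_target"
    return None

def wander(sensors):
    # Layer 2: move around without a fixed goal.
    return random.choice(["forward", "turn_left", "turn_right"])

def control(sensors):
    # No central symbolic model of the world: the first layer
    # that reacts to the current sensor readings wins.
    for layer in (avoid, approach, wander):
        action = layer(sensors)
        if action is not None:
            return action

print(control({"obstacle_distance": 0.5}))  # turn_away
print(control({"target_bearing": 45.0}))    # move_toward_target
```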
The behavior follows from the interaction between organisms and environment.
It is not controlled only by the internal rules of the organism.
The eukaryotic cell is made of various biochemical components (nucleic acids, proteins, etc.), and is organized into bounded structures (the cell nucleus, the organelles, the cell membrane, etc.)
These structures, thanks to the external flow of molecules and energy, produce the components which continue to maintain the organized structure of the cell.
It is the structure of the cell that gives rise to these components.
It is these components that reproduce the cell.
The biological cell therefore produces itself.
An autopoietic system is to be contrasted with an allopoietic system.
A car factory uses raw materials (components) to produce a car (an organized structure) which is something other than itself (a factory).
The car does not reproduce itself.
"An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components which:
(ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological domain of its realization as such a network."
Maturana and Varela
The concept of neural networks was already anticipated in the work of the cybernetics movement in the late 40s and 50s.
In the 60s, however, classical AI became the dominant force. Neural networks were pushed aside…
1. A set of processing units
2. A pattern of connectivity among these units
3. An input connection is a conduit through which a member of a network receives information (INPUT).
4. An output connection is a conduit through which a member of a network sends information (OUTPUT).
No computer belongs to a network unless it can receive information (INPUT) from other computers or send information (OUTPUT) to other computers.
The execution of particular tasks is often distributed over several brain regions. Functions are not always localized in a specific physical area of the brain.
Brain activity is not serial but vastly parallel.
1. Suppose we want a net to carry out some task (such as recognizing male and female faces in a picture).
2. The net might have two output units (one indicating “male”, the other “female”) and many input units, one devoted to the brightness of each pixel in the picture.
3. The weights of the net to be trained are initially set to random values.
4. The net is then “shown” some picture(s).
5. The actual output of the net is compared with the desired output.
6. Every weight in the net is modified slightly to bring the net's actual output values closer to the desired output values.
7. The process is repeated until the desired output values are produced at the appropriate times.
8. The ideal objective is to let the net “generalize” its behavior, so as to “recognize” even male and female faces it has never “seen” before.
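Steps 3-7 can be sketched with a single threshold unit trained by the perceptron rule on a toy task (learning logical AND). The face-recognition net described above would need many units and backpropagation, but the training loop has the same shape:

```python
import random

random.seed(0)
# Step 3: the weights (including a bias) start at random values.
weights = [random.uniform(-1, 1) for _ in range(3)]
# Training set: inputs and desired outputs for logical AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def output(x):
    s = weights[0] + weights[1] * x[0] + weights[2] * x[1]
    return 1 if s > 0 else 0

for _ in range(100):                     # step 7: repeat the process
    for x, desired in examples:          # step 4: "show" the net an example
        error = desired - output(x)      # step 5: compare actual vs desired
        # Step 6: modify every weight slightly towards the desired output.
        weights[0] += 0.1 * error
        weights[1] += 0.1 * error * x[0]
        weights[2] += 0.1 * error * x[1]

print([output(x) for x, _ in examples])  # [0, 0, 0, 1]
```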
To “recognize” something is to send an appropriate output when confronted with that something.
Its acquired ability is an emergent property or characteristic of the network.
Some tasks that neural networks can do:
How does the brain represent information?
Note: A psychologist or computer scientist who works with neural networks does not necessarily have to support Eliminative Materialism! Many of them do not!
…folk psychology will be completely displaced by a true theory of the brain.