University College Dublin DEPARTMENT OF COMPUTER SCIENCE Multi-Agent Systems (MAS) & Distributed Artificial Intelligence (DAI) CS 4.19 G.M.P. O'Hare
The Turing Test Alan Turing, in his classic paper ‘Computing Machinery and Intelligence’, circumvented the problem of defining artificial intelligence. His test took the form of a game. The game he describes has three participants: an interrogator, a human and a machine. The interrogator is physically removed from the other two participants. He can communicate with each of them by way of a teletype; he does not, however, know which participant is the machine and which is the human. His task is to establish which one is the machine and which is the human. This became renowned as the ‘Turing Test’. A computer could be thought to display intelligence if the interrogator could not distinguish between man and computer.
The Turing Test II Turing’s work did not, however, win universal acceptance. More recently, opponents such as Millar, while recognising the merits of his work, have highlighted the fact that it does not yield any insight into the various skills which constitute intelligence. He believed this to be of great significance if any realistic attempt is to be made at constructing a truly intelligent machine. In a similar vein, to paraphrase Leonardo da Vinci (1452-1519): “when man understands the natural flight of the bird, man will be able to build a flying machine.”
A Working Definition So with artificial intelligence, the definition we shall employ is that volunteered by Marvin Minsky  “Artificial intelligence is the science of making machines do things that would require intelligence if done by man.”
A Simple Example F 7 E 4 "If there is a vowel on one side of a card then there will be an even number on the other" How many cards must you turn over in order to test the validity of this statement?
Another Simple Example 1 8 6 4 5 9 2 7 3 Players alternately choose a card until each has selected three cards in total. The object of the game is to obtain a total of 15 while ensuring your opponent does not acquire a total of 15. What strategy would you adopt?
The History of AI 1 The term Artificial Intelligence is normally attributed to John McCarthy. In 1956 he organised a conference which was to enable researchers in the field to share expertise. As a consequence of his actions the discipline of AI was founded. Some attendees, namely Allen Newell, Herbert Simon and Marvin Minsky himself, are now without question among the leading researchers in the field.
The History of AI 2 At the conference Newell & Simon detailed work on the theorem prover Logic Theorist which had been performed at Carnegie. This is commonly regarded as the first AI program as such. The Logic Theorist was written in IPL (Information Processing Language) the first language which permitted computers to process concepts as opposed to numerical quantities.
The History of AI 3 Minsky & McCarthy founded the MIT AI Laboratory. McCarthy is renowned as the inventor of LISP while Minsky proposed the Frame concept for Knowledge Representation. In this early stage efforts tended to concentrate on: Game Playing: equipping a computer to play a particular game. Theorem Proving: equipping a computer to show that some statement follows logically from a set of known truths called axioms.
The History of AI 4 Early efforts employed a technique known as State Space Search, involving essentially four components ... (a) an initial state (b) a final state (c) an ability to detect the final state (d) a set of legal operations that can be applied to each state. Such an approach can often be understood better by conceptually regarding states as nodes and operations as arcs.
The History of AI 5 • By way of example in a chess game: • (a) initial state: initial state of chess board. • (b) final state: checkmate. • (c) ability to detect final state: ability to detect checkmate. • (d) set of legal operations: legal moves of chess.
Generate & Test 1 • The simplest form of state space search is that of Generate & Test. • Such an approach involves typically three stages, those of ... • (a) Generating a possible solution in the form of a new state. • (b) Ascertaining whether the new state is indeed the final state. • (c) If new state is the final state terminate, otherwise • repeat steps a, b and c.
Generate & Test 2 Two forms of generate and test exist: Depth-first Search & Breadth-first Search. Both fall foul of the ‘combinatorial explosion’ caused by the exponential growth in the number of nodes, irrespective of the order of generation. Consequently exhaustive search is only feasible when the search space is very small. For larger spaces the search needs to be guided. Guided searches are normally referred to as Heuristic Searches. Searches of this nature utilise domain-specific knowledge called heuristics.
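A heuristic search can be sketched as a best-first variant of generate and test: instead of taking states in order of generation, the most promising state (by a heuristic score `h`) is tested first. The names and the toy heuristic below are illustrative assumptions.

```python
import heapq

def best_first_search(initial, is_final, successors, h):
    """Explore states in order of the heuristic estimate h (lower is better)."""
    frontier = [(h(initial), initial)]       # priority queue ordered by h
    seen = {initial}
    while frontier:
        _, state = heapq.heappop(frontier)   # most promising state first
        if is_final(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

# Toy example: reach 10 from 0; the heuristic is the distance remaining.
print(best_first_search(0, lambda s: s == 10,
                        lambda s: [s + 1, s + 3],
                        lambda s: abs(10 - s)))   # → 10
```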
Exercise 1 Attempt to draw a state space for the famous missionaries and cannibals problem
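As a starting point for the exercise, one possible state encoding is the triple (missionaries on the left bank, cannibals on the left bank, boat side). This sketch only checks whether a state is safe; the encoding is an illustrative assumption, and drawing the full state space is left as the exercise intends.

```python
def is_safe(state):
    """A state is unsafe if cannibals outnumber missionaries on either bank."""
    m, c, _boat = state
    for miss, cann in ((m, c), (3 - m, 3 - c)):   # left bank, right bank
        if miss > 0 and cann > miss:
            return False
    return True

print(is_safe((3, 3, 'left')))   # initial state → True
print(is_safe((1, 2, 'left')))   # missionaries outnumbered → False
```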
The Development of Expert Systems 2 Stanford Professor Terry Winograd developed the SHRDLU system, which was able to understand a subset of English and manipulate wooden blocks. Soon after came numerous other systems targeting a diversity of domains. Researchers became aware that the representation of knowledge was central to achieving a truly intelligent system. Thereafter numerous formalisms were proposed.
The History of AI Research In 1973 a report by Sir James Lighthill concluded that AI work within the UK was unproductive. There ensued a withdrawal of government funding. Consequently the US and Japan came to dominate AI research. More recently attempts to rectify this have been made through Alvey funding. In later years expert systems have emerged which offer a high level of performance in complex domains. Examples include XCON & MECHO.
Can Computers Think? • Throughout the evolution of artificial intelligence there have been many opponents to the whole concept of machines originating anything. • Many objections have been raised. Turing summarised most of these in his classic paper ‘Computing Machinery and Intelligence’. • Theological objection: only the possession of a soul permits thought, hence neither machines nor animals can think. • Mathematical objection: based on Gödel’s theorem, claims there are limitations to the power of artificial systems. • Lady Lovelace’s objection: Raphael amongst others claims that a computer can only do what it is told and thus it cannot have pretensions to originate anything.
Contradicting the Objections • Numerous examples, however, may be cited to contradict this. • Samuel’s checkers program: a primitive learning capacity. • Lenat’s AM, which identified new maximally divisible numbers not considered by most mathematicians. • Prospector, which was claimed to be in error in certain circumstances, but was eventually proven to be right.
More Objections • Arithmetic Machine objection: the computer is little more than a fast arithmetic machine. Of course the computer can achieve more than merely arithmetic: operations such as SHIFT, READ, COMPARE, LOAD etc. • Informality of Behaviour objection: it is impossible to detail a set of rules which indicate how a person should act in all possible situations. • Sensory Perception objection: humans have senses not available to machines: sight, touch, smell, ESP. • Heads in the Sand objection: it is too horrendous even to contemplate that computers could think.
Can Computers Think? II Thought: the origination of new knowledge. Consider a computer given the following pieces of knowledge: Elephants are large and grey. Clyde is an elephant. Conceptually we can think of this knowledge as a graph. If in addition we armed the computer with the technique that properties may be inherited by following the directed arcs, it could conclude a new item of knowledge: Clyde is large and grey. Is this equivalent to thought? The computer only used techniques we equipped it with, but after all we only use skills we acquire from our environment. Can a computer exhibit emotions and also have morals?
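The Clyde example can be sketched directly: properties are inherited by following the directed arcs upwards. The dictionary encoding here is an illustrative assumption, not a formalism from the notes.

```python
isa = {'Clyde': 'elephant'}                   # directed is-a arcs
properties = {'elephant': ['large', 'grey']}  # properties attached to nodes

def inherited_properties(node):
    """Collect properties by following the directed arcs from a node upwards."""
    props = []
    while node is not None:
        props += properties.get(node, [])
        node = isa.get(node)      # follow the is-a arc, if any
    return props

# The computer concludes a new item of knowledge:
print(inherited_properties('Clyde'))   # → ['large', 'grey']
```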
The Structure of an IKBS I An Intelligent Knowledge Based System (IKBS) can generally be thought of as consisting of three subordinate parts. While the structure of an IKBS has become all too well established, the terminology has not. John Fox describes the three parts as: “the triptych of data, the knowledge base and the host program”. Nilsson, however, considers the IKBS to be comprised of: • a global database, • a set of operators on the database and • a control system for deciding when to apply such operators.
The Structure of an IKBS II Michie describes the same structure as: “... corpus of knowledge and a comparatively simple mechanism for applying the knowledge in an opportunistic way to solve the problem”. The terminology I shall adopt is similar to that employed by Davis & King in “An Overview of Production Systems”. The three components are: • a rulebase, • a database and • an inference engine.
The Structure of an IKBS III While these may not always be distinct, functionally they are certainly accounted for. To quote Fox: “they are essential in the same sense that a reference signal, a comparator and a feedback loop are essential to a control system.”
The Rulebase I The Rulebase consists of a set of production rules. The production rule is the mechanism which is generally employed to represent the domain expert’s knowledge. In their simplest form each rule consists of: a left hand side, often referred to as the antecedent, and a right hand side, often referred to as the consequent. The ruleset will have a predetermined ordering which will be utilised by the interpreter.
The Rulebase II • Let us look at a typical rule: • ANTECEDENT >----------X----------> CONSEQUENT • HOT AND SUNNY >---------0.8---------> GOOD DAY • The rule can be interpreted as a simple if ... then construct. • Thus: IF it is hot and sunny THEN we can conclude that it is • a good day, with a degree of certainty of 0.8. • Rules will vary in format depending on the actual system. • Rules may or may not have an associated certainty factor. • Some systems impose constraints on the form of the antecedent & the consequent. • Specific systems permit only a single clause on the left hand side whilst others only allow a single consequent.
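One way to encode the rule above as data is as a triple of antecedent clauses, a consequent and an optional certainty factor. This encoding is an illustrative assumption; as noted, the actual format varies from system to system.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    antecedent: list        # clauses that must all hold
    consequent: str         # conclusion the rule licenses
    certainty: float = 1.0  # degree of certainty, if the system uses one

# IF it is hot and sunny THEN good day, with certainty 0.8.
rule = Rule(['hot', 'sunny'], 'good day', certainty=0.8)

def triggered(rule, database):
    """A rule is triggered when every antecedent clause is in the database."""
    return all(clause in database for clause in rule.antecedent)

print(triggered(rule, {'hot', 'sunny'}))   # → True
print(triggered(rule, {'hot'}))            # → False
```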
The Rulebase III • Furthermore the same rule may be interpreted in completely • different manners by two different inference engines. • One inference engine could interpret our aforementioned rule • as follows ... • IF Hot and Sunny THEN add Good Day to database • while another could understand it as ... • IF Hot and Sunny THEN replace Hot and Sunny in database • by term Good Day.
The Database I The database at any given instant contains a set of symbols that represent or reflect the state of the world. The database is dynamic: a change in the contents of the database corresponds to a change in the world the expert system models. As with the rulebase, the interpretation of the database is inference-engine dependent. As Davis and King suggest, if the IKBS were being used to explore symbol-processing aspects of human cognition then the contents of the database would be understood to represent the contents, say, of short-term memory.
The Database II When the application is that of a knowledge-based expert, the database is assumed to contain facts and assertions about the domain in question. Such a database would have no restrictions on its size or complexity, unlike the former, which would have an upper limit of seven plus or minus two items. The database can be thought of as the sole storage medium for an IKBS. This concept is referred to as the ‘unity of data and control store’.
The Database III Every part of the database is accessible by every rule in the rulebase. Nilsson emphasises the fact that the database acts as a communications channel. Rules cannot communicate directly with each other, but rather only indirectly via the contents of the database. Strong analogies exist here with the monitor construct of Pascal Plus. Rules can be thought of as sharing the contents of the database. Only one rule may access the database at any particular instant, hence mutual exclusion is enforced.
The Inference Engine I Sometimes referred to as the Interpreter. It identifies the set of rules which may be applied or ‘triggered’ at a particular instant; it then subsequently selects one such rule and executes it. In its simplest form it can be regarded as a select-execute loop. Each time a rule is executed the contents of the database change. Consequently, the next time the select-execute loop is entered a complete re-evaluation of the rulebase is performed, every rule being inspected by the inference engine as a potential contender for execution.
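The select-execute loop in its simplest form can be sketched as below, assuming rules are (antecedent, consequent) pairs and the database is a set of facts; this is an illustrative sketch only, and real interpreters add a conflict resolution strategy.

```python
def run(rules, database):
    """Select-execute loop with full re-evaluation after every execution."""
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:        # inspect every rule
            if antecedent <= database and consequent not in database:
                database.add(consequent)            # execute: change the database
                changed = True
                break                               # re-enter the loop afresh
    return database

facts = run([({'hot', 'sunny'}, 'good day')], {'hot', 'sunny'})
print(sorted(facts))   # → ['good day', 'hot', 'sunny']
```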
Inference Engine II In the past researchers suffered from the misconception that the power of an IKBS lay in the inference engine. Minsky and Papert refer to this as the power strategy. More recently this has been refuted. Fox indicates that it is now widely accepted that quite unsophisticated algorithms are sufficient to provide the necessary control. Feigenbaum in agreement states “...power exhibited ... is primarily a consequence of the specialist knowledge employed by the agent and only very secondarily related to ... the power of the inference method.” He goes on to say: “Our agents must be knowledge rich, even if they are methods poor.”
The Operation of an IKBS I Assume the Rulebase contains the following rules: 1 ATTRACTIVE AND GOOD PERSONALITY --> ELIGIBLE 2 MALE AND BUTCH --> ATTRACTIVE 3 FEMALE AND PRETTY --> ATTRACTIVE 4 TALL AND STRONG AND NOT THIN --> BUTCH AND NOT FEMININE 5 SMALL AND FEMININE --> PRETTY 6 FEMALE AND RESERVED --> FEMININE 7 FUNNY OR WITTY --> GOOD PERSONALITY
The Operation of an IKBS II • and that the database initially contains the • following facts: • FUNNY • FEMALE • RESERVED • SMALL • What can we deduce?
The Operation of an IKBS III • Assume that the Inference Engine employs a rule-order conflict resolution strategy, with re-evaluation from rule 1. • Notice that when discovering what we can deduce from the limited knowledge we have, at various points several rules may be triggered. This set of rules is referred to as the ‘conflict set’. The inference engine chooses which of the set to actually fire. The mechanism used varies from inference engine to inference engine. Indeed an interpreter may have several techniques to choose from. The strategy employed is called the conflict resolution strategy. • This particular IKBS employs a rule-order conflict resolution strategy. Consequently the rule with the lowest rule number is fired.
The Operation of an IKBS IV In the first pass through the rulebase rule 6 will be activated, adding FEMININE to the database. Rule 6 is marked, indicating that it should not be re-evaluated. Because a change has occurred the entire rulebase needs to be re-evaluated. Where should such re-evaluation commence from? In this case rule 1; others may have selected rule 7. Rather than re-evaluate every rule (except 6), an optimisation is sometimes employed: namely, only re-evaluate the rules affected by the change in the database, rules 4 & 5. Would this cause problems? Rule 7 would never get fired. The optimisation can thus be employed with the proviso that if the affected rules cannot be triggered then any unmarked rule is considered.
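The worked example can be sketched as forward chaining with rule-order conflict resolution and marking. Two simplifying assumptions are made in this sketch: rule 1 is read as ATTRACTIVE AND GOOD PERSONALITY --> ELIGIBLE, and the NOT and OR clauses of rules 4 and 7 are simplified away; the encoding is illustrative, not the notes' formalism.

```python
RULES = [  # (number, antecedent clauses, consequent)
    (1, {'ATTRACTIVE', 'GOOD PERSONALITY'}, 'ELIGIBLE'),
    (2, {'MALE', 'BUTCH'}, 'ATTRACTIVE'),
    (3, {'FEMALE', 'PRETTY'}, 'ATTRACTIVE'),
    (4, {'TALL', 'STRONG'}, 'BUTCH'),        # NOT clauses omitted for brevity
    (5, {'SMALL', 'FEMININE'}, 'PRETTY'),
    (6, {'FEMALE', 'RESERVED'}, 'FEMININE'),
    (7, {'FUNNY'}, 'GOOD PERSONALITY'),      # OR WITTY omitted for brevity
]

def forward_chain(rules, facts):
    facts, fired = set(facts), set()
    while True:
        # Conflict set: triggered rules not yet fired (i.e. not marked).
        conflict = [r for r in rules
                    if r[0] not in fired and r[1] <= facts and r[2] not in facts]
        if not conflict:
            return facts
        num, _, consequent = conflict[0]     # rule order: lowest number fires
        fired.add(num)
        facts.add(consequent)

facts = forward_chain(RULES, {'FUNNY', 'FEMALE', 'RESERVED', 'SMALL'})
print('ELIGIBLE' in facts)   # → True (rules fire in the order 6, 5, 3, 7, 1)
```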
The Operation of an IKBS V • Other more sophisticated conflict resolution strategies exist. To name but a few: • Generality order: the most specific rule is applied. • Recency order: the most recently executed rule is applied. • Cost order: the least computationally expensive rule is applied.
Forward Reasoning Take the known facts and try to match them against the LHS of a rule. This technique is employed when trying to discover what we can deduce. Sometimes called forward chaining.
Backward Reasoning Take the goal state and decide what needs to be true for it to hold. We employ this technique when trying to decide if someone is eligible. This method is often called goal-directed inference or backward chaining. Attempt to design an IKBS which will be able to decide if a shape is a triangle and, if so, more specifically what type. The database may initially contain knowledge like: 3 SIDES 2 ANGLES EQUAL
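Goal-directed inference for the triangle exercise can be sketched as follows; the rules and fact names are illustrative assumptions, and a fuller solution is what the exercise asks for.

```python
# Each goal maps to the bodies of rules that conclude it.
RULES = {
    'TRIANGLE':  [{'3 SIDES'}],
    'ISOSCELES': [{'TRIANGLE', '2 ANGLES EQUAL'}],
    'EQUILATERAL': [{'TRIANGLE', '3 ANGLES EQUAL'}],
}

def prove(goal, facts):
    """A goal holds if it is a known fact, or if every subgoal of some
    rule concluding it can itself be proven (backward chaining)."""
    if goal in facts:
        return True
    return any(all(prove(sub, facts) for sub in body)
               for body in RULES.get(goal, []))

facts = {'3 SIDES', '2 ANGLES EQUAL'}
print(prove('ISOSCELES', facts))     # → True
print(prove('EQUILATERAL', facts))   # → False
```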
The Knowledge Life Cycle I In a similar vein to the software life cycle we can identify an analogous knowledge life cycle. The first stage involves discovering whether the particular problem in hand is amenable to solution via an IKBS approach. Should an IKBS approach be used? A large number of problems may not require a computerised solution. Furthermore numerous problems may be solvable more readily via a conventional approach as opposed to an Expert System approach. According to Waterman an expert system approach should only be considered if: “expert system development is possible, justified and appropriate”.
The Knowledge Life Cycle II • The development of an IKBS system is regarded as being possible if the problem exhibits the following attributes. • The problem requires merely cognitive skills: no requirement for physical skills or manual dexterity. • The problem does not require common sense; AI systems are inappropriate where the application relies heavily upon common-sense reasoning. • The problem must be well understood; if the problem is poorly understood then there is little likelihood of a solution being obtained.
The Knowledge Life Cycle III The application should not be too complex: the larger the application domain, the greater the degree of knowledge required (the growth appears exponential) and the less likely the system is to demonstrate an acceptable level of performance. Experts must exist so that the knowledge can be extracted from them and subsequently encoded in the expert system. There must be consensus among experts regarding the solution, and the expert must be able to articulate his techniques.
The Knowledge Life Cycle IV • The fact, however, that an IKBS solution is possible is not in itself sufficient. Possible justifications are as follows: • Economic: saves money and manpower, reduces maintenance costs. • Skill shortage: a skill needs to be preserved due to scarcity or loss of staff. • Skill distribution: skills can be employed at numerous sites. • In general an IKBS approach seems appropriate when the problem is non-trivial, yet of manageable size, and requires the availability of symbol manipulation and heuristic problem solving techniques.
The Knowledge Life Cycle V • The knowledge life cycle consists of several identifiable discrete stages. • According to Buchanan it consists of: • Identification: what are the important aspects of the problem? • Conceptualisation: what concepts are required to produce a solution? • Formalisation: what knowledge is required? • Implementation: how is this knowledge to be represented? • Testing: how can the quality of the knowledge be tested?
The Knowledge Life Cycle VI • I tend to regard the knowledge life cycle as consisting of: • 1 ESTABLISHING PROBLEM SUITABILITY • 2 KNOWLEDGE ACQUISITION • 3 KNOWLEDGE REPRESENTATION • 4 TESTING • 5 UTILISATION • 6 EVALUATION • 7 DEATH
University College Dublin DEPARTMENT OF COMPUTER SCIENCE COMP 4.19 Multi-Agent Systems (MAS) Lectures 5 & 6
Distributed Artificial Intelligence Distributed Artificial Intelligence (DAI) endeavours to achieve intelligent systems not by constructing one large Knowledge-Based System, but rather by partitioning the knowledge domain and developing ‘Intelligent Agents’, each exhibiting expertise in a particular domain fragment. This group of agents will thereafter collectively work towards the solution of global problems.
The Co-operating Experts Metaphor • This solution of problems by a group of agents, providing mutual assistance as and when necessary, is often referred to as the ... • "Community of Co-operating Experts Metaphor" • (Smith and Davis; Lenat; Hewitt) • Proponents of this philosophy believe that reciprocal co-operation is the cornerstone of society.
Social Agents [Figure: a social agent, comprising a domain-specific knowledge base holding weighted rules such as R AND P -> Q, M -> P and L OR S -> M, together with an acquaintance model; agents exchange queries and responses such as Q?, M?, P?, S?, L? and R?]
Why Distributed Artificial Intelligence? • Mirrors Human Cognition • Potential Performance Enhancements • Elegantly Reflects Society • Incremental Development • Increased Robustness • Reflects Trends in Computer Science in General • Strong Analogies to Decompositional Techniques employed in Software Engineering