
Chapter 3 Steps towards Artificial Intelligence - Marvin Minsky



  1. Chapter 3 Steps towards Artificial Intelligence - Marvin Minsky Presenter: 김수경 (Interdisciplinary Program in Cognitive Science, 99132-503); Advisor: Prof. 장병탁

  2. 1. Introduction • Five main areas: Search: a computer does only what it is told, but in situations where we ourselves do not know an exact method of solution, we may program a machine to search through some large space of solution attempts. Unfortunately, a straightforward implementation of such a search is far too inefficient. Pattern Recognition Techniques: efficiency can be greatly improved by restricting the trial solutions to an appropriate subset. Learning: efficiency is further multiplied if the search is directed on the basis of earlier experiences. Planning: through a substantive analysis of the situation at hand, the machine replaces the originally given search with a more apt exploration, yielding a more fundamental improvement. Induction: rather more global concepts of how one might obtain intelligent machine behavior.

  3. 2. The Problem of Search • The efficiency question: within what region should solution methods be sought? • Practical limits of complete analysis: for complex problems (such as chess), we must develop techniques that make search efficient even with only incomplete analysis.

  4. 2.1 Relative Improvement, Hill-Climbing, and Heuristic Connections • If some comparator is used to compare and analyze the results of several trials, we obtain information about partial success (detection of relative improvement). To use this information to steer the search pattern in a more promising direction, the search space needs additional structure that somehow ties heuristically related points together (a heuristic connection). Given the ability to detect relative improvement, together with structural knowledge of the search space, hill-climbing becomes feasible.

  5. 2.2 Hill-Climbing • Given inputs x1,…,xn -> output E(x1,…,xn), suppose we wish to adjust the input values so as to maximize E, but no mathematical description of E is available. • The obvious approach is to explore locally about a point, finding the direction of steepest ascent. • Significance: the sampling effort grows, in a sense, only linearly with the number of parameters. • "adaptive" or "self-optimizing" servomechanism
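The local steepest-ascent idea above can be sketched in a few lines of Python. The step size, stopping rule, and the sample function E are illustrative assumptions, not part of Minsky's text:

```python
def hill_climb(f, x, step=0.1, max_iters=1000):
    """Explore locally about the point x, repeatedly moving to the best
    neighboring point until no neighbor improves f (a local peak)."""
    n = len(x)
    for _ in range(max_iters):
        best, best_val = None, f(x)
        # 2n samples per step: one up and one down per parameter, so the
        # sampling effort grows only linearly with the number of parameters.
        for i in range(n):
            for delta in (step, -step):
                y = list(x)
                y[i] += delta
                if f(y) > best_val:
                    best, best_val = y, f(y)
        if best is None:        # no direction improves: local peak reached
            return x
        x = best
    return x

# E(x1, x2) = -(x1 - 1)^2 - (x2 + 2)^2 peaks at (1, -2); the climber
# finds it using only point samples of E, with no formula for E given.
peak = hill_climb(lambda p: -(p[0] - 1) ** 2 - (p[1] + 2) ** 2, [0.0, 0.0])
```

Note that the same loop illustrates the troubles of the next slide: on a function with several peaks, the climber halts at whichever local peak is nearest its starting point.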

  6. 2.3 Troubles with Hill-Climbing • Getting stuck at a local peak • Worse still, falling into the mesa phenomenon

  7. 3. The Problem of Pattern Recognition <Pattern Recognition Methods>: efficiently sorting problem situations into categories, in view of the methods the machine has at its disposal • The simplest methods: matching the objects against standards or prototypes • Property-List Method: subjecting each object to a sequence of tests, each detecting some property of heuristic importance • Two resulting problems: 1) inventing new useful properties 2) combining many properties to form a recognition system

  8. 3.1 Teleological Requirements of Classification • When the goal is problem-solving, we cannot cling to the solution methods alone; in practice the ideal teleological definitions are often abandoned, at some risk, in favor of heuristically effective practical approximations.

  9. 3.2 Patterns and Description • Pattern: a set of objects which can in some useful way be treated alike • "What patterns would be useful for a machine working on such problems?" • Ways of assigning a name (a symbolic expression) to each defined class: • conventional name - arbitrary symbols are assigned to classes • description or computed name - constructed for classes by processes which depend on the class definitions; useful when it carries information about structure relevant to the problem area • model - a sort of active description

  10. 3.3 Prototype-Derived Patterns ex) reading printed characters: even distorted characters are classified and recognized by comparison with a fixed set of prototypes. • Normalization: correcting size or position - by constructing a similar or transformed figure, locating the center of gravity, etc. • Template-Matching Process • Since a uniform normalization method is hard to find, methods that pick out the key features of the shape are often used instead. • Drawback: too limited in conception to apply to difficult problems
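The template-matching process might be illustrated as follows. The 3x3 bitmaps and the two-letter prototype set are invented for the example; a real character reader would use far richer representations:

```python
# Hypothetical 3x3 binary "bitmaps" serving as the fixed set of prototypes.
PROTOTYPES = {
    "T": ((1, 1, 1), (0, 1, 0), (0, 1, 0)),
    "L": ((1, 0, 0), (1, 0, 0), (1, 1, 1)),
}

def match_score(figure, proto):
    """Count the cells where the (normalized) figure agrees with the prototype."""
    return sum(f == p
               for frow, prow in zip(figure, proto)
               for f, p in zip(frow, prow))

def classify(figure):
    """Template matching: assign the figure to the best-fitting prototype."""
    return max(PROTOTYPES, key=lambda name: match_score(figure, PROTOTYPES[name]))

# A slightly distorted "T" (one cell flipped) is still recognized,
# since it agrees with the T prototype in more cells than with L.
distorted_t = ((1, 1, 1), (0, 1, 0), (0, 1, 1))
```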

  11. 3.4 Property Lists and "Characters" • A step beyond the template method • Property: a two-valued function (0 or 1) which divides figures into two classes • With n properties there are 2^n elementary subclasses, and 2^(2^n) combined properties formed by connecting them with 'and' or 'or'. • Character: if the given properties are placed in a fixed order, then we can represent any of these elementary regions by a vector, or string of digits. • A character can stand in for a name in a better form than a conventional name, and further serves as the most elementary form of description: a list.
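A minimal sketch of how a fixed-order property list yields a character string; the three properties and the point-set representation of a figure are hypothetical:

```python
def character(figure, properties):
    """With the properties taken in a fixed order, the 'character' of a
    figure is the string of their 0/1 values."""
    return "".join(str(int(p(figure))) for p in properties)

# Illustrative two-valued properties of a figure given as a set of (x, y) points.
props = [
    lambda fig: len(fig) > 3,                  # "has more than 3 points"
    lambda fig: any(y > 0 for _, y in fig),    # "extends above the x-axis"
    lambda fig: all(x >= 0 for x, _ in fig),   # "lies right of the y-axis"
]

fig = {(0, 1), (1, 2), (2, 0)}
# n = 3 properties partition figures into up to 2**3 = 8 elementary regions,
# each named by one of the possible character strings.
```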

  12. 3.5 Invariant Properties • The first requirement of a good property: • that it be invariant under the commonly encountered equivalence transformations • Pitts & McCulloch (1947) present a general technique for forming invariant properties from noninvariant ones (p. 57) • We have to be sure that when different representatives are chosen from a class, the collection [F] will always be the same in each case

  13. 3.6 Generating Properties • Selfridge (1955), p. 58: 1) Supply the machine with a few basic transformations A1,…,An. 2) Each transformation is in turn modified by applying the others in random combinations. 3) If the sequence that produced a result proved insignificant, discard it. 4) Repeating this process, the machine finds significant sequences according to a given set of distribution functions. 5) From these sequences, new meaningful sequences are generated. • How, then, are sequences to be produced that are similar but not identical? • We shall merely build sequences from the transition frequencies of the significant sequences. • This is a crucial point of learning.

  14. 3.7 Combining Properties • Since it is hard to find a small set of properties that strikes at the heart of the problem area, instead: 1) Find a large set of properties each of which provides a little useful information. 2) Find a way to combine them. • Choose for each class a typical character • Then use some matching procedure • 3.7.1 "Bayes Nets" • 3.7.2 Applicability of random nets to "Bayes Nets"
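One standard way to combine many weakly informative properties, in the spirit of the "Bayes net" scheme of 3.7.1, is to sum per-class log-likelihoods. The conditional probabilities below are made-up numbers for illustration:

```python
import math

def bayes_score(char, p_given_class):
    """Sum the log-likelihood contributed by each 0/1 property; each
    property adds a little evidence for or against the class."""
    return sum(math.log(p if bit else 1.0 - p)
               for bit, p in zip(char, p_given_class))

# Invented conditional probabilities P(property_i = 1 | class) for two classes.
p_given = {"A": [0.9, 0.8, 0.2],
           "B": [0.1, 0.3, 0.7]}

observed = [1, 1, 0]                   # character of an unknown figure
best = max(p_given, key=lambda c: bayes_score(observed, p_given[c]))
```

No single property here is decisive, but the combination points clearly to class A, which is exactly the rationale for combining a large set of weak properties.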

  15. 3.7.3 Articulation and Attention - Limitations of the Property-List Method • Because its size is fixed, the property-list scheme limits the detail of the distinctions it can make. • It handles compound scenes poorly, and its range of reference for extensions is small and awkward. • To give it the flexibility the problem requires - articulate! 1) A list of the primitive objects in the scene: L 2) A statement about the relations among them: R • Formalized description: (R, L) • Significance: by repeatedly applying a fixed set of pattern-recognition techniques, arbitrarily complex descriptions become possible.

  16. 4. Learning Systems • To implement the "basic learning heuristic," generalize on past experience! • Use success-reinforced decision models • Reinforcement • Build a more autonomous "secondary reinforcement"

  17. 4.1 Reinforcement • A reinforcement process: applying a "reinforcement operator" Z strengthens a particular aspect of the system's behavior on the next occasion • Analogous to reward or extinction in animal behavior • Significance: rather than initiating behavior, the system can select and apply what is needed from prior experience, in the direction its trainer intends (p. 67, Fig. 8) • Problem of extinction (unlearning): when a different choice of properties turns out to be better, we face the prospect of translating the old system's records to fit the new one, or discarding them.

  18. 4.2 Secondary Reinforcement and Expectation Models • A way to overcome the machine's dependence on its trainer: • have the machine learn to generalize on what the trainer does (p. 69, Fig. 9) • The new unit U is a device that learns which external stimuli are strongly correlated with the various reinforcement signals, and responds to such stimuli by reproducing the corresponding reinforcement signals. • By continuing to chain secondary reinforcements, ever more powerful schemes can be built.

  19. 4.3 Prediction and Expectation • The evaluation unit U: makes it possible to assess imaginary situations • Reduces the burden of search, and confers the ability to plan • P for predicting a description of the likely result • In the reinforcement of mechanisms for confirmed novel expectations, we may find the key to simulation of intellectual motivation. • 4.3.1 Samuel's program for checkers • The simplest scheme: use a weighted sum of some selected set of "property" functions of the position - mobility, advancement, center control, etc. • Backing up
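Samuel's "simplest scheme" can be sketched directly. The property values, weights, and the dictionary encoding of a position are illustrative assumptions; a real checkers program computes such properties from the board:

```python
def evaluate(position, weighted_props):
    """Samuel-style static evaluation: a weighted sum of selected
    'property' functions of the board position."""
    return sum(w * f(position) for f, w in weighted_props)

# Hypothetical property functions and weights (a learner would tune the weights).
weighted_props = [
    (lambda pos: pos["mobility"],       1.0),
    (lambda pos: pos["advancement"],    0.5),
    (lambda pos: pos["center_control"], 0.8),
]

pos = {"mobility": 4, "advancement": 2, "center_control": 1}
score = evaluate(pos, weighted_props)   # 4*1.0 + 2*0.5 + 1*0.8 = 5.8
```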

  20. 4.4 The Basic Credit-Assignment Problem for Complex Reinforcement Learning Systems • How can we assign credit for success among the multitude of decisions? • Newell's note: • For learning to be possible, each game must yield more information. This can be done by breaking the problem into components. • The unit of success is the goal. • If the goal is achieved, its subgoals are reinforced; if not, they are inhibited.

  21. 4.4.1 Friedberg's Program-Writing Program • An important example of comparative failure in this credit-assigning matter • "compute the AND of two bits in storage and put the result in an assigned location" • In changing just one instruction at a time, the machine had not taken large enough steps in its search through program space. • Minsky's conviction: • No scheme for learning, or for pattern recognition, can have very general utility unless there are provisions for recursive, or at least hierarchical, uses of previous results.

  22. 5. Problem-Solving and Planning • Problem-Solving 1) Selecting subproblems • estimates of the relative difficulties • estimates of the centrality of the different candidates for attention 2) Choosing methods appropriate to the selected problems • Planning - But for really difficult problems, even these step-by-step heuristics for reducing search will fail, and the machine must have resources for analyzing the problem structure in the large.

  23. 5.1 The "Logic Theory" Program of Newell, Shaw, and Simon • LT - a first attempt to prove theorems in logic by frankly heuristic methods • discovering proofs in the Russell & Whitehead system for the propositional calculus • 5 axioms, 3 rules of inference • the heuristic technique of working backwards (p. 75) 1) a similarity test to reduce the work in step 4 2) a simplicity test to select apparently easier problems from the problem list 3) a strong nonprovability test to remove from the problem list expressions which are probably false and hence not provable

  24. 5.2 Heuristics for Subproblem Selection • In the basic program for LT: when a new problem arises, it is sent to the end of the problem list, and problems are attacked in order of generation. • In more complex systems: an appropriate amount of effort is allocated to each problem, weighing its (1) centrality and (2) difficulty. • 5.2.1 Global Methods • at each step, looking over the entire structure: Shannon's (1955) machine - an electrical analogy • 5.2.2 Local and "Hereditary" Methods • the effort assigned to a subproblem is determined only by its immediate ancestry: the complex exploration proposed by Newell, Shaw, and Simon (1958b), with a non-numerical stop-condition; here the subproblems inherit sets of goals to be achieved

  25. 5.3 "Character-Method" Machines • Which method should be tried first? • depends upon our ability to classify or characterize problems 1) First, compute the character of our problem (by using some pattern-recognition technique) 2) Then, consult a "character-method" table or other device that tells which methods are most effective on problems of that character • If the characters take on too wide a variety of values: reduce the detail of the information, e.g., by using only a few important properties

  26. 5.4 Planning • A successful division will reduce the search not by a mere fraction, but by a fractional exponent. • In a graph with 10 branches at each node, a 20-step search requires 10^20 trials. • Inserting 4 lemmas (or sequential subgoals) reduces this to 5×10^4. • The most straightforward concept of planning: using a simplified model • Another aid for planning: semantic - interpretation of the current problem within another system with which we are more familiar.
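The arithmetic of the example can be checked directly:

```python
# Blind search: 20 steps deep, 10 branches per node.
blind = 10 ** 20

# Four lemmas split the task into five sequential 4-step subsearches,
# each costing 10**4 trials.
planned = 5 * 10 ** 4

# The exponent drops from 20 to 4 -- a fractional exponent, not a mere
# fraction: the planned search is 2 * 10**15 times smaller.
speedup = blind // planned
```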

  27. 5.4.1 The "Character-Algebra" Model • Character-method matrix: an array of entries which predict with some reliability what will happen when methods are applied to problems • 5.4.2 Characters and Differences • GPS (Newell, Shaw, and Simon): using a notion of difference between two problems where we would speak of a single problem -> the goal is to reduce the differences! • The underlying structure: a character-method machine • "Means-end" analysis: the characterization depends on 1) the current problem expression and 2) the desired end result • General planning heuristic: p. 83

  28. 6. Induction and Models 6.1 Intelligence • We should not let our inability to discern a locus of intelligence lead us to conclude that programmed computers therefore cannot think.

  29. 6.2 Inductive Inference • Suppose that we want a machine which will essay to produce a description of a world - to discover its regularities or laws of nature. • Our task is to equip our machine with inductive ability - with methods which it can use to construct general statements about events beyond its recorded experience. <Grammatical induction schemes of Solomonoff> • explained by analogy with language • the induction problem recast as the discovery of a grammar (grammar of the language = the primitive expressions + the rules)

  30. 6.3 Models of Oneself • [input: question] -> [output: correct answer] • The output of this submachine, as well as the input, must be coded descriptions of corresponding external events. Seen through this pair of encoding and decoding channels, the internal submachine acts like the environment, and so it has the character of a "model." The inductive inference problem may then be regarded as the problem of constructing such a model. • If a machine, like a person, thus has a dual character, there is a prospect that an intelligent machine can rise above the level of a mere "machine."

  31. THE END
