Chapter 4

Methods of Inference

知識推論法

4.1 Deduction and Induction (演繹與歸納)
  • Deduction(演繹): Logical reasoning in which conclusions must follow from their premises.
  • Induction(歸納): Inference from the specific case to the general.
  • Intuition(直觀): No proven theory.
  • Heuristics(啟發): Rules of thumb (觀測法) based upon experience.
  • Generate and test: Trial and error.

S.S. Tseng & G.J. Hwang

  • Abduction(反推): Reasoning back from a true conclusion to the premises that may have caused the conclusion.
  • Autoepistemic(自覺、本能): Self-knowledge.
  • Nonmonotonic(應變知識): Previous knowledge may be incorrect when new evidence is obtained.
  • Analogy(類推): Based on the similarities to another situation.

S.S. Tseng & G.J. Hwang

Syllogism (三段論)
  • A syllogism(三段論) is a simple, well-understood branch of logic that can be completely proven.
    • Premise(前提): Anyone who can program is intelligent.
    • Premise(前提): John can program.
    • Conclusion(結論): Therefore, John is intelligent.
  • In general, a syllogism is any valid deductive argument having two premises and a conclusion.

S.S. Tseng & G.J. Hwang

Categorical Syllogism(定言三段論)

Forms of categorical statements (定言命題的型態): A (All S is P), E (No S is P), I (Some S is P), O (Some S is not P).

S.S. Tseng & G.J. Hwang

Standard form of a syllogism (三段論的標準形態)

Major premise: All M is P

Minor premise: All S is M

Conclusion: All S is P

- P is the predicate(謂詞) of the conclusion, also called the major term.

- S is the subject(主詞) of the conclusion, also called the minor term.

- The premise containing the major term is called the major premise.

- The premise containing the minor term is called the minor premise.

- M is called the middle term.

S.S. Tseng & G.J. Hwang

Mood(模式)
  • Patterns of categorical statements
  • The AAA mood has four figures (4種AAA模式), determined by the position of the middle term

S.S. Tseng & G.J. Hwang

Ex: AAA-1

    • All M is P
      All S is M
      ∴ All S is P

  • We use a decision procedure(決策程序) to prove the validity of a syllogistic argument
  • The decision procedure for syllogisms can be done using Venn diagrams(維思圖)
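The Venn-diagram decision procedure itself is graphical, but its verdicts can be checked mechanically. Below is a minimal, illustrative Python sketch (not part of the slides) that brute-forces every assignment of a small universe to the classes S, M, and P; a syllogistic form is valid exactly when no assignment satisfies both premises while violating the conclusion.

```python
from itertools import combinations, product

U = {1, 2, 3}                              # a 3-element universe suffices for 3 classes

def subsets(universe):
    """Every subset of the universe, as a frozenset."""
    items = list(universe)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

# Categorical statement forms as predicates on two classes.
A = lambda x, y: x <= y                    # All X is Y
E = lambda x, y: not (x & y)               # No X is Y
I = lambda x, y: bool(x & y)               # Some X is Y
O = lambda x, y: bool(x - y)               # Some X is not Y

def valid(premise1, premise2, conclusion):
    """Figure-1 syllogism: premise1(M, P), premise2(S, M), conclusion(S, P)."""
    for S, M, P in product(subsets(U), repeat=3):
        if premise1(M, P) and premise2(S, M) and not conclusion(S, P):
            return False                   # found a countermodel
    return True

print(valid(A, A, A))   # AAA-1: All M is P, All S is M, ∴ All S is P -> True (valid)
print(valid(A, E, E))   # AEE-1: All M is P, No S is M, ∴ No S is P  -> False (invalid)
```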

S.S. Tseng & G.J. Hwang

Ex: Decision procedure for syllogism AEE-1

All M is P

No S is M

∴ No S is P

S.S. Tseng & G.J. Hwang

General rules when "some" quantifiers are present

1. If a class is empty, it is shaded.

2. Universal statements, A and E, are always drawn before particular ones.

3. If a class has at least one member, mark it with a *.

4. If a statement does not specify in which of two adjacent classes an object exists, place a * on the line between the classes.

5. If an area has been shaded, no * can be put in it.

S.S. Tseng & G.J. Hwang

Ex: Decision procedure for syllogism IAI-1

Some P is M

All M is S

∴ Some S is P

S.S. Tseng & G.J. Hwang

4.2 State and Problem Spaces(狀態與問題空間)
  • Tree(樹狀結構): nodes, edges
      • Directed or undirected
  • Digraph(雙向圖): a graph with directed edges
  • Lattice(晶格): a directed acyclic graph

S.S. Tseng & G.J. Hwang

A useful method of describing the behavior of an object is to define a graph called the state space. [state(狀態) and action(行動)]
    • Initial state
    • Operator
    • State space
    • Path
    • Goal test
    • Path cost

S.S. Tseng & G.J. Hwang

Finite State Machine(有限狀態機器)
  • Determining valid strings: WHILE, WRITE, and BEGIN
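The finite-state-machine diagram from the original slide is not reproduced in this transcript, so the following is only an illustrative sketch of a table-driven FSM that accepts the three valid strings named above; the state names and transition table are assumptions, not the slide's.

```python
# Table-driven finite state machine: each state is the prefix read so far.
KEYWORDS = {"WHILE", "WRITE", "BEGIN"}

# Build the transition table: state -> {character -> next state}
TRANSITIONS = {}
for word in KEYWORDS:
    for i, ch in enumerate(word):
        TRANSITIONS.setdefault(word[:i], {})[ch] = word[:i + 1]

def accepts(s):
    """Run the FSM from the start state ''; accept iff we end in a complete keyword."""
    state = ""
    for ch in s:
        nxt = TRANSITIONS.get(state, {}).get(ch)
        if nxt is None:
            return False            # no valid transition: reject
        state = nxt
    return state in KEYWORDS

print(accepts("WHILE"))   # True
print(accepts("WRITE"))   # True
print(accepts("BEGIN"))   # True
print(accepts("WHITE"))   # False (not a valid string)
```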

S.S. Tseng & G.J. Hwang

Finding a solution in a problem space
  • A state space(狀態空間) can be thought of as a problem space(問題空間).
  • Finding the solution to a problem in a problem space involves finding a valid path from start to success (answer).
  • The state space for the Monkey and Bananas Problem
  • Traveling salesman problem(旅行推銷員問題)
  • Graph algorithms, AND-OR trees, etc.

S.S. Tseng & G.J. Hwang

Ex: Monkey and Bananas Problem
  • Assumptions:
    • A bunch of bananas hangs from the ceiling of the room
    • The room contains only a couch and a ladder
    • The monkey cannot reach the bananas directly
  • Operators:
    • Jump off the couch
    • Move the ladder
    • Push the ladder to the position under the bananas
    • Climb the ladder
    • Grab the bananas
  • Initial state:
    • The monkey is on the couch
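The slide describes this problem as a state space; the sketch below (an assumed state encoding and operator set, not the slide's exact graph) finds a solution path with breadth-first search.

```python
from collections import deque

# State: (monkey location, what the monkey is standing on, ladder location, has bananas)
START = ("couch", "couch", "corner", False)

def successors(state):
    """Apply the operators listed on the slide to generate next states."""
    loc, on, ladder, has = state
    moves = []
    if on == "couch":
        moves.append(("jump off the couch", (loc, "floor", ladder, has)))
    if on == "floor":
        moves.append(("push ladder under bananas", (loc, "floor", "under bananas", has)))
        if ladder == "under bananas":
            moves.append(("climb the ladder", ("under bananas", "ladder", ladder, has)))
    if on == "ladder" and ladder == "under bananas" and not has:
        moves.append(("grab the bananas", (loc, on, ladder, True)))
    return moves

def solve():
    """Breadth-first search for any state in which the monkey has the bananas."""
    frontier = deque([(START, [])])
    seen = {START}
    while frontier:
        state, path = frontier.popleft()
        if state[3]:                      # goal test: monkey has the bananas
            return path
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

print(solve())
# ['jump off the couch', 'push ladder under bananas', 'climb the ladder', 'grab the bananas']
```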

S.S. Tseng & G.J. Hwang

Ill-structured problem(非結構化問題)
  • Ill-structured problems(非結構化問題) have uncertainties associated with them:
    • Goal not explicit
    • Problem space unbounded
    • Problem space not discrete
    • Intermediate states difficult to achieve
    • State operators unknown
    • Time constraint

S.S. Tseng & G.J. Hwang

Ex: Travel agent (旅遊代理人)

S.S. Tseng & G.J. Hwang

4.3 Rules of Inference(規則式推論)
  • Syllogism(三段論) addresses only a small portion of the possible logic statements.
  • Propositional logic

p → q
p
∴ q

This inference is called direct reasoning (直接推論), modus ponens (離斷率), law of detachment (分離律), and assuming the antecedent (假設前提).

S.S. Tseng & G.J. Hwang

Truth table for modus ponens (離斷率)

p   q   p→q   (p→q)∧p   ((p→q)∧p)→q
T   T    T        T          T
T   F    F        F          T
F   T    T        F          T
F   F    T        F          T
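The table can also be generated mechanically. A short Python sketch that enumerates all truth assignments and confirms that ((p → q) ∧ p) → q is a tautology, which is what makes modus ponens a valid rule:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

print("p     q     p->q  (p->q)^p  ((p->q)^p)->q")
tautology = True
for p, q in product([True, False], repeat=2):
    pq = implies(p, q)
    both = pq and p
    final = implies(both, q)
    tautology &= final
    print(f"{p!s:5} {q!s:5} {pq!s:5} {both!s:9} {final!s}")

print("Tautology:", tautology)   # True: modus ponens is valid
```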

S.S. Tseng & G.J. Hwang

Laws of Inference Schemata

1. Law of Detachment:
   p → q,  p   ∴ q

2. Law of the Contrapositive:
   p → q   ∴ ~q → ~p

3. Law of Modus Tollens:
   p → q,  ~q   ∴ ~p

4. Chain Rule (Law of the Syllogism):
   p → q,  q → r   ∴ p → r

5. Law of Disjunctive Inference:
   p ∨ q,  ~p   ∴ q        p ∨ q,  ~q   ∴ p

6. Law of Double Negation:
   ~(~p)   ∴ p

S.S. Tseng & G.J. Hwang

7. De Morgan's Law:
   ~(p ∧ q)   ∴ ~p ∨ ~q        ~(p ∨ q)   ∴ ~p ∧ ~q

8. Law of Simplification:
   ~(p ∨ q)   ∴ ~q             p ∧ q   ∴ p

9. Law of Conjunction:
   p,  q   ∴ p ∧ q

10. Law of Disjunctive Addition:
   p   ∴ p ∨ q

11. Law of Conjunctive Argument:
   ~(p ∧ q),  q   ∴ ~p         ~(p ∧ q),  p   ∴ ~q

Table 3.8 Some Rules of Inference for Propositional Logic

S.S. Tseng & G.J. Hwang

Resolution in Propositional Logic(命題邏輯分解)

F: rules or facts known to be TRUE

S: a conclusion to be proved

1. Convert all the propositions of F to clause form.

2. Negate S and convert the result to clause form. Add it to the set of clauses obtained in step 1.

3. Repeat until either a contradiction is found or no progress can be made:

(1) Select two clauses. Call these the parent clauses.

S.S. Tseng & G.J. Hwang

(2) Resolve them together. The resulting clause, called the resolvent, will be the disjunction of all of the literals of both of the parent clauses, with the following exception: if there are any pairs of literals L and ~L such that one of the parent clauses contains L and the other contains ~L, then eliminate both L and ~L from the resolvent.

(3) If the resolvent is the empty clause, then a contradiction has been found. If it is not, then add it to the set of clauses available to the procedure.
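The procedure just described can be sketched in a few lines of Python, with clauses represented as sets of literal strings and "~" marking negation. This is a minimal illustration, not the book's implementation; the clause set used below is the one converted on the following slide, plus the negated goal ~r.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All clauses obtained by resolving the parent clauses c1 and c2."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def refute(clauses):
    """Return True if the empty clause (a contradiction) can be derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a is not b:
                    for r in resolvents(a, b):
                        if not r:              # empty clause found
                            return True
                        new.add(r)
        if new <= clauses:                     # no progress can be made
            return False
        clauses |= new

# F converted to clause form (see the next slide), plus the negated goal ~r.
F = [frozenset(c) for c in (["p"], ["~p", "~q", "r"], ["~s", "q"], ["~t", "q"], ["t"])]
print(refute(F + [frozenset(["~r"])]))         # True: r follows from F
```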

S.S. Tseng & G.J. Hwang

Given Axioms              Converted to Clause Form

p                         1. p
(p ∧ q) → r               2. ~p ∨ ~q ∨ r
(s ∨ t) → q               3. ~s ∨ q
                          4. ~t ∨ q
t                         5. t

p = it rains (下雨)
q = ride a bike (騎車)
s = the route is familiar (路線熟悉)
t = the trip is far (路途遠)
r = wear a raincoat (穿雨衣)

S.S. Tseng & G.J. Hwang

Resolution in Propositional Logic (refutation of the negated goal ~r):

~p ∨ ~q ∨ r resolved with ~r gives ~p ∨ ~q

~p ∨ ~q resolved with p gives ~q

~q resolved with ~t ∨ q gives ~t

~t resolved with t gives the empty clause: a contradiction, so r is proved.

S.S. Tseng & G.J. Hwang

Resolution with quantifiers

Example(from Nilsson):

Whoever can read (R) is literate (L).

Dolphins (D) aren’t literate (~L).

Some dolphins (D) are intelligent (I).

To prove: Some who are intelligent (I) can't read (~R).

S.S. Tseng & G.J. Hwang

Translating:

(1) ∀x [ R(x) → L(x) ]

(2) ∀x [ D(x) → ~L(x) ]

(3) ∃x [ D(x) & I(x) ]

(4) To prove: ∃x [ I(x) & ~R(x) ]

S.S. Tseng & G.J. Hwang

(1) - (4):

∀x [ ~R(x) OR L(x) ] & ∀y [ ~D(y) OR ~L(y) ] & D(A) & I(A) & ∀z [ ~I(z) OR R(z) ]

(5) - (9):

C1 = ~R(x) OR L(x)

C2 = ~D(y) OR ~L(y)

C3 = D(A)

C4 = I(A)

C5 = ~I(z) OR R(z)

S.S. Tseng & G.J. Hwang

Second-order logic can have quantifiers that range over function and predicate symbols.
  • If P is any predicate of one argument
    • then
    • x = y ≡ (for every P) [ P(x) ↔ P(y) ]

S.S. Tseng & G.J. Hwang

4.4 Inference Chain (推斷鏈)

An inference chain links the initial facts to a solution through a sequence of inferences:

  initial facts: A1 → B → C → D1 → E → solution (with side branches such as A2 → D2 and D3)

  inference + inference + … + inference

  • Forward chaining: infer from the initial facts to solutions.
  • Backward chaining: assume that some solution is true, and try to prove the assumption by finding the required facts.
S.S. Tseng & G.J. Hwang

Forward Chaining(前向鏈結):

Rule1: elephant(x) → mammal(x)

Rule2: mammal(x) → animal(x)

Fact: John is an elephant, so elephant(John) is true.

x = John (unification): elephant(x) → mammal(x) fires, so mammal(John) is true.

x' = x = John: mammal(x') → animal(x') fires, so animal(John) is true.

  • Unification(變數替代): The process of finding substitutions for variables to make arguments match.
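A minimal sketch of unification and forward chaining over one-argument predicates, mirroring the elephant/mammal/animal example; the rule and fact representation below is an assumption chosen for illustration.

```python
def unify(pattern, fact):
    """Unify e.g. ('elephant', 'x') with ('elephant', 'John') -> {'x': 'John'}."""
    pred1, arg1 = pattern
    pred2, arg2 = fact
    if pred1 != pred2:
        return None
    if arg1.islower():                 # lowercase name = variable
        return {arg1: arg2}
    return {} if arg1 == arg2 else None

# Rule1: elephant(x) -> mammal(x);  Rule2: mammal(x) -> animal(x)
rules = [(("elephant", "x"), ("mammal", "x")),
         (("mammal", "x"), ("animal", "x"))]
facts = {("elephant", "John")}

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for fact in list(facts):
            theta = unify(premise, fact)
            if theta is not None:
                derived = (conclusion[0], theta.get(conclusion[1], conclusion[1]))
                if derived not in facts:
                    facts.add(derived)
                    changed = True

print(facts)   # includes ('mammal', 'John') and ('animal', 'John')
```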

S.S. Tseng & G.J. Hwang

Forward Chaining(前向推論)

Rule1: A1 and B1 → C1

Rule2: A2 and C1 → D2

Rule3: A3 and B2 → D3

Rule4: C1 and D3 → G

Facts: A1, A2, A3, B1, and B2 are true.

{A1, A2, A3, B1, B2} match {r1, r3}

fire r1 → {A1, A2, A3, B1, B2, C1} match {r1, r2, r3}

fire r2 → {A1, A2, A3, B1, B2, C1, D2} match {r1, r2, r3}

fire r3 → {A1, A2, A3, B1, B2, C1, D2, D3} match {r1, r2, r3, r4}

fire r4 → {A1, A2, A3, B1, B2, C1, D2, D3, G}

GOAL
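The firing sequence above can be reproduced with a few lines of Python; rules are (antecedent set, consequent) pairs, and a rule fires whenever all its antecedents are in working memory (a minimal sketch, not a full inference engine):

```python
rules = {
    "r1": ({"A1", "B1"}, "C1"),
    "r2": ({"A2", "C1"}, "D2"),
    "r3": ({"A3", "B2"}, "D3"),
    "r4": ({"C1", "D3"}, "G"),
}
facts = {"A1", "A2", "A3", "B1", "B2"}
goal = "G"

fired = True
while fired and goal not in facts:
    fired = False
    for name, (antecedents, consequent) in rules.items():
        if antecedents <= facts and consequent not in facts:
            print("fire", name, "->", consequent)
            facts.add(consequent)
            fired = True

print("goal reached:", goal in facts)   # fires r1, r2, r3, r4 and reaches G
```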

S.S. Tseng & G.J. Hwang

Backward Chaining (反向推論)

rule1: A1 and B1 → C1

rule2: A2 and C1 → D2

rule3: A3 and B2 → D3

rule4: C1 and D3 → G

rule5: C1 and D4 → G'

facts: A1, A2, B1, B2, A3

1. Assume G' is true (R5).
   Verify C1 and D4: C1 holds via R1 (verify A1 and B1, both OK), but D4 is unknown, so ask the user.
   If D4 is FALSE, give up.

2. Assume G is true (R4).
   Verify C1 and D3: C1 holds via R1 (verify A1 and B1, both OK); D3 holds via R3 (verify A3 and B2, both OK).
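A minimal backward-chaining sketch of the same trace: to prove a goal, find a rule that concludes it and recursively prove its antecedents, falling back to asking the user for facts that no rule can establish (D4 here). The `ask_user` stub is an assumption added for illustration.

```python
rules = [
    ({"A1", "B1"}, "C1"),   # R1
    ({"A2", "C1"}, "D2"),   # R2
    ({"A3", "B2"}, "D3"),   # R3
    ({"C1", "D3"}, "G"),    # R4
    ({"C1", "D4"}, "G'"),   # R5
]
facts = {"A1", "A2", "B1", "B2", "A3"}

def ask_user(fact):
    """Stand-in for an interactive prompt; here the user denies D4."""
    print("ask user:", fact, "? -> no")
    return False

def prove(goal, depth=0):
    print("  " * depth + "verify", goal)
    if goal in facts:
        return True
    applicable = [r for r in rules if r[1] == goal]
    for antecedents, _ in applicable:
        if all(prove(a, depth + 1) for a in antecedents):
            return True
    if not applicable:          # no rule concludes this fact: ask the user
        return ask_user(goal)
    return False                # some rule concludes it, but its premises fail

print("G' provable:", prove("G'"))   # fails: D4 is unknown and denied
print("G  provable:", prove("G"))    # succeeds via R4, using R1 and R3
```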

S.S. Tseng & G.J. Hwang

(Inference network for the example: facts A1, B1, A2, A3, B2 feed the intermediate conclusions C1, D2, and D3, which lead to the goal G.)

S.S. Tseng & G.J. Hwang

Good application of forward chaining(前向鏈結)

Reasoning from facts toward the goal works well when the problem space is broad and not deep, or when there are too many possible goals.

S.S. Tseng & G.J. Hwang

Good application of backward chaining(後向鏈結)

Reasoning from the goals back toward the facts works well when the problem space is narrow and deep.

S.S. Tseng & G.J. Hwang

Forward Chaining(前向鏈結)
  • Planning
  • Monitoring
  • Control
  • Data-driven
  • Explanation not facilitated

Backward chaining(後向鏈結)
  • Diagnosis
  • Goal-driven
  • Explanation facilitated

S.S. Tseng & G.J. Hwang

Analogy
  • Try to relate old situations as guides to new ones
  • Consider tic-tac-toe with the cells numbered as a magic square (the 15 game):
          • 6 1 8
          • 7 5 3
          • 2 9 4
  • 18 game from set {2,3,4,5,6,7,8,9,10}
  • 21 game from set {3,4,5,6,7,8,9,10,11}
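The analogy can be checked directly: the eight winning lines of tic-tac-toe correspond exactly to the eight triples of distinct numbers from 1 to 9 that sum to 15 in the magic square, so playing the "15 game" is playing tic-tac-toe. A small sketch (not from the slides):

```python
from itertools import combinations

magic = [[6, 1, 8],
         [7, 5, 3],
         [2, 9, 4]]

# The eight tic-tac-toe lines: rows, columns, and the two diagonals.
lines = ([set(row) for row in magic] +
         [set(col) for col in zip(*magic)] +
         [{magic[i][i] for i in range(3)}, {magic[i][2 - i] for i in range(3)}])

# All 3-element subsets of 1..9 that sum to 15.
triples = [set(c) for c in combinations(range(1, 10), 3) if sum(c) == 15]

print(len(triples))                                                 # 8
print(sorted(map(sorted, lines)) == sorted(map(sorted, triples)))   # True
```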

S.S. Tseng & G.J. Hwang

Nonmonotonic reasoning
  • In a nonmonotonic system, the theorems do not necessarily increase as the number of axioms increases.
  • As a very simple example, suppose there is a fact that asserts the time. As soon as time changes by a second, the old fact is no longer valid.

S.S. Tseng & G.J. Hwang

4.5 Reasoning Under Uncertainty(不確定性推論)
  • Uncertainty can be considered as the lack of adequate information to make a decision.
  • Classical probability, Bayesian probability, Dempster-Shafer theory, and Zadeh's fuzzy theory.
  • In the MYCIN and PROSPECTOR systems, conclusions are arrived at even when all the evidence needed to absolutely prove the conclusion is not known.

S.S. Tseng & G.J. Hwang

Example: many different types of error can contribute to uncertainty (illustrated with a valve-control scenario).

Error                            Example                                                      Reason
Ambiguous                        Turn the valve off                                           What valve?
Incomplete                       Turn valve-1                                                 Which way?
Incorrect                        Turn valve-1 off                                             Correct is on
  False positive (接受錯誤值)    Valve is stuck                                               Valve is not stuck
  False negative (拒絕正確值)    Valve is not stuck                                           Valve is stuck
Imprecise                        Turn valve-1 to 5                                            Correct is 5.4
Inaccurate                       Turn valve-1 to 5.4                                          Correct is 9.2
Unreliable                       Turn valve-1 to 5.4 or 6 or 0                                Equipment error
Random error                     Valve-1 setting is 5.4 or 5.5 or 5.1                         Statistical fluctuation (波動)
Systematic error                 Valve-1 setting is 7.5                                       Mis-calibration (刻度)
Invalid induction                Valve-1 is not stuck because it's never been stuck before    Valve is stuck
Invalid deduction                Output is normal and so valve-1 is in good condition         Valve is stuck in open position

S.S. Tseng & G.J. Hwang

A hypothesis is an assumption to be tested.
  • Type 1 error (false positive) means acceptance of a hypothesis when it is not true.
  • Type 2 error (false negative) means rejection of a hypothesis when it is true.
  • Errors of measurement
    • Precision
      • A millimeter(公釐) ruler is more precise than a centimeter ruler.
    • Accuracy

S.S. Tseng & G.J. Hwang

Error & Induction

The process of induction is the opposite of deduction.

  The fire alarm goes off (響起)
  ∴ There is a fire.

An even stronger argument is:

  The fire alarm goes off & I smell smoke
  ∴ There is a fire.

Although this is a strong argument, it is not proof that there is a fire. Adding further evidence, such as "My clothes are burning", strengthens the argument but still does not prove it.

S.S. Tseng & G.J. Hwang

Deductive errors

p → q
q
∴ p     (invalid: this is the fallacy of affirming the consequent)

If John is a father, then John is a man.
John is a man.
∴ John is a father.

S.S. Tseng & G.J. Hwang

Bayes' Theorem (貝氏定理)
  • Conditional probability(條件機率) P(A | B) states the probability of event A given that event B has occurred.
  • Suppose you have a drive and don't know its brand. What is the probability that if it crashes, it is Brand X? Non-Brand X?
  • P(C) = P(C ∩ X) + P(C ∩ X') = 0.6 + 0.1 = 0.7
  • P(X | C) = P(C | X) P(X) / P(C) = (0.75)(0.8) / 0.7 = 6/7

S.S. Tseng & G.J. Hwang

Decision Tree for the Disk Drive Crashes

Act / prior, P(Hi):
  Choose Brand X: P(X) = 0.8
  Don't choose Brand X: P(X') = 0.2

Conditional, P(E | Hi):
  P(C | X) = 0.75,   P(C' | X) = 0.25
  P(C | X') = 0.5,   P(C' | X') = 0.5

Joint, P(E ∩ Hi) = P(E | Hi) P(Hi):
  P(C ∩ X) = 0.6,    P(C' ∩ X) = 0.2
  P(C ∩ X') = 0.1,   P(C' ∩ X') = 0.1

Posterior, P(Hi | E) = P(E ∩ Hi) / Σi P(E ∩ Hi):
  P(X | C)   = 0.6 / (0.6 + 0.1) = 6/7
  P(X | C')  = 0.2 / (0.2 + 0.1) = 2/3
  P(X' | C)  = 0.1 / (0.1 + 0.6) = 1/7
  P(X' | C') = 0.1 / (0.1 + 0.2) = 1/3
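The posterior values in the decision tree can be reproduced with a few lines of Python; this is just a sketch of the same arithmetic, using the slide's priors and conditionals:

```python
from fractions import Fraction

# Priors and conditionals from the decision tree.
P_X, P_notX = Fraction(8, 10), Fraction(2, 10)
P_C_given_X, P_C_given_notX = Fraction(75, 100), Fraction(5, 10)

# Joint probabilities P(E and H) = P(E | H) P(H).
P_C_and_X = P_C_given_X * P_X                    # 0.6
P_C_and_notX = P_C_given_notX * P_notX           # 0.1
P_C = P_C_and_X + P_C_and_notX                   # 0.7

# Posteriors by Bayes' theorem.
print("P(X | C)   =", P_C_and_X / P_C)           # 6/7
print("P(X' | C)  =", P_C_and_notX / P_C)        # 1/7

P_notC_and_X = (1 - P_C_given_X) * P_X           # 0.2
P_notC_and_notX = (1 - P_C_given_notX) * P_notX  # 0.1
P_notC = P_notC_and_X + P_notC_and_notX          # 0.3
print("P(X | C')  =", P_notC_and_X / P_notC)     # 2/3
print("P(X' | C') =", P_notC_and_notX / P_notC)  # 1/3
```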

S.S. Tseng & G.J. Hwang

P(Hi | E) = P(E ∩ Hi) / Σj P(E ∩ Hj) = P(E | Hi) P(Hi) / Σj P(E | Hj) P(Hj) = P(E | Hi) P(Hi) / P(E)

  • Bayes' Theorem(貝氏定理) is commonly used for decision tree analysis in business and the social sciences.
  • It is used in the PROSPECTOR expert system to decide favorable sites for mineral exploration.
S.S. Tseng & G.J. Hwang

Hypothetical Reasoning and Backward Induction

Prior probabilities (subjective opinion of the site), P(Hi):
  Oil: P(O) = 0.6
  No oil: P(O') = 0.4

Conditional probabilities (seismic test result), P(E | Hi):
  P(+ | O) = 0.8,    P(- | O) = 0.2
  P(+ | O') = 0.1,   P(- | O') = 0.9

Joint probabilities, P(E ∩ Hi) = P(E | Hi) P(Hi):
  P(+ ∩ O) = 0.48,   P(- ∩ O) = 0.12
  P(+ ∩ O') = 0.04,  P(- ∩ O') = 0.36

Unconditional:
  P(+) = P(+ ∩ O) + P(+ ∩ O') = 0.48 + 0.04 = 0.52
  P(-) = P(- ∩ O) + P(- ∩ O') = 0.12 + 0.36 = 0.48

S.S. Tseng & G.J. Hwang

Unconditional probabilities, P(E): P(+) = 0.52, P(-) = 0.48

Posterior probabilities of the site, P(Hi | E) = P(E | Hi) P(Hi) / P(E):
  P(O | +)  = (0.8)(0.6) / 0.52 = 12/13
  P(O' | +) = (0.1)(0.4) / 0.52 = 1/13
  P(O | -)  = (0.2)(0.6) / 0.48 = 1/4
  P(O' | -) = (0.9)(0.4) / 0.48 = 3/4

Joint probabilities, P(E ∩ Hi) = P(Hi | E) P(E):
  P(+ ∩ O) = 0.48,  P(- ∩ O) = 0.12,  P(+ ∩ O') = 0.04,  P(- ∩ O') = 0.36

S.S. Tseng & G.J. Hwang

  • Oil lease, if successful: $1,250,000
  • Drilling expense: -$200,000
  • Seismic survey: -$50,000
  • Expected payoff (success):
      • 846,153 = 1,000,000 × 12/13 - 1,000,000 × 1/13
  • Expected payoff (failure):
      • -500,000 = 1,000,000 × 1/4 - 1,000,000 × 3/4
  • Expected payoff (total):
      • 416,000 = 846,153 × 0.52 - 50,000 × 0.48
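A short script that reproduces the slide's numbers, i.e. the seismic-test posteriors from the decision tree and the expected payoff figures above (this simply repeats the slide's own arithmetic):

```python
from fractions import Fraction

# Posteriors from the decision tree.
P_oil_pos, P_no_oil_pos = Fraction(12, 13), Fraction(1, 13)
P_oil_neg, P_no_oil_neg = Fraction(1, 4), Fraction(3, 4)
P_pos, P_neg = 0.52, 0.48

payoff = 1_000_000
exp_success = payoff * P_oil_pos - payoff * P_no_oil_pos   # after a positive test
exp_fail = payoff * P_oil_neg - payoff * P_no_oil_neg      # after a negative test
print(float(exp_success))                  # ~846153.8
print(float(exp_fail))                     # -500000.0

survey_cost = 50_000
expected_total = float(exp_success) * P_pos - survey_cost * P_neg
print(expected_total)                      # ~416000
```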

S.S. Tseng & G.J. Hwang

Temporal Reasoning and Markov Chains
  • Temporal reasoning: reasoning about events that depend on time
  • Temporal logic
  • The system's progression through a sequence of states is called a stochastic process if it is probabilistic.

S.S. Tseng & G.J. Hwang

The transition matrix

  T = | P11  P12 |
      | P21  P22 |

where Pmn is the probability of a transition from state m to state n.

A state vector S = {P1, P2, …, Pn} where P1 + P2 + … + Pn = 1.

S2 = S1 T

S2 = [0.8, 0.2] | 0.1  0.9 | = [0.2, 0.8]
                | 0.6  0.4 |

S.S. Tseng & G.J. Hwang

Assume 10 percent of all people who now use a Brand X drive will buy another Brand X when needed, and 60 percent of people who don't use Brand X will buy Brand X when they need a new drive. Over a period of time, how many people will use Brand X?

S3 = [0.5, 0.5], S4 = [0.35, 0.65],
S5 = [0.425, 0.575], S6 = [0.3875, 0.6125],
S7 = [0.40625, 0.59375], S8 = [0.396875, 0.603125], …

Steady state matrix: the iteration converges to [0.4, 0.6].
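The iteration can be carried out with a few lines of Python; repeatedly applying S·T from S1 = [0.8, 0.2] reproduces the sequence above and converges to the steady-state vector [0.4, 0.6] (a minimal sketch, no linear-algebra library required):

```python
T = [[0.1, 0.9],    # from Brand X:     10% stay, 90% switch
     [0.6, 0.4]]    # from non-Brand X: 60% switch to X, 40% stay

def step(s, t):
    """One Markov step: the next state vector is s * T."""
    return [sum(s[i] * t[i][j] for i in range(len(s))) for j in range(len(s))]

s = [0.8, 0.2]                      # S1
for k in range(2, 9):
    s = step(s, T)
    print(f"S{k} = {s}")

# Keep iterating until the vector stops changing: the steady state.
while True:
    nxt = step(s, T)
    if max(abs(a - b) for a, b in zip(s, nxt)) < 1e-12:
        break
    s = nxt
print("steady state:", [round(x, 4) for x in s])   # [0.4, 0.6]
```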

S.S. Tseng & G.J. Hwang

The Odds of Belief
  • "The patient is covered with red spots"
  • Proposition A: "The patient has measles"
  • P(A | B), the degree of belief that A is true given B, is not necessarily a probability if the events and propositions cannot be repeated or do not have a mathematical basis.

S.S. Tseng & G.J. Hwang

The odds on A against B given some event C are: odds = P(A | C) / P(B | C)
  • If B = A'
    • odds = P(A | C) / P(A' | C) = P(A | C) / (1 - P(A | C))
  • Ex: likelihood P = 0.95
    • odds = 0.95 / (1 - 0.95) = 19 to 1

S.S. Tseng & G.J. Hwang

Sufficiency and Necessity

Bayes' Theorem is

  P(H | E) = P(E | H) P(H) / P(E)

and for the negation,

  P(H' | E) = P(E | H') P(H') / P(E)

S.S. Tseng & G.J. Hwang

Dividing the first by the second gives

  P(H | E) / P(H' | E) = [ P(E | H) P(H) ] / [ P(E | H') P(H') ]

Defining the prior odds on H as

  O(H) = P(H) / P(H')

and the posterior odds as

  O(H | E) = P(H | E) / P(H' | E)
S.S. Tseng & G.J. Hwang

The likelihood ratio

  LS = P(E | H) / P(E | H')

gives

  O(H | E) = LS O(H)

which is the odds-likelihood form of Bayes' Theorem. The factor LS is also called the likelihood of sufficiency because if LS = ∞ then the evidence E is logically sufficient for concluding that H is true.

S.S. Tseng & G.J. Hwang

The likelihood of necessity, LN, is defined similarly to LS as

  LN = P(E' | H) / P(E' | H') = O(H | E') / O(H) = [ P(H | E') / P(H' | E') ] / [ P(H) / P(H') ]

so that

  O(H | E') = LN O(H)

If LN = 0, then P(H | E') = 0. This means that H must be false when E' is true. Thus if E is not present then H is false, which means that E is necessary for H.

S.S. Tseng & G.J. Hwang

LS                    Effect on the hypothesis
0                     H is false when E is true, or E' is necessary for concluding H
Small (0 < LS << 1)   E is unfavorable for concluding H
1                     E has no effect on the belief of H
Large (1 << LS)       E is favorable for concluding H
∞                     E is logically sufficient for H, or observing E means H must be true

S.S. Tseng & G.J. Hwang

LN                    Effect on the hypothesis
0                     H is false when E is absent (E' is true), or E is necessary for H
Small (0 < LN << 1)   Absence of E is unfavorable for concluding H
1                     Absence of E has no effect on H
Large (1 << LN)       Absence of E is favorable for concluding H
∞                     Absence of E is logically sufficient for H

S.S. Tseng & G.J. Hwang

Uncertainty in inference chains
  • Uncertainty may be present in rules, evidence used by the rules, or both.

S.S. Tseng & G.J. Hwang

Expert Inconsistency

If LS > 1 then P(E | H') < P(E | H), so

  1 - P(E | H') > 1 - P(E | H)

and therefore

  LN = [ 1 - P(E | H) ] / [ 1 - P(E | H') ] < 1

Thus the values an expert supplies for LS and LN are consistent only in these cases:

Case 1: LS > 1 and LN < 1
Case 2: LS < 1 and LN > 1
Case 3: LS = LN = 1
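The consistency constraint can be checked numerically: computing LS and LN from the same pair of conditional probabilities always lands in one of the three cases above. A minimal sketch; the probability values are made-up illustrations.

```python
def ls_ln(p_e_given_h, p_e_given_not_h):
    """Compute LS = P(E|H)/P(E|H') and LN = P(E'|H)/P(E'|H')."""
    ls = p_e_given_h / p_e_given_not_h
    ln = (1 - p_e_given_h) / (1 - p_e_given_not_h)
    return ls, ln

for p1, p2 in [(0.9, 0.3), (0.2, 0.6), (0.5, 0.5)]:
    ls, ln = ls_ln(p1, p2)
    print(f"P(E|H)={p1}, P(E|H')={p2} -> LS={ls:.2f}, LN={ln:.2f}")
# LS > 1 forces LN < 1, LS < 1 forces LN > 1, and LS = 1 forces LN = 1.
```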

S.S. Tseng & G.J. Hwang

Exercise
  • Consider the following facts and rules, and describe the inference process using forward chaining and backward chaining.

Facts: A1, A2, A3, A4, B1, B2

Rules: R1: A1 and A3 --> C2

       R2: A1 and B1 --> C1

       R3: A2 and C2 --> D2

       R4: A3 and B2 --> D3

       R5: C1 and D2 --> G1

       R6: B1 and B2 --> D4

       R7: A1 and A2 and A3 --> D2

       R8: C1 and D3 --> G2

       R9: C2 and A4 --> G3

Goals: G1, G2, and G3

S.S. Tseng & G.J. Hwang