
Applied systems based on computability logic




  1. Episode 17. Applied systems based on computability logic
• Computability logic as a problem-solving tool
• Knowledgebase systems based on computability logic
• Constructiveness, interactivity and resource-consciousness
• Systems for planning and action based on computability logic
• Applied theories based on computability logic
• Computability-logic-based arithmetic

  2. 17.1 Computability logic as a problem-solving tool  The original motivations behind computability logic were computability-theoretic: the approach provides a systematic answer to the question "what can be computed?", which is a fundamental question of computer science. Yet, the uniform-constructive nature of the known soundness theorems for its various fragments reveals that our logic is not only about what can be computed. It is equally about how problems can be computed/solved. In other words, computability logic is a problem-solving tool. As such, it can be expected to have applications far beyond the pure theory of computing. In this episode we will very briefly examine three potential application areas:
• Knowledgebase systems
• Systems for planning and action
• Constructive applied theories

  3. 17.2 What is Dana's gender?  The reason for the failure of p⊔¬p as a computability-theoretic principle is that the problem represented by this formula may have no effective solution --- that is, the predicate p* may be undecidable, such as the predicate Halts. The reason why p⊔¬p fails as a knowledge-theoretic principle, however, is much simpler. Consider
(1) Female(Dana) ⊔ ¬Female(Dana)
Does this problem have an algorithmic solution? Sure! And a very simple one. OK, then please solve it (win the game). You cannot, can you? That's exactly the point. The problem has a solution, but the trouble is that you do not know that solution. In other words, you do not know whether Dana is male or female. Then how about the following one?
(2) Dana=Mother(Tom) → Female(Dana) ⊔ ¬Female(Dana)
This problem you probably can solve, because you know that only females can be mothers. But what if you are unaware of the connection between motherhood and gender? If this sort of ignorance is hard for you to imagine, then try to solve
(3) დანა=დედა(ტომ) → ქალი(დანა) ⊔ ¬ქალი(დანა)
Probably you cannot, even though (3) is the same as (2), only stated in Georgian.

  4. 17.3 In the machine's shoes
(3) დანა=დედა(ტომ) → ქალი(დანა) ⊔ ¬ქალი(დანა)
Now how about the following problem?
(4) ∀x(∃y(x=დედა(y))→ქალი(x)) ∧ დანა=დედა(ტომ) → ქალი(დანა) ⊔ ¬ქალი(დანა)
This one you can solve. And not because I am revealing the secret that it means the same as the English
(5) ∀x(∃y(x=Mother(y))→Female(x)) ∧ Dana=Mother(Tom) → Female(Dana) ⊔ ¬Female(Dana)
but rather because you have been taking this course and know computability LOGIC. Looking at (3) and (4) will help you better appreciate how a machine (robot, software) "feels" when it receives a problem to solve. Just as you have no understanding of Georgian and have to rely exclusively on logic, so do machines, with the difference that they do not understand English either, nor do they understand that only females can be mothers unless told so, as done by the first conjunct of the antecedent of (5). To summarize, you (or a general-purpose logic-based machine) can solve (4) but not (3) because (4) is uniformly valid while (3) is not. Going back to p⊔¬p, the reason why it fails as a computability-theoretic principle is that it is not valid, while the reason why it fails as a knowledge-theoretic principle is that it is not uniformly valid. That is why, in applications such as knowledgebase or planning systems, we are exclusively concerned with uniform validity rather than validity.

  5. 17.4 Why the DNA test was invented  So, no matter how intelligent you (or a computer system) are, without some special, "external" resources you cannot really solve Female(Dana) ⊔ ¬Female(Dana), or Pregnant(Dana) ⊔ ¬Pregnant(Dana) --- the problem solved by disposable pregnancy test devices sold at pharmacies. Let us now look at the problem Pregnant(Dana) ∨ ¬Pregnant(Dana). Any system can "solve" this problem because it is an automatically won elementary game, so there is nothing to really solve at all. Solving this problem, or any other problem that is uniformly valid, requires no special (non-logical) knowledge. On the other hand, the fact that Pregnant(Dana) ⊔ ¬Pregnant(Dana) is not uniformly valid signifies that solving it does require some special ability and that the problem is not trivial at all. If it was, then the pregnancy test manufacturers would go bankrupt! Similarly, the problem ∀x∃y(y=Father(x)) is trivial and automatically "solvable". On the other hand, the problem ⊓x⊔y(y=Father(x)) is not trivial at all (that is why the DNA test was invented, after all!). Solving it does require some special, non-logical knowledge or ability.

  6. 17.5 Knowledgebase systems  A knowledgebase system is a computer system for knowledge management. It provides the means for collection, organization, and retrieval of knowledge. There is no clear distinction between knowledgebase systems and database systems. It is probably accurate to say that a knowledgebase system is a database system, but typically with some higher degree of "intelligence" than primitive database systems. That is, a knowledgebase is a database that usually has some artificial intelligence components. Expert systems are the most common examples of knowledgebases heavily using AI. As advanced sorts of databases, knowledgebases absolutely have to allow complex logical expressions for storage and retrieval (queries) of knowledge and, ideally, be able to apply logic for nontrivial automated reasoning, deduction and problem-solving. Knowledgebases, in fact, are applied logic systems. And the important question is what logic they should be based upon. Typically classical logic is not sufficient, and some augmentations of it are sought. Most common is augmenting logic with epistemic constructs such as the modal "know that" operator. Epistemic logics, however, face many problems and appear to be far from satisfactory. On the following few slides we show the advantages of computability logic over the traditional approaches to knowledgebase logics, and its potential as a logic on which knowledgebases can be built.

  7. 17.6 Constructiveness  A good logic for knowledgebase systems should be constructive. Such a logic should be able to differentiate between truth and the actual ability to tell/find what is true. The non-constructive "knowledge" (informational resource) expressed by ∀x∃y(y=Father(x)) is very different from (and hardly as relevant as) the constructive and nontrivial knowledge expressed by ⊓x⊔y(y=Father(x)), which implies potential knowledge of everyone's actual father rather than the tautological knowledge that everyone has a father. Yet, classical logic fails to see or express this important difference. This makes it inadequate as a logic of knowledgebases. Computability logic promises to be adequate.
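
As a rough programming analogy (my illustration, not from the slides, with a made-up lookup table), the constructive formula corresponds to actually holding a procedure that names the father, while the classical one is satisfied by mere existence:

```python
# Hypothetical lookup table standing in for actual knowledge of fathers.
father_of = {"Tom": "Bob", "Dana": "Carl"}

def find_father(x: str) -> str:
    # Winning ⊓x⊔y(y=Father(x)) means producing a concrete y for any given x;
    # the classical ∀x∃y(y=Father(x)) would be "true" even without this table.
    return father_of[x]          # fails (KeyError) exactly where knowledge is missing

print(find_father("Tom"))        # -> "Bob"
```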

  8. 17.7 Interactivity  A good logic for knowledgebase systems should be interactive. Most actual knowledgebase and information systems are interactive. And computability logic, which is designed to be a logic of interactive tasks, is a well-suited formal framework for them and an appealing alternative to the more traditional frameworks. Imagine a medical diagnostics system. What we would like the system to do is to tell us, for any patient x, the diagnosis y for x. Most likely, the query that the system solves would look like ∀x(Q(x) → ⊔y(y=Diagnosis(x))), where Q(x) is a (counter)query with questions regarding x's symptoms, blood pressure, cholesterol level, reaction to various drugs, etc. And probably Q(x) would not be just a ∧-conjunction of such questions; rather, it would have a more complex structure, where what questions are asked could depend on the answers that the user gave to previous questions, yielding a long dialogue with a series of interspersed moves by both parties. Or remember your automated bank account information system. You dial the bank-by-phone number to inquire about your balance. But the query that the system solves is not really as simple as ⊔xMyBalance(x). If this was the case, then you would be told your balance right after dialing the number. Rather, you will have to go through quite a dialogue, with all sorts of questions regarding your preferences, account type and number, secret PIN or even mother's maiden name.

  9. 17.8 Resource-consciousness  A good logic for knowledgebase systems should be resource-conscious. Possessing a disposable pregnancy test means having the following informational resource:
(6) ⊓x(Pregnant(x) ⊔ ¬Pregnant(x))
With this resource, you can tell if Dana (or any other woman) is pregnant. That is, the following conditional problem can be solved unconditionally by an agent that knows logic but otherwise has no additional physical or informational resources:
⊓x(Pregnant(x) ⊔ ¬Pregnant(x)) → Pregnant(Dana) ⊔ ¬Pregnant(Dana)
But can such an agent also solve the following problem?
⊓x(Pregnant(x) ⊔ ¬Pregnant(x)) → (Pregnant(Dana) ⊔ ¬Pregnant(Dana)) ∧ (Pregnant(Jane) ⊔ ¬Pregnant(Jane))
In other words, can an agent that has nothing but pure intellect (perfect knowledge of logic) plus one single pregnancy test kit tell the pregnancy status of both Dana and Jane? Not really! The agent would need two rather than one pregnancy test kits for that. That is, it would need the resource
(7) ⊓x(Pregnant(x) ⊔ ¬Pregnant(x)) ∧ ⊓x(Pregnant(x) ⊔ ¬Pregnant(x))
The traditional approaches to knowledgebase systems are unable to see the important and relevant difference between the resources (6) and (7), and that is too bad.
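
To make the resource-counting point tangible, here is a small sketch (my own illustration with invented data, not part of the slides) in which the antecedent resource is modeled as a one-shot callable: one copy of (6) answers one question, and answering about both Dana and Jane needs the two copies of (7):

```python
# A disposable test kit: the resource ⊓x(Pregnant(x) ⊔ ¬Pregnant(x)),
# usable exactly once. (Illustrative model; the truth data is made up.)
class PregnancyTestKit:
    def __init__(self, facts: dict):
        self._facts = facts
        self._used = False

    def test(self, person: str) -> bool:
        if self._used:
            raise RuntimeError("resource exhausted: the kit was already used")
        self._used = True
        return self._facts[person]

facts = {"Dana": True, "Jane": False}      # hypothetical ground truth

one_kit = PregnancyTestKit(facts)          # resource (6)
print(one_kit.test("Dana"))                # fine: one copy, one question
# one_kit.test("Jane")                     # would fail: (6) cannot answer for both

two_kits = [PregnancyTestKit(facts), PregnancyTestKit(facts)]   # resource (7)
print([kit.test(p) for kit, p in zip(two_kits, ["Dana", "Jane"])])
```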

  10. 17.9 A feel of computability logic as a query logic  To get a better feel for computability logic as an interactive knowledgebase logic, consider
(8) ⊓x(⊔sSmpt(x,s) ∧ ⊓t(Pst(x,t) ⊔ ¬Pst(x,t)) → ⊔yHas(x,y))
where
• Smpt(x,s) is the predicate "patient x has the set s of symptoms"
• Pst(x,t) is the predicate "patient x tests positive for test t"
• Has(x,y) is the predicate "patient x has disease y"
While still overly simplified, (8) is a more realistic problem for a real medical diagnostics system to be able to handle than the problem ⊓x⊔yHas(x,y), solving which would require the ability to diagnose an arbitrary patient without any additional information on the patient. Here is a possible scenario of "playing" this "game":
1. At the beginning, the system is waiting for the user ("environment") to specify a patient to be diagnosed. After the user enters the patient's name X, the game is brought down to
⊔sSmpt(X,s) ∧ ⊓t(Pst(X,t) ⊔ ¬Pst(X,t)) → ⊔yHas(X,y)
2. The system continues waiting until the user also enters X's symptoms, say S:
Smpt(X,S) ∧ ⊓t(Pst(X,t) ⊔ ¬Pst(X,t)) → ⊔yHas(X,y)
3. Based on the information received from the user, the system selects a test T to perform on X:
Smpt(X,S) ∧ (Pst(X,T) ⊔ ¬Pst(X,T)) → ⊔yHas(X,y)
4. The user (doctor) performs test T on the patient and reports that it is positive:
Smpt(X,S) ∧ Pst(X,T) → ⊔yHas(X,y)
5. The system reports back that patient X has disease Y:
Smpt(X,S) ∧ Pst(X,T) → Has(X,Y)
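
The scenario above is essentially a scripted dialogue. The following sketch (my illustration, with made-up predicates, decision rules and names) plays it out step by step, with the "system" choosing a test and a diagnosis and the "user" supplying the patient, the symptoms and the test result:

```python
# Playing out query (8): the environment supplies a patient, symptoms and a test
# result; the system picks the test and, at the end, commits to a diagnosis.
# All data and decision rules here are invented for illustration only.

def choose_test(symptoms: set) -> str:
    return "strep_swab" if "sore_throat" in symptoms else "blood_panel"

def diagnose(symptoms: set, test: str, positive: bool) -> str:
    if test == "strep_swab" and positive:
        return "strep_throat"
    return "common_cold"

def run_dialogue(patient: str, symptoms: set, test_results: dict) -> str:
    # Steps 1-2: the environment's moves resolve ⊓x and ⊔s.
    print(f"user: patient {patient}, symptoms {sorted(symptoms)}")
    # Step 3: the system's move resolves ⊓t in the antecedent.
    test = choose_test(symptoms)
    print(f"system: please run test {test}")
    # Step 4: the environment reports Pst(X,T) or ¬Pst(X,T).
    positive = test_results[test]
    print(f"user: test {test} positive = {positive}")
    # Step 5: the system's move resolves ⊔y, bringing the game down to an
    # elementary proposition of the form Smpt(X,S) ∧ Pst(X,T) → Has(X,Y).
    diagnosis = diagnose(symptoms, test, positive)
    print(f"system: {patient} has {diagnosis}")
    return diagnosis

run_dialogue("X", {"sore_throat", "fever"}, {"strep_swab": True, "blood_panel": False})
```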

  11. 17.10 A feel of computability logic as a query logic, continued  The scenario shown on the previous slide brought the interactive query
(8) ⊓x(⊔sSmpt(x,s) ∧ ⊓t(Pst(x,t) ⊔ ¬Pst(x,t)) → ⊔yHas(x,y))
down to the elementary game (proposition) Smpt(X,S) ∧ Pst(X,T) → Has(X,Y). The system wins as long as patient X indeed has disease Y, or the user lied or erred when reporting X's symptoms and test results. The presence of a single copy of ⊓t(Pst(x,t) ⊔ ¬Pst(x,t)) in the antecedent of (8) signifies that the system may request testing a given patient only once. If n tests were potentially required instead, this would be expressed by taking the ∧-conjunction of n identical conjuncts:
⊓t(Pst(x,t) ⊔ ¬Pst(x,t)) ∧ ⊓t(Pst(x,t) ⊔ ¬Pst(x,t)) ∧ ... ∧ ⊓t(Pst(x,t) ⊔ ¬Pst(x,t)).
And if the system potentially needed an unbounded number of tests, then we would prefix that conjunct with a recurrence operator ⫰ and write
(9) ⊓x(⊔sSmpt(x,s) ∧ ⫰⊓t(Pst(x,t) ⊔ ¬Pst(x,t)) → ⊔yHas(x,y))
thus further weakening (8): a system performing task (9) is not as good as the one performing (8), because it requires stronger external (user-provided) informational resources. Replacing the main quantifier ⊓x by ∀x, on the other hand, would strengthen (8), signifying the system's ability to diagnose a patient purely on the basis of his/her symptoms and test results, without knowing who the patient really is. However, if in its diagnostic decisions the system uses some additional information on patients, such as their medical histories stored in its knowledgebase, and hence needs to know the patient's identity, ⊓x cannot be upgraded to ∀x. Replacing ⊓x by the parallel quantifier ∧x would be yet another way to strengthen (8), signifying the system's ability to diagnose all patients rather than any particular one; obviously, effects of at least the same strength would be achieved by just prefixing (8) with a branching or parallel recurrence operator.
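
As a rough resource-accounting illustration (mine, not the slides', with invented data), the bounded ∧-conjunction of n copies corresponds to a fixed stock of single-use test sessions, while the recurrence-prefixed resource corresponds to a supply that can hand out fresh sessions on demand:

```python
from typing import Callable, Iterator

# One copy of ⊓t(Pst(x,t) ⊔ ¬Pst(x,t)): a single-use "test session".
def make_session(results: dict) -> Callable[[str], bool]:
    used = False
    def session(test: str) -> bool:
        nonlocal used
        if used:
            raise RuntimeError("this copy of the resource is already spent")
        used = True
        return results[test]
    return session

results = {"strep_swab": True, "blood_panel": False}   # invented data

# n-fold ∧-conjunction of the resource: exactly n independent sessions.
bounded_stock = [make_session(results) for _ in range(2)]

# Recurrence-style resource: an endless supply of fresh sessions.
def recurrent_supply() -> Iterator[Callable[[str], bool]]:
    while True:
        yield make_session(results)

supply = recurrent_supply()
print(next(supply)("strep_swab"), next(supply)("blood_panel"))  # as many as needed
```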

  12. 17.11 Computability logic as a programming language  Formula (8) of the previous example is by no means uniformly valid. On the other hand, a logic-based system would and should be able to solve only uniformly valid problems. Thus, the logical problem that our imaginary system solves is not really (8). Rather, it is KB → (8), where KB is all the additional non-logical knowledge and resources that the system possesses. It is this KB part that we call the knowledgebase of the knowledgebase system. Formally, such a KB is a finite (multi)set of formulas, which we identify with the ∧-conjunction of its elements, so that KB can also be thought of as one single (probably very long) formula. Do not confuse a knowledgebase with a knowledgebase system. The latter is just "pure", logic-based problem-solving software of universal utility that initially comes to the user without any non-logical knowledge whatsoever. Indeed, built-in non-logical knowledge would make it no longer universally applicable: Dana can be a female in the world of one potential user while a male in the world of another user; and ∀x∀y(x×y=y×x) can be false to a user who understands × as set-theoretic rather than number-theoretic product. It is the user who selects and maintains KB for the system, putting into it all informational resources that (s)he believes are relevant, correct and maintainable. Think of the formalism of computability logic as a highly declarative programming language, and of the process of creating KB as programming in it. Continuing in this direction, a deductive system for computability logic should be thought of as a compiler (or interpreter) for such a programming language. Among the appeals of such a language would be the complete absence of the program verification problem and, more importantly, the removal of the problem-solving burden from the human programmer, whose job would reduce to just stating problems rather than writing programs to solve them. But this, of course, is from the realm of fantasy rather than reality at this point.
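
To give the "KB as a program" picture a concrete shape, here is a throwaway sketch of mine (the names Atom, And, Implies and the sample entries are my own, not an official CoL syntax): the knowledgebase is a list of formulas identified with their ∧-conjunction, and the problem actually handed to the underlying prover is KB → F:

```python
from dataclasses import dataclass

# A tiny fragment of the formula language, just enough to form KB → F.
@dataclass(frozen=True)
class Atom:
    name: str
    args: tuple = ()

@dataclass(frozen=True)
class And:          # parallel conjunction ∧
    left: object
    right: object

@dataclass(frozen=True)
class Implies:      # reduction →
    antecedent: object
    consequent: object

def conjoin(formulas: list) -> object:
    # Identify a (multi)set KB with the ∧-conjunction of its elements.
    result = formulas[0]
    for f in formulas[1:]:
        result = And(result, f)
    return result

KB = [Atom("Female", ("Dana",)), Atom("Eq", ("Dana", "Mother(Tom)"))]   # sample "program"
query = Atom("Female", ("Dana",))
problem = Implies(conjoin(KB), query)    # what the "compiler" (prover) would receive
print(problem)
```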

  13. 17.12 What could be in the knowledgebase  A KB can include anything. For example:
• Atomic elementary formulas expressing factual knowledge, such as Female(Dana) or Dana=Mother(Tom)
• Non-atomic elementary formulas expressing general knowledge, such as ∀x(∃y(x=Mother(y))→Female(x)) or ∀x∀y(x×(y+1)=(x×y)+x)
• Non-classical formulas, such as ⊓x(Female(x) ⊔ Male(x)) expressing potential knowledge of everyone's gender, or ⊓x⊔y(y=x²) expressing the ability to repeatedly compute the square function, or something more complex and more interactive, such as ⊓x(⊔sSmpt(x,s) ∧ ⊓t(Pst(x,t) ⊔ ¬Pst(x,t)) → ⊔yHas(x,y))
The most typical informational resources, such as factual knowledge or queries solved by computer programs, can be reused and therefore prefixed with a recurrence operator (we did not use recurrences for the above elementary formulas because elementary formulas are equivalent to their own recurrences). But exhaustible and limited informational resources, such as the ⊓x(Pregnant(x) ⊔ ¬Pregnant(x)) provided by a disposable device, may well come without such prefixes.

  14. 17.13 Resource providers  With each resource R∈KB is associated (if not physically, at least conceptually) its provider --- an agent that solves the query R for the system, i.e. plays the game R against the system. Physically the provider could be:
• a computer program allocated to the system, or
• a network server having the system as a client, or
• another knowledgebase system to which the system has querying access, or
• even human personnel servicing the system. E.g., the provider for ⊓x⊔y(y=Bloodpressure(x)) would probably be a team of nurses repeatedly performing the task of measuring the blood pressure of a patient specified by the system and reporting the outcome back to the system.
We should not think of providers as parts of the system itself. The latter only sees what resources are available to it, without knowing or caring about how the corresponding providers do their jobs; furthermore, the system does not even care whether the providers really do their jobs right. The system's responsibility is only to correctly solve queries for the user as long as none of the providers fail to do their job. Indeed, if the system misdiagnoses a patient because a nurse-provider gave it wrong information about that patient's blood pressure, the hospital is unlikely to fire the system and demand a refund from its vendor; more likely, it would fire the nurse. Of course, when R is elementary, the provider has nothing to do, and its successfully playing R against the system simply means that R is true. Note that in the picture we have just presented, the system plays each game R∈KB in the role of ⊥, so that, from the system's perspective, the game that it plays against the provider of R is ¬R rather than R.
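
One way to picture the provider abstraction in software terms (my illustration; the blood-pressure numbers and thresholds are made up): the system only holds callables with agreed signatures, and whether a callable is backed by a program, a remote server, another knowledgebase system or a team of nurses is invisible to it:

```python
from typing import Callable

# The system sees only the resource's "shape": given a patient, it yields a
# blood-pressure reading, i.e. it plays ⊓x⊔y(y=Bloodpressure(x)) for the system.
BloodPressureProvider = Callable[[str], int]

def nurse_team(patient: str) -> int:
    # Stand-in for human personnel taking the measurement (made-up numbers).
    return {"X": 120, "Y": 135}.get(patient, 118)

def system_step(provider: BloodPressureProvider, patient: str) -> str:
    # The system neither knows nor cares how the provider does its job; it is
    # answerable for its own moves only, assuming the provider's answers are correct.
    bp = provider(patient)
    return f"{patient}: hypertension suspected" if bp >= 130 else f"{patient}: normal"

print(system_step(nurse_team, "X"))
print(system_step(nurse_team, "Y"))
```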

  15. 17.14 The big game  Assume KB = R1∧...∧Rn, and let us now try to visualize a system solving a problem F for the user. The designer would probably select an interface where the user only sees the moves made by the system in F, and hence gets the illusion that the system is just playing F. But in fact the game that the system is really playing is KB→F, i.e. R1∧...∧Rn→F. Indeed, the system is not only interacting with the user in F, but --- in parallel --- also with its resource providers, against whom, as we already know, it plays ¬R1,...,¬Rn. As long as those providers do not fail to do their jobs, the system loses each of the games ¬R1,...,¬Rn. This means that the system wins its play over the "big game" R1∧...∧Rn→F if and only if it wins it in the F component, i.e. successfully solves F. Thus, the system's ability to solve a problem/query F reduces to its ability to generate a solution for KB→F. What would give the system such an ability is built-in knowledge of computability logic --- in particular, a uniform-constructively sound axiomatization of it. According to the uniform-constructive soundness property (see Slide 15.14 or 16.8), it would be sufficient for the system to find a proof of KB→F, which would allow it to (effectively) construct a machine M and then run it on KB→F with guaranteed success. Notice that it is uniform-constructive soundness rather than simple soundness of the built-in (axiomatization of the) logic that allows the knowledgebase system to function. Simple soundness just means that every provable formula is valid. This is not sufficient, for two reasons.

  16. 17.15 Why soundness should be uniform-constructive  One reason is that validity of a formula E only implies that, for every interpretation *, a solution for the problem E* exists. It may be the case, however, that different interpretations require different solutions, so that choosing the right solution requires knowledge of the actual interpretation, i.e. the meaning, of the atoms of E. Our assumption is that the system has no non-logical knowledge, which, in more precise terms, means nothing but that it has no knowledge of the interpretation *. Thus, a solution that the system generates for E* should be successful for any possible interpretation *. In other words, it should be a uniform solution for E. The other reason why simple soundness of the built-in logic would not be sufficient for a knowledgebase system to function --- even if every provable formula was known to be uniformly valid --- is the following. With simple soundness, after finding a proof of E, even though the system would know that a solution for E* exists, it might have no way to actually find such a solution. On the other hand, uniform-constructive soundness guarantees that a (uniform) solution for every provable formula not only exists, but can be effectively extracted from a proof.

  17. 17.16 How about completeness?  As for the completeness of the built-in logic, unlike uniform-constructive soundness, it is a desirable but not necessary condition. So far a complete axiomatization has been found essentially only for the fragment of computability logic limited to the language of CL4. We hope that the future will bring completeness results for more expressive fragments as well. But even if not, we can still certainly succeed in finding ever stronger axiomatizations that are uniform-constructively sound even if not necessarily complete. One such axiomatization is affine logic. It should be remembered that, when it comes to practical applications in the proper sense, the logic that will be used is likely to be far from complete anyway. For example, the popular classical-logic-based systems and programming languages are incomplete, and the reason is not that a complete axiomatization for classical logic is not known, but rather the unfortunate fact of life that efficiency often comes only at the expense of completeness. But even CL4, despite the absence of recurrence operators in it, is already very powerful. Why don't we look at a simple example (on the next slide) to get a taste of it as a query-solving logic.

  18. 17.17 The acidity query  Let Acid(x) mean "solution x contains acid", and Red(x) mean "litmus paper turns red in solution x". Assume the knowledgebase KB of a CL4-based knowledgebase system contains the following two formulas:
∀x(Red(x)↔Acid(x))  and  ⊓x(Red(x) ⊔ ¬Red(x)).
The left one accounts for the knowledge of the fact that a solution contains acid iff the litmus paper turns red in it. And the right formula accounts for the availability of a provider who possesses a piece of litmus paper that it can dip into any solution, reporting the paper's color to the system. Then the system can solve the acidity query ⊓x(Acid(x) ⊔ ¬Acid(x)). This follows from the fact that CL4 ⊦ KB → ⊓x(Acid(x) ⊔ ¬Acid(x)), as shown on the next slide.

  19. 17.18 CL4 at work
1. ∀x(Red(x)↔Acid(x)) ∧ Red(y) → Acid(y)   from {} by Rule A
2. ∀x(Red(x)↔Acid(x)) ∧ Red(y) → Acid(y) ⊔ ¬Acid(y)   from 1 by Rule B1
3. ∀x(Red(x)↔Acid(x)) ∧ ¬Red(y) → ¬Acid(y)   from {} by Rule A
4. ∀x(Red(x)↔Acid(x)) ∧ ¬Red(y) → Acid(y) ⊔ ¬Acid(y)   from 3 by Rule B1
5. ∀x(Red(x)↔Acid(x)) ∧ (Red(y) ⊔ ¬Red(y)) → Acid(y) ⊔ ¬Acid(y)   from {2,4} by Rule A
6. ∀x(Red(x)↔Acid(x)) ∧ ⊓x(Red(x) ⊔ ¬Red(x)) → Acid(y) ⊔ ¬Acid(y)   from 5 by Rule B2
7. ∀x(Red(x)↔Acid(x)) ∧ ⊓x(Red(x) ⊔ ¬Red(x)) → ⊓x(Acid(x) ⊔ ¬Acid(x))   from {6} by Rule A
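
The winning strategy that uniform-constructive soundness lets us read off this proof is very simple. Here is a sketch of it (my rendering, with a stubbed-in litmus provider and invented chemistry data): wait for the environment to pick a solution, query the litmus resource on it, and answer the acidity question accordingly.

```python
from typing import Callable

# Strategy informally extracted from the CL4 proof above:
# to solve ⊓x(Acid(x) ⊔ ¬Acid(x)), take the solution x chosen by the environment,
# use the litmus-paper resource ⊓x(Red(x) ⊔ ¬Red(x)), and answer via
# the knowledge ∀x(Red(x) ↔ Acid(x)).
def acidity_strategy(solution: str, litmus: Callable[[str], bool]) -> bool:
    turns_red = litmus(solution)   # move in the antecedent component
    return turns_red               # Red(x) ↔ Acid(x): answer "Acid(x)" iff red

# A stand-in litmus provider with made-up data.
def litmus_provider(solution: str) -> bool:
    return {"vinegar": True, "soapy water": False}[solution]

print(acidity_strategy("vinegar", litmus_provider))       # True  -> contains acid
print(acidity_strategy("soapy water", litmus_provider))   # False -> no acid
```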

  20. 17.19 From facts to tasks  Now we outline how the context of knowledgebase systems can be further extended to systems for planning and action. Roughly, the formal semantics in such applications remains the same; what changes is only the underlying philosophical assumption that the truth values of predicates and propositions are fixed or predetermined. Rather, those values in computability-logic-based planning systems are viewed as something that interacting agents may be able to manage. That is, predicates or propositions there stand for tasks rather than facts. For example, Pregnant(Dana) --- or, perhaps, Impregnate(Dana) instead --- can be seen as having no predetermined truth value, with Dana or her husband being in control of whether to make it true or not. And the nonelementary formula ⊓xHit(x) describes the task of hitting any one target x selected by the environment/commander/user. Note how naturally resource-consciousness arises here: while ⊓xHit(x) is a task accomplishable with one ballistic missile, the stronger task ⊓xHit(x) ∧ ⊓xHit(x) would require two missiles instead.

  21. 17.20 Fighting with monsters  All of the other operators of computability logic, too, have natural interpretations as operations on not only informational but also physical tasks, with → acting as a task reduction operation. To get a feel of this, let us look at the task
Give me a wooden stake ⊓ Give me a silver bullet → Destroy the vampire ⊓ Kill the werewolf.
This is a task accomplishable by an agent who has a mallet and a gun as well as sufficient time, energy and bravery to chase and eliminate any one (but not both) of the two monsters, and only needs a wooden stake and/or a silver bullet to complete his noble mission. Then the story told by the legal run ⟨1.1, 0.1⟩ of the above game is that the environment asked the agent to kill the werewolf, to which the agent replied by the counterrequest to give him a silver bullet. The task will be considered eventually accomplished by the agent (the game won) iff he indeed killed the werewolf, as long as a silver bullet was indeed given to him.

  22. 17.21 Planning problems  A planning problem would usually have the form R → G, where R represents the available (physical or informational) resources and G is the goal task. Finding a solution for such a problem would mean finding a uniform solution for R → G. In turn, finding a uniform solution for R → G would mean nothing but finding a proof of it in the underlying uniform-constructive axiomatization of the logic. As we already know, an actual solution for the problem (a winning strategy) can then be automatically obtained from the proof. Below we consider a toy example of a planning problem and its solution using CL4. More serious applications, of course, would require more expressiveness than CL4 has, and probably even more expressiveness than the language of Episode 14 has. Adding sequential operators to that language would greatly improve the applicability of computability logic in systems for planning and action. There are several (sorts of) antifreeze coolants available to us, and our goal is to fill the radiator of the car with a coolant. Let the universe of discourse be the set of all coolants, and let a,b be some two constants from that universe.

  23. 17.22 Toy example of a planning problem
Goal: Fill the radiator with a safe sort of coolant:
(0) ∃x(Safe(x) ∧ Fill(x))
And assume these are our informational and physical resources for achieving the goal.
Resource/knowledge base --- what we know:
(1) Coolant is safe iff it does not contain acid: ∀x(Safe(x)↔¬Acid(x))
(2) At least one of the coolants a,b is safe: Safe(a) ∨ Safe(b)
What we have or can do:
(3) A piece of litmus paper: ⊓x(Acid(x) ⊔ ¬Acid(x))
(4) Fill the radiator with any one coolant: ⊓xFill(x)
This planning problem can be successfully solved without any additional/external informational or physical resources because (and only because) the formula (1)∧(2)∧(3)∧(4) → (0) is uniformly valid. The following slide shows a strategy.

  24. 17.23 Solving the problem
1. ∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ ⊓x(Acid(x)⊔¬Acid(x)) ∧ ⊓xFill(x) → ∃x(Safe(x)∧Fill(x))
Use the litmus paper to find out whether coolant a contains acid (move 0.2.a).
2. ∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ (Acid(a)⊔¬Acid(a)) ∧ ⊓xFill(x) → ∃x(Safe(x)∧Fill(x))
Observe the result and, depending on it (move 0.2.0 or 0.2.1), go to step 3.a.1 or 3.b.1.
3.a.1. ∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ Acid(a) ∧ ⊓xFill(x) → ∃x(Safe(x)∧Fill(x))
Fill the radiator with b (move 0.3.b).
3.a.2. ∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ Acid(a) ∧ Fill(b) → ∃x(Safe(x)∧Fill(x))
Wash your hands.
3.b.1. ∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ ¬Acid(a) ∧ ⊓xFill(x) → ∃x(Safe(x)∧Fill(x))
Fill the radiator with a (move 0.3.a).
3.b.2. ∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ ¬Acid(a) ∧ Fill(a) → ∃x(Safe(x)∧Fill(x))
Wash your hands.
As this example illustrates, the notorious frame problem and knowledge preconditions problem stay out of the picture in computability-logic-based planning systems.
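
The strategy above is straightforward to phrase as a program. Here is a sketch (mine, with the litmus test and the filling action stubbed out and invented data) that follows steps 1-3 literally:

```python
from typing import Callable

def fill_radiator_strategy(litmus_says_acid: Callable[[str], bool],
                           fill: Callable[[str], None]) -> str:
    # Steps 1-2: use the litmus paper resource on coolant "a" (move 0.2.a)
    # and observe the provider's answer (move 0.2.0 or 0.2.1).
    if litmus_says_acid("a"):
        # Step 3.a: a is acidic, hence unsafe; by Safe(a) ∨ Safe(b), b is safe.
        fill("b")                      # move 0.3.b
        return "b"
    else:
        # Step 3.b: a is not acidic, hence safe.
        fill("a")                      # move 0.3.a
        return "a"

# Stub resources for a dry run (made-up data: coolant "a" happens to be acidic).
chosen = fill_radiator_strategy(lambda x: x == "a",
                                lambda x: print(f"filling radiator with coolant {x}"))
print("radiator filled with a safe coolant:", chosen)
```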

  25. 17.24 Planning and acting with CL4  Had our direct and ad hoc attempt to find a solution failed (which would probably be the case if the example were more complex than it is), a radiator-filling strategy could have been effectively extracted from a CL4-proof of
∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ ⊓x(Acid(x)⊔¬Acid(x)) ∧ ⊓xFill(x) → ∃x(Safe(x)∧Fill(x)),
such as the proof given below. This is guaranteed by the uniform-constructive soundness of CL4.
1. ∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ ¬Acid(a) ∧ Fill(a) → ∃x(Safe(x)∧Fill(x))   from {} by Rule A
2. ∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ ¬Acid(a) ∧ ⊓xFill(x) → ∃x(Safe(x)∧Fill(x))   from 1 by Rule B2
3. ∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ Acid(a) ∧ Fill(b) → ∃x(Safe(x)∧Fill(x))   from {} by Rule A
4. ∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ Acid(a) ∧ ⊓xFill(x) → ∃x(Safe(x)∧Fill(x))   from 3 by Rule B2
5. ∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ (Acid(a)⊔¬Acid(a)) ∧ ⊓xFill(x) → ∃x(Safe(x)∧Fill(x))   from {2,4} by Rule A
6. ∀x(Safe(x)↔¬Acid(x)) ∧ (Safe(a)∨Safe(b)) ∧ ⊓x(Acid(x)⊔¬Acid(x)) ∧ ⊓xFill(x) → ∃x(Safe(x)∧Fill(x))   from 5 by Rule B2

  26. 17.25 Applied theories based on computability logic  The fact that computability logic is a conservative extension of classical logic also makes the former a reasonable and appealing alternative to the latter in its most traditional and unchallenged application areas. In particular, it makes perfect sense to base applied theories --- such as, say, Peano Arithmetic (formal number theory) PA --- on computability logic instead of classical logic. Due to conservativeness, no old information would be lost or weakened this way. On the other hand, we would get theories that are by an order of magnitude more expressive, constructive and computationally meaningful than their classical-logic-based versions. One way to construct such a theory would be to take all formulas provable in an underlying uniform-constructively sound axiomatization of computability logic (such as CL4, affine logic or intuitionistic logic) as logical axioms, and Modus Ponens plus perhaps the four other uniform-constructive closure rules from Slide 14.9 as logical rules of inference. These logical axioms and rules would be common to all computability-logic-based applied theories. In addition, each theory would have its own non-logical axioms (and maybe also non-logical rules, but we do not discuss this possibility here for the sake of simplicity).

  27. 17.26 Each non-logical axiom of a computability-logic-based theory T would be required to be (under the interpretation fixed for the theory) a computable problem and to come with a fixed algorithmic solution. Is not this exactly what the constructivists have been calling for?! Then the uniform-constructive soundness of the underlying logic and the uniform-constructive character of the rules would guarantee that every formula F provable in T is also a computable problem and, furthermore, that an algorithmic solution for F can be automatically obtained from a proof of F in T. For example, while the provability of ∀x∃yp(x,y) in classical-logic-based theories merely signifies that for every x a y exists such that p(x,y) is true, the provability of the constructive version ⊓x⊔yp(x,y) of that formula in a computability-logic-based theory would mean that, for every x, a y with p(x,y) not only exists, but can also be algorithmically found. Furthermore, an algorithm computing y from x can itself be constructed effectively from the proof of ⊓x⊔yp(x,y). Computability-logic-based applied theories would thus be not only cognitive, but also problem-solving tools: in order to find a solution for a given problem, it would be sufficient to express the problem in the language of such a theory and then find a proof of it (on your own or using a theorem-prover). The rest --- specifically, constructing a solution for the problem or actually solving it --- would be automatically taken care of!!!
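
To make the contrast concrete, here is a small sketch of mine (the predicate p is chosen arbitrarily as "y is a prime greater than x"; it is not an example from the slides): classical provability of ∀x∃yp(x,y) only tells us such a y exists, while a proof of ⊓x⊔yp(x,y) in a CoL-based theory would correspond to an actual program like the one below, computing a witness y from x.

```python
# Hypothetical instance: p(x, y) = "y is prime and y > x" (Euclid guarantees ∃y).
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def witness(x: int) -> int:
    # The kind of algorithm a proof of ⊓x⊔y p(x,y) would yield:
    # given x, actually produce a y with p(x, y).
    y = x + 1
    while not is_prime(y):
        y += 1
    return y

print(witness(10))   # -> 11
print(witness(13))   # -> 17
```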

  28. 17.27 Computability-logic-based arithmetic  As an example, let us see what a computability-logic-based version of formal arithmetic might look like.
Language: All operators of the full language of Episode 14. No general letters. Only one constant: 0. Only four elementary letters: E, S, A, M.
Interpretation: E(x,y) means "x=y"; S(x,y) means "x=y+1"; A(x,y,z) means "x=y+z"; M(x,y,z) means "x=y×z".
Logical axioms: All formulas provable in CL4, affine logic or intuitionistic logic.
Inference rules: Modus Ponens (from F and F→G conclude G) and the closure rules taking F(x) to ∀xF(x) and to ⊓xF(x).
Non-logical axioms: The ∀-closures of the following formulas:
P0: ⊓x⊔yS(y,x)
P1: E(x,y) ∧ E(y,z) → E(x,z)
P2: E(x,y) ∧ S(x',x) ∧ S(y',y) → E(x',y')
P3: S(y,x) → ¬E(0,y)
P4: S(x',x) ∧ S(y',y) ∧ E(x',y') → E(x,y)
P5: A(x,x,0)
P6: A(z,x,y) ∧ S(z',z) ∧ S(y',y) ∧ A(t,x,y') → E(t,z')
P7: M(0,x,0)
P8: M(z,x,y) ∧ A(t,z,x) ∧ S(y',y) ∧ M(u,x,y') → E(t,u)
P9: Φ(0) ∧ ⊓x⊓y(S(y,x) ∧ Φ(x) → Φ(y)) → ⊓xΦ(x)   (for any formula Φ(x))
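
As a last illustration (mine, not part of the slides), the intended interpretation of the four elementary letters and the algorithmic content of axiom P0 are easy to spell out in code: E, S, A, M become decidable predicates on natural numbers, and the solution fixed for P0 is just the successor function.

```python
# Intended interpretation of the elementary letters of CoL-based arithmetic.
def E(x: int, y: int) -> bool:           # x = y
    return x == y

def S(x: int, y: int) -> bool:           # x = y + 1
    return x == y + 1

def A(x: int, y: int, z: int) -> bool:   # x = y + z
    return x == y + z

def M(x: int, y: int, z: int) -> bool:   # x = y × z
    return x == y * z

# Algorithmic solution for axiom P0: ⊓x⊔y S(y, x).
# Given any x chosen by the environment, answer with y = x + 1.
def p0_strategy(x: int) -> int:
    y = x + 1
    assert S(y, x)
    return y

print(p0_strategy(7))   # -> 8
```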
