
Optimisation of Declarative Programs Lecture 1: Introduction

Michael Leuschel

Declarative Systems & Software Engineering

Dept. Electronics & Computer Science

University of Southampton

http://www.ecs.soton.ac.uk/~mal

+ Morten Heine Sørensen

DIKU, Copenhagen


Lecture 1

1. Overview

2. PE: 1st steps

3. Why PE

4. Issues in PE

Lecture 1

  • Explain what these lectures are about:

    • Program Optimisation in general

    • Partial Evaluation in particular

    • Program Transformation, Analysis, and Specialisation

  • Explain why you should attend the lectures:

    • Why are the topics interesting?

    • Why are they useful?

  • Overview of the whole course



1. Overview

2. PE: 1st steps

3. Why PE

4. Issues in PE

Part 1: Program Optimisation and Partial Evaluation: A First Overview


(Automatic) Program Optimisation

  • When:

    • Source-to-source

    • during compilation

    • at link-time

    • at run-time

  • Why:

    • Make existing programs faster (speedups from 10 % up to ×500 and more)

    • Enable a safer, high-level programming style

    • (Program verification,…)


Program Optimisation II

  • What:

    • constant propagation, inlining, dead-code elimination, eliminating redundant computations, changing the algorithm or data structures, …

  • Similar to highly optimising compilers, but:

    • much more aggressive,

    • (much) greater speedups and

    • much more difficult to control

  • How?


Program Transformation: Unfold/fold

[Diagram: Initial Program P0 → (unfold/fold step) → Program P1 → (unfold/fold step) → … → Final Program Pn]

Under some conditions:

  • Same semantics

  • Same termination properties

  • Better performance


Program Specialisation

[Diagram: program P is transformed into a specialised program P’]

  • What:

    • Specialise a program for aparticular application domain

  • How:

    • Partial evaluation

    • Program transformation

    • Type specialisation

    • Slicing



Overview

[Diagram: Partial Evaluation lies within Program Transformation and Program Specialisation, which in turn lie within Program Optimisation]


Program Analysis

  • What:

    • Find out interesting properties of your programs:

      • Values produced, dependencies, usage, termination, ...

  • Why:

    • Verification, debugging

      • Type checking

      • Check assertions

    • Optimisation:

      • Guide the compiler (compile-time GC, …)

      • Guide source-to-source optimiser


Program Analysis II

  • How:

    • Rice’s theorem: all non-trivial properties are undecidable!

    • ⇒ use approximations or sufficient conditions

  • Techniques:

    • Ad hoc

    • Abstract Interpretation [Cousot’77]

    • Type Inference


Abstract Interpretation

[Diagram: the concrete domain … -2 -1 0 1 2 3 … is abstracted to the abstract domain {N-, 0, N+}]

  • Principle:

    • Abstract domain

    • Abstract operations:

      • safe approximation of concrete operations

  • Result:

    • Correct

    • Termination can be guaranteed

    • ⇒ Decidable approximation of undecidable properties

N+ + N+ = N+

N+ + 0 = N+

...
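The abstract operations above can be sketched in a few lines of Python. This is a minimal illustration, not from the slides: the domain is {'N-', '0', 'N+'}, and a top element 'T' ("sign unknown") is my own assumption to make the approximation safe when the signs disagree.

```python
def alpha(n):
    """Abstraction function: map a concrete integer to its sign."""
    return '0' if n == 0 else ('N+' if n > 0 else 'N-')

def abs_add(a, b):
    """Safe approximation of + on signs: the result covers every
    concrete sum whose operands have the given signs."""
    if a == '0':
        return b
    if b == '0':
        return a
    return a if a == b else 'T'   # N+ + N- could have any sign
```

For example, abs_add('N+', 'N+') is 'N+' and abs_add('N+', '0') is 'N+', matching the two equations above, while abs_add('N+', 'N-') must go to top.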



1. Overview

2. PE: 1st steps

3. Why PE

4. Issues in PE

Part 2: A Closer Look at Partial Evaluation: First Steps


A closer look at PE

  • Partial Evaluation versus Full Evaluation

  • Why PE

  • Issues in PE: Correctness, Precision, Termination

  • Self-Application and Compilation


Full Evaluation

  • Full input

  • Computes full output

function power(b,e) is
  if e = 0 then 1
  else b*power(b,e-1)
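The toy function can be sketched in Python (a stand-in for the slides' small functional language):

```python
def power(b, e):
    # full evaluation: both the base b and the exponent e are known
    return 1 if e == 0 else b * power(b, e - 1)
```

With full input, e.g. power(2, 10), the whole computation runs to a concrete result.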


Partial Evaluation

  • Only part of the input

  • ⇒ produce part of the output?

    power(?,2)

  • ⇒ evaluate as much as you can

  • ⇒ produce a specialized program

    power_2(?)


PE: A first attempt

function power(b,e) is
  if e = 0 then 1
  else b*power(b,e-1)

  • Small (functional) language:

    • constants (N), variables (a,b,c,…)

    • arithmetic expressions *,/,+,-, =

    • if-then-else, function definitions

  • Basic Operations:

    • Evaluate arithmetic operation if all arguments are known

    • 2+3 → 5   5=4 → false   x+(2+3) → x+5

    • Replace if-then-else by then-part (resp. else-part) if test-part is known to be true (resp. false)
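These two basic operations can be sketched as a one-pass simplifier over expression trees. The encoding is my own assumption for illustration: tuples ('op', args…) for compound expressions, plain ints for constants, strings for variables.

```python
def pe(expr):
    """Partially evaluate an expression: fold arithmetic whose
    arguments are all known, and prune if-then-else on a known test."""
    if not isinstance(expr, tuple):
        return expr                      # constant or variable: nothing to do
    op, *args = expr
    args = [pe(a) for a in args]
    if op in ('+', '-', '*', '=') and all(isinstance(a, int) for a in args):
        a, b = args
        return {'+': a + b, '-': a - b, '*': a * b, '=': a == b}[op]
    if op == 'if' and isinstance(args[0], bool):
        return args[1] if args[0] else args[2]   # test known: keep one branch
    return (op, *args)                   # residual expression
```

So pe(('+', 'x', ('+', 2, 3))) folds the inner sum but leaves the unknown variable x in the residual expression.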


Example: power

  • power(?,2)

function power(b,2) is
  if e = 0 then 1
  else b*power(b,e-1)

function power(b,2) is
  if 2 = 0 then 1
  else b*power(b,e-1)

function power(b,2) is
  if false then 1
  else b*power(b,2-1)

Residual code:

function power_2(b) is
  b*power_1(b)

  • power(?,1)

function power(b,1) is
  if e = 0 then 1
  else b*power(b,e-1)

function power(b,1) is
  if false then 1
  else b*power(b,1-1)

function power_1(b) is
  b*power_0(b)


Example: power (cont’d)

  • power(?,0)

function power(b,0) is
  if e = 0 then 1
  else b*power(b,e-1)

function power(b,0) is
  if 0 = 0 then 1
  else b*power(b,e-1)

Residual code:

function power_0(b) is
  1


Example: power (cont’d)

  • Residual code:

function power_2(b) is b*power_1(b)

function power_1(b) is b*power_0(b)

function power_0(b) is 1

  • What we really, really want:

function power_2(b) is b*b


Extra Operation: Unfolding

≈ evaluating the function call operation

  • Replace call by definition

function power(b,2) is
  if 2 = 0 then 1
  else b*power(b,1)

function power(b,2) is
  if 2 = 0 then 1
  else b*(if 1 = 0 then 1
          else b*power(b,0))

function power(b,2) is
  if 2 = 0 then 1
  else b*(if 1 = 0 then 1
          else b*(if 0 = 0 then 1
                  else b*power(b,-1)))

Residual code:

function power_2(b) is

b*b*1

≈ SQR function
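The unfolding above can be mimicked in Python by staging: the recursion on the known exponent is unrolled at specialisation time, leaving a residual function of b alone. This is only a sketch of the idea; the leftover *1 in the computation corresponds to the slide's b*b*1, which a real partial evaluator would also simplify away.

```python
def specialise_power(e):
    """Unfold power(?, e) on the known exponent e,
    returning a residual function of the base only."""
    if e == 0:
        return lambda b: 1
    rest = specialise_power(e - 1)      # unfold the recursive call
    return lambda b: b * rest(b)        # residual: b * <unfolded rest>

power_2 = specialise_power(2)           # behaves like b*b*1, i.e. SQR
```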


PE: Not always beneficial

  • power(3,?)

  • Residual code:

function power(3,e) is
  if e = 0 then 1
  else 3*power(3,e-1)

function power_3(e) is
  if e = 0 then 1
  else 3*power_3(e-1)

(the structure of the original power(b,e) is retained: no speedup)



1. Overview

2. PE: 1st steps

3. Why PE

4. Issues in PE

Part 3: Why Partial Evaluation (and Program Optimisation)?


Constant Propagation

  • Static values in programs

function my_program(x) is
  y := 2*power(x,3)

⇒ (captures inlining and more)

function my_program(x) is
  y := 2*x*x*x


Abstract Datatypes / Modules

  • Get rid of inherent overhead by PE

function my_program(x) is
  if not empty_tree(x) then
    y := get_value(x)…

⇒ (+ get rid of redundant run-time tests)

function my_program(x) is
  if x != null then
    if x != null then
      y := x->val …


Higher-Order/Reusing Generic Code

  • Get rid of overhead ⇒ encourage usage

function my_program(x,y,z) is
  r := reduce(*,1,map(inc,[x,y,z]))

function my_program(x,y,z) is
  r := (x+1)*(y+1)*(z+1)


Higher-Order II

  • Can generate new functions/procedures

function my_program(x) is
  r := reduce(*,1,map(inc,x))

function my_program(x) is
  r := red_map_1(x)

function red_map_1(x) is
  if x = nil then return 1
  else return x->head * red_map_1(x->tail)


Staged Input
Staged Input

  • Input does not arrive all at once

5

2

2.1

my_program (x , y , z)

42


Examples of staged input

  • Ray tracing: calculate_view(Scene,Lights,Viewpoint)

  • Interpretation: interpreter(ObjectProgram,Call)

  • prove_theorem(FOL-Theory,Theorem)

  • check_integrity(Db_rules,Update)

  • schedule_crews(Rules,Facts)

  • Speedups

    • 2: you get your money back

    • 10: quite typical for interpretation overhead

    • 100-500 (and even ∞): possible


Ray Tracing

[Diagram: a ray-traced scene; the Scene argument is static]


Why go beyond partial evaluation?

Improve algorithms (not necessarily PS):

Tupling

Deforestation

Superlinear speedups

Exponential → linear algorithms

Non-executable → executable programs

Software Verification, Inversion, Model Checking



Tupling

Corresponds to loop-fusion




1. Overview

2. PE: 1st steps

3. Why PE

4. Issues in PE

Part 4: Issues in Partial Evaluation and Program Optimisation

Efficiency/Precision

Correctness

Termination

Self-application


Correctness

  • Language

    • Power/Elegance: unfolding LP easy, FP ok, C++ aargh

    • Modularity: global variables, pointers

  • Semantics

    • Purely logical

    • Somewhat operational (termination,…)

    • Purely operational

    • Informal/compiler

admissive ↔ restrictive


Programming Language for Course

  • Lectures use mainly LP and FP

    • Don’t distract from essential issues

    • Nice programming paradigm

    • A lot of room for speedups

    • Techniques can be useful for other languages, but

      • Correctness will be more difficult to establish

      • Extra analyses might be required (aliasing, flow analysis, …)



x := 2+y; p(x)   with y=5   ⇒   p(7)

Efficiency/Precision

  • Unfold enough but not too much

    • Enough to propagate partial information

    • But still ensure termination

    • Binding-time Analysis (for offline systems):

      • Which expressions will be definitely known

      • Which statements can be executed

  • Allow enough polyvariance but not too much

    • Code explosion problem

    • Characteristic trees

p(7) p(9) p(11) p(13) ...


Termination

  • Who cares?

  • PE should terminate when the program does

  • PE should always terminate

  • … and within reasonable time bounds

State of the art


Self-Application

  • Principle:

    • Write a partial evaluator (program specialiser,…) which can specialise itself

  • Why:

    • Ensures non-trivial partial evaluator

    • Ensures non-trivial language

    • Optimise specialisation process

    • Automatic compiler generation!


Compiling by PE

[Diagram: the Interpreter run on Object Program P maps Call1, Call2, Call3 to Result1, Result2, Result3; partially evaluating the Interpreter w.r.t. P yields a specialised P-Interpreter that maps the same calls to the same results directly]
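The idea can be sketched with a toy interpreter in Python. Everything here is invented for illustration (the two-instruction language, the names): functools.partial merely fixes the static program argument, whereas real partial evaluation would additionally unfold the interpretation loop away.

```python
from functools import partial

def interpreter(program, x):
    """Toy interpreter: a program is a list of ('add', k) / ('mul', k)
    instructions applied in sequence to the input x."""
    for op, k in program:
        x = x + k if op == 'add' else x * k
    return x

double_then_inc = [('mul', 2), ('add', 1)]

# "Compiling" by specialisation: fix the (static) program argument,
# leaving a residual function of the runtime input alone.
p_interpreter = partial(interpreter, double_then_inc)
```

Both interpreter(double_then_inc, x) and p_interpreter(x) compute 2x+1; only the staging of the program argument differs.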


Compiler Generation by PE

[Diagram: specialising PE w.r.t. the (static) Interpreter I yields I-PE, i.e. a compiler! Applying it to Object Programs P, Q, R yields the P-, Q- and R-Interpreters. Useful for: language extensions, debugging, DSLs, ...]


Cogen Generation by PE

[Diagram: specialising PE w.r.t. PE itself yields PE-PE, i.e. a compiler generator (the 3rd Futamura projection)! Applied to the (static) Interpreters P, Q, R it yields the P-, Q- and R-Compilers]


Part 5: Overview of the Course (remainder)


Overview of the Course: Lecture 2

  • Partial Evaluation of Logic Programs: First Steps

    • Topics

      • A short (re-)introduction to Logic Programming

      • How to do partial evaluation of logic programs (called partial deduction): first steps

    • Outcome:

      • Enough knowledge of LP to follow the rest

      • Concrete idea of PE for one particular language


Overview of the Course: Lecture 3

  • (Online) Partial Deduction: Foundations, Algorithms and Experiments

    • Topics

      • Correctness criterion and results

      • Control Issues: Local vs. Global; Online vs. Offline

      • Local control solutions: Termination

      • Global control: Ch. trees, WQO's at the global level

      • Relevance for other languages and applications

      • Full demo of Ecce system

    • Outcome:

      • How to control program specialisers (analysers, ...)

      • How to prove them correct


Overview of the Course: Lecture 4(?)

  • Extending Partial Deduction:

    • Topics

      • Unfold/Fold vs PD

      • Going from PD to Conjunctive PD

      • Control issues, Demo of Ecce

      • Abstract Interpretation vs Conj. PD (and Unfold/Fold)

      • Integrating Abstract Interpretation

      • Applications (+ Demo): Infinite Model Checking

    • Outcome

      • How to do very advanced/fancy optimisations


Overview of the Course: Other Lectures

  • By Morten Heine Sørensen

  • Context:

    • Functional Programming Languages

  • Topics:

    • Supercompilation

    • Unfold/fold

    • ...


Summary of Lecture 1: What to Know for the Exam

  • Terminology:

    • Program Optimisation, Transformation, Analysis, Specialisation, PE

  • Basics of PE and Optimisation in general

  • Uses of PE and Program Optimisation

    • Common compiler optimisations

    • Enabling high-level programming

    • Staged input, optimising existing code

    • Self-application and compiler generation



Optimisation of Declarative Programs Lecture 2

Michael Leuschel

Declarative Systems & Software Engineering

Dept. Electronics & Computer Science

University of Southampton

http://www.ecs.soton.ac.uk/~mal



1. LP Refresher

2. PE of LP: 1st steps

Overview of Lecture 2:

  • Recap of LP

  • First steps in PE of LP

    • Why is PE non-trivial?

    • Generating Code: Resultants

    • Correctness

      • Independence, Non-triviality, Closedness

      • Results

    • Filtering

    • Small demo of the ecce system



1. LP Refresher

2. PE of LP: 1st steps

Part 1: Brief Refresher on Logic Programming and Prolog


Brief History of Logic Programming

  • Invented in 1972

    • Colmerauer (implementation, Prolog = PROgrammation en LOGique)

    • Kowalski (theory)

  • Insight: subset of First-Order Logic

    • efficient, procedural interpretation based on resolution (Robinson 1965 + Herbrand 1931)

    • Program = theory

    • Computation = logical inference


Features of Logic Programming

  • Declarative Language

    • State what is to be computed, not how

  • Uniform Language to express and reason about

    • Programs, specifications

    • Databases, queries,…

  • Simple, clear semantics

    • Reasoning about correctness is possible

    • Enable powerful optimisations


Logical Foundation: SLD-Resolution

  • Selection-rule driven Linear Resolution for Definite Clauses

  • Selection-rule

    • In a denial: selects a literal to be resolved

  • Linear Resolution

    • Derive new denial, forget old denial

  • Definite Clauses:

    • Limited to (Definite) Program Clauses

    • (Normal clause: negation allowed in body)


0. Propositional Logic

  • propositions (basic units: either true or false)

  • logical connectives (∧ ∨ ¬ →)

  • no variables

    rains
    rains → carry_umbrella
    p ∨ ¬p
    rains ∧ ¬rains

1. First-Order Logic

  • constants, functions, variables (tom, mother(tom), X)

  • relations (human(·), married(·,·))

  • quantifiers (∀ ∃)

  • variables over objects

    human(sokrates)
    ∀X. human(X) → mortal(X)

2. Higher-Order Logic

  • variables over objects and relations

    ∃R. R(tom) ∧ human(tom)


FO Resolution Principle: Example

1.) ∀X. umbrella(X) ← rains(X)
2.) rains(london)

1. select literals
2. no renaming required
3. mgu θ = {X/london}
4. apply θ:
   1’.) umbrella(london) ← rains(london)
   2’.) rains(london)
5. resolution:
   1’+2’.) umbrella(london)
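The mgu computed in step 3 can be sketched as syntactic unification in Python. The term encoding is a hypothetical one chosen for illustration: strings starting with an upper-case letter are variables, tuples ('functor', args…) are compound terms; the occurs check is omitted, as in most Prolog systems.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    # follow variable bindings already recorded in substitution s
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Return an mgu (as a dict) extending s, or None if a, b do not unify."""
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        s[a] = b
        return s
    if is_var(b):
        s[b] = a
        return s
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and len(a) == len(b) and a[0] == b[0]):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None
```

For the example above, unify(('rains', 'X'), ('rains', 'london')) yields {'X': 'london'}, i.e. the mgu θ = {X/london}.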


Terminology

  • SLD-Derivation for P ∪ {← G}

    • Sequence of goals and mgu’s; at every step:

      • Apply the selection rule to the current goal (very first goal: ← G)

      • Unify the selected literal with the head of a program clause

      • Derive the next goal by resolution

  • SLD-Refutation for P ∪ {← G}

    • SLD-Derivation ending in the empty goal □

  • Computed answer for P ∪ {← G}

    • Composition of the mgu’s of an SLD-refutation, restricted to the variables in G


SLD-Derivation

1) knows_logic(X) ← good_student(X) ∧ teacher(Y,X) ∧ logician(Y)

2) good_student(tom)

3) logician(peter)

4) teacher(peter,tom)

G = ← knows_logic(Z)
   {Z/X}
← good_student(X) ∧ teacher(Y,X) ∧ logician(Y)
   {X/tom}
← teacher(Y,tom) ∧ logician(Y)
   {Y/peter}
← logician(peter)
   {}
□

Computed answer = {Z/tom}

P ⊨ knows_logic(tom) ??


Backtracking

1) knows_logic(X) ← good_student(X) ∧ teacher(Y,X) ∧ logician(Y)

2) good_student(jane)

3) good_student(tom)

4) logician(peter)

5) teacher(peter,tom)

← knows_logic(Z)

← good_student(X) ∧ teacher(Y,X) ∧ logician(Y)

← teacher(Y,jane) ∧ logician(Y)   (fails)

Backtracking required! Try to resolve with another clause:

← teacher(Y,tom) ∧ logician(Y)

← logician(peter)


SLD-Trees

[SLD-tree for ← knows_logic(Z): after the step to ← good_student(X) ∧ teacher(Y,X) ∧ logician(Y), the branch via X=jane (← teacher(Y,jane) ∧ logician(Y)) fails, while the branch via X=tom continues through ← teacher(Y,tom) ∧ logician(Y) and ← logician(peter) to success]

Prolog explores this tree depth-first and always selects the leftmost literal.


SLD-Trees II

Branches of the SLD-tree = SLD-derivations

Types of branches:

  • refutation (successful), e.g. ← app([],[a],[a])

  • finitely failed, e.g. ← app([],[a],[b])

  • infinite (loops forever)


Some info about Prolog

[Diagram: depth-first traversal — if a subgoal succeeds, move right; if a subgoal fails, backtrack to the left; the query succeeds or fails accordingly]

  • Depth-first traversal of SLD-tree

  • LD-resolution

    • Always select leftmost literal in a denial


Prolog’s Syntax

  • Facts

    human(socrates).

  • Rules (clauses)

    carry_umbrella :- rains,no_hat.

  • Variables

    mortal(X) :- human(X).

    exists_human :- human(Any).

  • Queries

    ?- mortal(Z).

:-  … implication (←)

,   … conjunction (∧)

Any … existential variable (∃)

X   … universal variable (∀)


Prolog’s Datastructures

  • Constants (lower-case):

    human(socrates).

  • Integers, Reals:

    fib(0,1).

    pi(3.141).

  • (Compound) Terms:

    head_of_list(cons(X,T),X).

    tail_of_list(cons(X,T),T).

    ?- head_of_list(cons(1,cons(2,nil)),R).

(cons is the functor; Prolog actually uses . (dot) for lists)


Lists in Prolog

  • Empty list:

    • [] (syntactic sugar for nil )

  • Non-Empty list with first element H and tail T:

    • [H|T] (syntactic sugar for .(H,T) )

  • Fixed-length list:

    • [a] (syntactic sugar for [a|[]] = .(a,nil))

    • [a,b,c] (syntactic sugar for [a | [b | [c | [] ] ] ] )



1. LP Refresher

2. PE of LP: 1st steps

Part 2: Partial Evaluation of Logic Programs: First Steps


What is Full Evaluation in LP

In our context: full evaluation = constructing the complete SLD-tree for a goal

⇒ every branch is either successful, failed or infinite

SLD can handle variables!

⇒ can handle partial data-structures (not the case for imperative or functional programming)

⇒ Unfolding = ordinary SLD-resolution!

⇒ Partial evaluation = full evaluation??

⇒ Partial evaluation trivial??


Example

app([],L,L).
app([H|X],Y,[H|Z]) :- app(X,Y,Z).

← app([a],B,C)
   {C/[a|C’]}
← app([],B,C’)
   {C’/B}
□

Apply the c.a.s. on the initial goal (+ filter out the static part):

app([a],B,[a|B]).
app_a(B,[a|B]).


Example: Runtime Query

app([],L,L).
app([H|X],Y,[H|Z]) :- app(X,Y,Z).

With the original program:

← app([a],[b],C)
   {C/[a|C’]}
← app([],[b],C’)
   {C’/[b]}
□   c.a.s.: {C/[a,b]}

With the specialised clauses app([a],B,[a|B]). resp. app_a(B,[a|B]).:

← app([a],[b],C)   resp.   ← app_a([b],C)
   {C/[a,b]}
□   c.a.s.: {C/[a,b]}


Example II

app([],L,L).
app([H|X],Y,[H|Z]) :- app(X,Y,Z).

← app(A,B,[a])
   {A/[], B/[a]} → □
   {A/[a|A’]} → ← app(A’,B,[]) → {A’/[]} → □

Apply the c.a.s. on the initial goal (+ filter out the static part):

app([],[a],[a]).
app([a],[],[a]).

app_a([],[a]).
app_a([a],[]).


Example III

app([],L,L).
app([H|X],Y,[H|Z]) :- app(X,Y,Z).

← app(A,[a],B)
   {A/[], B/[a]} → □
   {A/[H|A’], B/[H|B’]} → ← app(A’,[a],B’)
      {A’/[], B’/[a]} → □
      {A’/[H|A’’], B’/[H|B’’]} → ← app(A’’,[a],B’’) → ...

BUT: the complete SLD-tree is usually infinite!!


Partial Deduction

Basic Principle: instead of building one complete SLD-tree, build a finite number of finite “SLD-trees”!

  • SLD-trees can be incomplete

  • 4 types of derivations in SLD-trees:

    • successful, failed, infinite

    • incomplete: no literal selected


Example (revisited)

← app(A,[a],B)
   {A/[], B/[a]} → □
   {A/[H|A’], B/[H|B’]} → ← app(A’,[a],B’)   STOP

app([],L,L).

app([H|X],Y,[H|Z]) :- app(X,Y,Z).

app_a([],[a]).

app_a([H|A’],[H|B’]) :- app_a(A’,B’).


Example (revisited)

[Diagram: the infinite complete SLD-tree for ← app(A,[a],B) — with ever-deeper calls app(A’,[a],B’), app(A’’,[a],B’’), app(A’’’,[a],B’’’), ... — is “covered” by the two finite trees built for app(A,[a],B) and app(A’,[a],B’)]


Main issues in PD (& PE in general)

  • How to construct the specialised code?

  • When to stop unfolding (building SLD-trees)?

  • Construct SLD-trees for which goals?

  • When is the result correct?

⇒ Will be addressed in this Lecture!


Generating Code: Resultants

Given an SLD-derivation

  G0 = ← A1,…,Ai,…,An  →θ1  G1  →θ2  G2  → …  →θk  Gk

the resultant of the derivation is the formula:

  (A1,…,An)θ1…θk ← Gk

(if n = 1: a Horn clause!)


Resultants: Example

← app(A,[a],B)
   {A/[], B/[a]} → □
   {A/[H|A’], B/[H|B’]} → ← app(A’,[a],B’)   STOP

app([],L,L).

app([H|X],Y,[H|Z]) :- app(X,Y,Z).

app([],[a],[a]).

app([H|A’],[a],[H|B’]) :- app(A’,[a],B’).


Formalisation of Partial Deduction

  • Given a set S = {A1,…,An} of atoms:

  • Build a finite, incomplete SLD-tree for each ← Ai

  • For every non-failing branch:

    • generate 1 specialised clause by computing the resultant

  • When is this correct?


Correctness 1: Non-trivial trees

  • Trivial tree: ← app(A,[a],B), no derivation step performed

  • Resultant: app(A,[a],B) ← app(A,[a],B)

  • ⇒ loops!

  • Correctness condition: perform at least one derivation step


Correctness 2: Closedness

app([],L,L).
app([H|X],Y,[H|Z]) :- app(X,Y,Z).

← app([a|A],B,C)
   {C/[a|C’]}
← app(A,B,C’)   Stop

app([a|A],B,[a|C’]) :- app(A,B,C’).

At runtime:

← app([a,b],[],C)
← app([b],[],C’)   — not an instance of app([a|A],B,C)!
fail   (the answer C/[a,b] is lost)


Closedness Condition

To avoid uncovered calls at runtime:

  • All predicate calls

    • in the bodies of specialised clauses (= atoms in the leaves of the SLD-trees!)

    • in the calls to the specialised program

  • must be instances of at least one of the atoms in S = {A1,…,An}


Correctness 3: Independence

app([],L,L).
app([H|X],Y,[H|Z]) :- app(X,Y,Z).

S = {app([a|A],B,C), app(A,B,C)}:

← app([a|A],B,C)
   {C/[a|C’]}
← app(A,B,C’)

⇒ app([a|A],B,[a|C’]) :- app(A,B,C’).

← app(A,B,C)
   {A/[], B/C} → □
   {A/[H|A’], C/[H|C’]} → ← app(A’,B,C’)

⇒ app([],C,C).
⇒ app([H|A’],B,[H|C’]) :- app(A’,B,C’).

Closedness: Ok


Extra answers

app([a|A],B,[a|C’]) :- app(A,B,C’).
app([],C,C).
app([H|A’],B,[H|C’]) :- app(A’,B,C’).

← app([X],Y,Z)
   via clause 1: {X/a, Z/[a|Z’]} → ← app([],Y,Z’) → answer X = a, Z = [a|Y]
   via clause 3: {Z/[X|Z’]} → ← app([],Y,Z’) → answer Z = [X|Y]

⇒ Extra answer, and more instantiated!!


Independence Condition

To avoid extra and/or more instantiated answers:

  • require that no two atoms in S = {A1,…,An} have a common instance

  • S = {app ([a|A],B,C), app(A,B,C)}

    • common instances = app([a],X,Y), app([a,b],[],X), …

  • How to ensure independence?

    • Just renaming!


Renaming: No Extra Answers

Renamed program:

app_a([a|A],B,[a|C’]) :- app(A,B,C’).
app([],C,C).
app([H|A’],B,[H|C’]) :- app(A’,B,C’).

(instead of app([a|A],B,[a|C’]) :- app(A,B,C’). alongside the general clauses)

← app([X],Y,Z)
   {Z/[X|Z’]}
← app([],Y,Z’)
⇒ answer Z = [X|Y]

No extra answer! (the branch {X/a, Z/[a|Z’]} → ← app([],Y,Z’) no longer exists)


Soundness & Completeness

Let P’ be obtained by partial deduction from P.

If the non-triviality, closedness and independence conditions are satisfied:

  • P’ ∪ {← G} has an SLD-refutation with c.a.s. θ iff P ∪ {← G} has

  • P’ ∪ {← G} has a finitely failed SLD-tree iff P ∪ {← G} has

    [Lloyd, Shepherdson: J. Logic Progr. 1991]


A more detailed example

map(P,[],[]).
map(P,[H|T],[PH|PT]) :-
  C=..[P,H,PH],
  call(C), map(P,T,PT).

inv(0,1).
inv(1,0).

← map(inv,L,R)
   {L/[], R/[]} → □
   {L/[H|L’], R/[PH|R’]} →
← C=..[inv,H,PH], call(C), map(inv,L’,R’)
   {C/inv(H,PH)} →
← call(inv(H,PH)), map(inv,L’,R’)
← inv(H,PH), map(inv,L’,R’)
   {H/0, PH/1} → ← map(inv,L’,R’)
   {H/1, PH/0} → ← map(inv,L’,R’)

Overhead removed: ≈2× faster

map(inv,[],[]).

map(inv,[0|L’],[1|R’]) :-

map(inv,L’,R’).

map(inv,[1|L’],[0|R’]) :-

map(inv,L’,R’).


Filtering

[Same SLD-tree as on the previous slide, but the root map(inv,L,R) is renamed and filtered to map_1(L,R): the static argument inv is dropped]

 map (inv,L,R)

map(P,[],[]).

map(P,[H|T],[PH|PT]) :-

C=..[P,H,PH],

call(C),map(P,T,PT).

inv(0,1).

inv(1,0).

map_1([],[]).

map_1([0|L’],[1|R’]) :-

map_1(L’,R’).

map_1([1|L’],[0|R’]) :-

map_1(L’,R’).

map(inv,[],[]).

map(inv,[0|L’],[1|R’]) :-

map(inv,L’,R’).

map(inv,[1|L’],[0|R’]) :-

map(inv,L’,R’).


Renaming+Filtering: Correctness

  • Correctness results carry over

    • Benkerimi,Hill J. Logic & Comp 3(5), 1993

  • No worry about Independence

  • Non-triviality also easy to ensure

  • ⇒ the only worry that remains (for correctness): Closedness



Append

Higher-Order

Match

Interpreter

Small Demo (ecce system)


Summary of part 2

  • Partial evaluation in LP

  • Generating Code:

    • Resultants, Renaming, Filtering

  • Correctness conditions:

    • non-triviality

    • closedness

    • independence

  • Soundness & Completeness



Optimisation of Declarative Programs Lecture 3

Michael Leuschel

Declarative Systems & Software Engineering

Dept. Electronics & Computer Science

University of Southampton

http://www.ecs.soton.ac.uk/~mal



1. Local Control &

Termination

2. Global Control &

Abstraction

Part 1: Controlling Partial Deduction: Local Control & Termination


Issues in Control

[Diagram: a set A = {A1, A2, A3, A4, ...} of atoms; one SLD-tree is built per atom]

  • Correctness

    • ensure closedness: add uncovered leaves to set A

  • Termination

    • build finite SLD-trees

    • + build only a finite number of them !

  • Precision

    • unfold sufficiently to propagate information

    • have a precise enough set A

(termination and precision are issues at both the local and the global level — and they CONFLICT!)


Control: Local vs Global

  • Local Control:

    • Decide upon the individual SLD-trees

    • Influences the code generated for each Ai

      ⇒ unfolding rule:

      • given a program P and goal G returns finite, possibly incomplete SLD-tree

  • Global control

    • decide which atoms are in A

      ⇒ abstraction operator


Generic Algorithm

Input: program P and goal G
Output: specialised program P’

Initialise: i = 0, A0 = atoms in G
repeat
  Ai+1 := Ai
  for each a ∈ Ai do
    Ai+1 := Ai+1 ∪ leaves(unfold(a))
  end for
  Ai+1 := abstract(Ai+1)
until Ai+1 = Ai
compute P’ via resultants + renaming
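The algorithm can be sketched as a fixpoint loop parameterised by the two control operators. unfold_leaves and abstract here are caller-supplied placeholders (my own naming), and code generation is left out; termination depends on abstract keeping the set of atoms finite, exactly the slides' point.

```python
def specialise(goal_atoms, unfold_leaves, abstract):
    """Iterate A_{i+1} = abstract(A_i plus all leaves of unfold(a), a in A_i)
    until the set of atoms stabilises; the fixpoint feeds code generation."""
    atoms = frozenset(goal_atoms)
    while True:
        new = set(atoms)
        for a in atoms:
            new |= unfold_leaves(a)      # local control
        new = frozenset(abstract(new))   # global control
        if new == atoms:
            return atoms                 # fixpoint reached
        atoms = new

# toy usage: unfolding p leaves the atom q; abstract is the identity
leaves = {'p': {'q'}, 'q': set()}
result = specialise({'p'}, lambda a: leaves[a], lambda s: s)
```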


Overview Lecture 3

  • Local Control

    • Determinacy

    • Ensuring Termination

      • Wfo’s, Wqo’s

  • Global Control

    • abstraction

    • most specific generalisation

    • characteristic trees


(Local) Termination

  • Who cares?

  • PE should terminate when the program does

  • PE should always terminate

  • … and within reasonable time bounds

State of the art: well-founded orders, well-quasi orders


Determinate Unfolding

  • Unfold atoms which match only 1 clause (only exception: the first unfolding step)

    • avoids code explosion

    • no duplication of work

    • good heuristic for pre-computing

    • too conservative on its own

    • termination

      • always ?  No

      • when program terminates ??


Duplication

app([],L,L).
app([H|X],Y,[H|Z]) :- app(X,Y,Z).

← app(A,B,C), app(_I,C,[a,a])

Unfolding the right-hand (non-determinate) atom:

   ← app(A,B,[a,a])
   ← app(A,B,C), app(_I,C,[a]) → ← app(A,B,[a])
   ← app(A,B,C), app(_I,C,[]) → ← app(A,B,[])

a2(A,B,[a,a]) :- app(A,B,[a,a]).
a2(A,B,[a]) :- app(A,B,[a]).
a2(A,B,[]) :- app(A,B,[]).


Duplication 2

a2(A,B,[a,a]) :- app(A,B,[a,a]).
a2(A,B,[a]) :- app(A,B,[a]).
a2(A,B,[]) :- app(A,B,[]).

← a2([a],[],C)

At runtime each clause re-solves a similar app goal:
   app([a],[],[a,a]) → app([],[],[a]) → fail
   app([a],[],[a]) → succeeds
   app([a],[],[]) → fail

⇒ work is duplicated

  • Solutions:

    • Determinacy!

    • Follow runtime selection rule


Order of Solutions

t(A,B) :- p(A,C), p(C,B).
p(a,b).
p(b,a).

← t(A,C)
← p(A,B), p(B,C)

Unfolding the right-hand atom p(B,C) first produces the solutions as t(b,b)., t(a,a). — Prolog would produce t(a,a). first.

  • Same solutions:

    • Determinacy!

    • Follow the runtime selection rule


Lookahead

  • Also unfold if an atom matches more than 1 clause, but only 1 clause remains after further unfolding (the other branches fail)

    • less conservative

    • still ensures no duplication, no explosion


Lookahead example

sel(P,[],[]).
sel(P,[H|T],[H|PT]) :- P<H, sel(P,T,PT).
sel(P,[H|T],PT) :- P>=H, sel(P,T,PT).
s4(L,PL) :- sel(4,L,PL).

← s4([5],R)
   {}
← sel(4,[5],R)
   {R/[5|R’]} → ← 4<5, sel(4,[],R’) → ← sel(4,[],R’) → {R’/[]} → □
   → ← 4>=5, sel(4,[],R) → fail

Residual code:

s4([5],[5]).


Example

app([],L,L).
app([H|X],Y,[H|Z]) :- app(X,Y,Z).

← app(A,[a],A)
   (unification with app([],L,L) fails)
   {A/[H|A’]}
← app(A’,[a],A’)
   (unification fails)
   {A’/[H|A’’]}
← app(A’’,[a],A’’)
   (unification fails)
   {A’’/[H|A’’’]}
← app(A’’’,[a],A’’’)
   ...

At runtime: ← app([],[a],[]) terminates!!!


Well-founded orders

  • < (strict) partial order (transitive, anti-reflexive, anti-symmetric) with no infinite descending chains s1 > s2 > s3 > …

  • To ensure termination:

    • define < on expressions/goals

    • unfold only if sk+1 < sk

  • Example: termsize norm (number of function and constant symbols)
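The termsize norm can be sketched over a hypothetical term encoding (an assumption chosen for illustration: tuples ('functor', args…) are compound terms, strings starting with an upper-case letter are variables and count 0, other atoms count 1):

```python
def termsize(t):
    """Number of function and constant symbols in a term."""
    if isinstance(t, str) and t[:1].isupper():
        return 0                                      # variable
    if isinstance(t, tuple):
        return 1 + sum(termsize(a) for a in t[1:])    # functor + arguments
    return 1                                          # constant

def may_unfold(prev_goal, next_goal):
    # unfold only if the measure strictly decreases: s_{k+1} < s_k
    return termsize(next_goal) < termsize(prev_goal)
```

For instance, encoding app([a,b|C],A,B) as ('app', ('.', 'a', ('.', 'b', 'C')), 'A', 'B') gives termsize 5, in line with the measures used on the next slide.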


Example

app([],L,L).
app([H|X],Y,[H|Z]) :- app(X,Y,Z).

← app([a,b|C],A,B)      |.| = 5
   {B/[a|B’]}   (unification with app([],L,L) fails)
← app([b|C],A,B’)       |.| = 3   Ok
   {B’/[b|B’’]}   (unification fails)
← app(C,A,B’’)          |.| = 1
   {C/[H|C’], B’’/[H|B’’’]}
← app(C’,A,B’’’)        |.| = 1   Stop (no strict decrease)


Other wfo’s

  • linear norms, semi-linear norms

    • |f(t1,…,tn)| = cf + cf,1 |t1| + … + cf,n|tn|

    • |X| = cV

  • lexicographic ordering

    • |.|1, …, |.|k : first use |.|1, if equal use |.|2...

  • multiset ordering

    • {t1,…,tn} < {s1,…,sk, t2,…,tn} if all si < t1

  • recursive path ordering

  • ...
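As an illustration of the lexicographic combination |.|1, …, |.|k, the following Python sketch compares two goals measure by measure; `lex_less` and the toy measures are hypothetical names introduced here, not part of any concrete specialiser.

```python
def lex_less(a, b, measures):
    """Lexicographic combination of measures |.|1, ..., |.|k:
    compare with |.|1 first; on a tie fall back to |.|2, and so on."""
    for m in measures:
        if m(a) < m(b):
            return True
        if m(a) > m(b):
            return False
    return False  # equal under all measures: not strictly smaller

# toy goals represented as pairs, measured componentwise
m1 = lambda g: g[0]
m2 = lambda g: g[1]

print(lex_less((2, 9), (3, 0), [m1, m2]))  # True: first measure decides
print(lex_less((3, 1), (3, 4), [m1, m2]))  # True: tie on m1, m2 decides
```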


More general than instance of

A>B if A strictly more general than B

p(X)

p(s(X))

p(s(s(X)))

not a wfo

A>B if A strict instance of B

p(s(s(X)))

p(s(X))

p(X)

X

is a wfo (Huet’80)

More general-than/Instance-of



rev ([b|C],[a],R)

rev (C,[b,a],R)

Unification fails

Unification fails

{C/[H|C’]}

rev (C’,[H,b,a],R)

Refining wfo’s

rev([],L,L).

rev([H|X],A,R) :- rev(X,[H|A],R).

|.| = 4

rev ([a,b|C],[],R)

|.| = 6

⇒ measure just argument 1

|.| = 2

|.| = 6

Ok

|.| = 0

Stop

|.| = 0


Well-quasi orders

  • quasi-order ⊴ (reflexive and transitive) such that in every infinite sequence s1,s2,s3,… we can find i<j with si ⊴ sj

  • To ensure termination:

    • define  on expressions/goals

    • unfold only if for no i<k+1: si sk+1


Alternative Definitions of wqo’s

  • All quasi-orders ⊴’ which contain ⊴ (a⊴b ⇒ a⊴’b) are wfo’s

    • can be used to generate wfo’s

  • Every set has a finite generating set (a⊴b ⇒ a generates b)

  • For every infinite sequence s1,s2,… we can find an infinite subsequence r1,r2,… such that ri ⊴ ri+1

  • Wfo with finitely many incomparable elements

  • ...



AB if A and B have same top-level functor

p(s(X))

q(s(s(X)))

r(s(s(s(X))))

p(s(s(s(s(X)))))

...

Is a wqo for a finite alphabet

AB if not A < B, where > is a wfo

p(s(s(X)))

r(s(X))

q(X)

p(s(s(s(s(X)))))

Is a wqo (always) with same power as >

Examples of wqo’s

Termsize not decreasing


Instance-of Relation

  • A>B if A strict instance of B is a wfo

  • AB if A instance of B: wqo ??

    • p(s(X))

    • p(s(s(X)))

    • ...

    • p(s(X))

    • p(s(X))

p(0)

p(s(0))

p(s(s(0)))

p(s(s(s(0))))

p(s(s(s(s(0)))))

...

⇒ Every wqo is a wfo, but not the other way around


Comparison: WFO’s and WQO’s

WFO’s:

  • No infinite descending chains s1 > s2 > s3 > …

  • Ensure that sk+1 < sk

  • More efficient (compare only with the previous element)

  • Can be used statically (admissible sequences can be composed)

WQO’s:

  • Every infinite sequence: si ⊴ sj for some i<j

  • Ensure that for no i: si ⊴ sk+1

  • Allow incomparable elements

  • More powerful (SAS’98)

     ⇒ more powerful than all monotonic and simplification orderings


Homeomorphic Embedding ⊴

  • Diving: s ⊴ f(t1,…,tn) if ∃i: s ⊴ ti

  • Coupling: f(s1,…,sn) ⊴ f(t1,…,tn) if ∀i: si ⊴ ti

n≥0

Example: f(a) ⊴ g(f(f(a)),a)
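The diving and coupling rules translate almost literally into code. This Python sketch works on ground terms only, encoded as nested tuples (functor, arg1, …, argn) — an encoding assumed here for illustration — and checks the slide's example f(a) ⊴ g(f(f(a)),a):

```python
def embedded(s, t):
    """Homeomorphic embedding s ⊴ t on ground terms represented as
    nested tuples (functor, arg1, ..., argn)."""
    # Coupling: same functor and arity, argumentwise embedding
    if s[0] == t[0] and len(s) == len(t) and \
       all(embedded(si, ti) for si, ti in zip(s[1:], t[1:])):
        return True
    # Diving: s is embedded in some argument of t
    return any(embedded(s, ti) for ti in t[1:])

a = ("a",)
f_a = ("f", a)                        # f(a)
g_ffa_a = ("g", ("f", ("f", a)), a)   # g(f(f(a)),a)

print(embedded(f_a, g_ffa_a))   # True:  dive into f(f(a)), then couple
print(embedded(g_ffa_a, f_a))   # False: g cannot be embedded in f(a)
```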


Admissible transitions

rev([a,b|T],[],R)

solve(rev([a,b|T],[],R),0)

t(solve(rev([a,b|T],[],R),0),[])

path(a,b,[])

path(b,a,[])

t(solve(path(a,b,[]),0),[])

rev([a|T],[a],R)

solve(rev([a|T],[a],R),s(0))

t(solve(rev([a|T],[a],R),s(0)),[rev])

path(b,a,[a])

path(a,b,[b])

t(solve(path(b,a,[a]),s(0)),[path])

Admissible transitions


Higman-Kruskal Theorem (1952/60)

  • ⊴ is a WQO (over a finite alphabet)

  • Infinite alphabets + associative operators: f(s1,…,sn) ⊴ g(t1,…,tm) if f ⊴ g and ∀i: si ⊴ tji with 1 ≤ j1 < j2 < … < jn ≤ m, e.g. and(q,p(b)) ⊴ and(r,q,p(f(b)),s)

  • Variables: X ⊴ Y; more refined solutions possible (DSSE-TR-98-11)



rev([a,b|X],[])

rev([b|X],[b])

rev(X,[b,a])

rev(Y,[H,b,a])

eval(rev([a,b|X],[]),0)

eval(rev([b|X],[b]),s(0))

eval(rev(X,[b,a]),s(s(0)))

eval(rev(Y,[H,b,a]),s(s(s(0))))

An Example for ⊴

  • Data consumption

  • Termination

  • Stops at the “right” time

  • Deals with encodings


Summary of part 1

  • Local vs Global Control

  • Generic Algorithm

  • Local control:

    • Determinacy

      • lookahead

      • code duplication

    • Wfo’s

    • Wqo’s

      • homeomorphic embedding ⊴



1. Local Control &

Termination

2. Global Control &

Abstraction

Part 2: Controlling Partial Deduction:Global control and abstraction


Most Specific Generalisation (msg)

  • A is more general than B iff : B=A

  • for every B,C there exists a most specific generalisation (msg) M:

    • M more general than B and C

    • if A more general then B and C then A is also more general than M

  • there exists an algorithm for computing the msg

  • (anti-unification, least general generalisation)


MSG examples

  • msg(a, a) = a

  • msg(a, b) = X

  • msg(X, Y) = X

  • msg(p(a,b), p(a,c)) = p(a,X)

  • msg(p(a,a), p(b,b)) = p(X,X)

  • msg(q(X,a,a), q(a,a,X)) = q(X,a,Y)
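A minimal anti-unification sketch in Python, matching the MSG examples above. The tuple term encoding and the variable-naming scheme (V0, V1, …) are assumptions of this sketch; the key point is that the same pair of disagreeing subterms is mapped to the same variable, which is what yields p(X,X) for msg(p(a,a),p(b,b)).

```python
def msg(s, t, table=None):
    """Most specific generalisation (anti-unification) of two terms.
    Terms: nested tuples (functor, args...); variables are strings."""
    if table is None:
        table = {}
    # same functor and arity: descend into the arguments
    if isinstance(s, tuple) and isinstance(t, tuple) and \
       s[0] == t[0] and len(s) == len(t):
        return (s[0],) + tuple(msg(si, ti, table)
                               for si, ti in zip(s[1:], t[1:]))
    if s == t:
        return s
    # disagreement: generalise to a variable, reused for identical pairs
    if (s, t) not in table:
        table[(s, t)] = "V%d" % len(table)
    return table[(s, t)]

a, b, c = ("a",), ("b",), ("c",)
print(msg(("p", a, b), ("p", a, c)))  # ('p', ('a',), 'V0')  ~ p(a,X)
print(msg(("p", a, a), ("p", b, b)))  # ('p', 'V0', 'V0')    ~ p(X,X)
```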


First abstraction operator

  • [Benkerimi,Lloyd’90] abstract two atoms in Ai+1 by their msg if they have a common instance

  • will also ensure independence (no renaming required)

  • Termination ?

Input: program P and goal G

Output: specialised program P’

Initialise: i=0, A0 = atoms in G

repeat

Ai+1 := Ai

for each a ∈ Ai do

Ai+1 := Ai+1 ∪ leaves(unfold(a))

end for

Ai+1 := abstract(Ai+1)

until Ai+1 = Ai

compute P’ via resultants+renaming
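The repeat/for/abstract loop above can be sketched as a higher-order driver in Python; `unfold` and `abstract` are passed in as parameters, mirroring the unfolding rule (local control) and the abstraction operator (global control). The toy `leaves` table below stands in for real unfolding and is purely illustrative.

```python
def partial_deduce(goals, unfold, abstract):
    """Skeleton of the generic algorithm: alternate local control
    (unfold each element, keep the leaf atoms) and global control
    (abstract the accumulated set) until a fixpoint is reached."""
    A = frozenset(goals)
    while True:
        nxt = set(A)
        for atom in A:
            nxt |= set(unfold(atom))    # Ai+1 := Ai+1 ∪ leaves(unfold(a))
        nxt = frozenset(abstract(nxt))  # Ai+1 := abstract(Ai+1)
        if nxt == A:                    # until Ai+1 = Ai
            return A                    # code generation would follow here
        A = nxt

# toy "unfolding": p(X) has leaf q(X) and vice versa; abstraction = identity
leaves = {"p(X)": ["q(X)"], "q(X)": ["p(X)"]}
result = partial_deduce(["p(X)"], lambda a: leaves.get(a, []), lambda s: s)
print(sorted(result))  # ['p(X)', 'q(X)']
```

With an identity abstraction the loop terminates here only because the toy atom set is finite; the abstraction operators discussed next are what guarantee termination in general.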



rev (L’’,[H’,H],R)

rev (L’’’,[H’’,H’,H],R)

rev (L’,[H],R)

rev (L’,[H],R)

rev (L’’,[H’,H],R)

Rev-acc

rev([],L,L).

rev([H|X],A,R) :- rev(X,[H|A],R).

rev (L,[],R)

unfold

unfold

rev(L,[],R)

unfold

rev(L’,[H],R)

rev(L’’,[H’,H],R)

rev(L’’’,[H’’,H’,H],R)


First terminating abstraction

  • If more than one atom with the same predicate occurs in Ai+1: replace them by their msg

  • {rev(L,[],R), rev(L’,[H],R)} → {rev(L,A,R)}

  • Ensures termination

    • strict instance-of relation: wfo

    • either fixpoint or strictly smaller atom

  • But: loss of precision !

    • {rev([a,b|L],[],R), rev(L,[b,a],R)} → {rev(L,A,R)}: no specialisation !!


Global trees

  • Also use wfo’s [MartensGallagher’95] or wqo’s for the global control [SørensenGlück’95, LeuschelMartens’96]

  • Arrange atoms in Ai+1 as a tree:

    • register which atoms descend from which(more precise spotting of non-termination)

  • Only if termination is endangered: apply msg on the offending atoms

    • use wfo or wqo on each branch



rev (L,A,R)

rev (L’,[H|A],R)

rev (L’,[H],R)

Rev-acc

rev([],L,L).

rev([H|X],A,R) :- rev(X,[H|A],R).

rev (L,[],R)

unfold

unfold

rev(L,A,R)

rev(L,[],R)

DANGER

rev(L’,[H],R)


More precise spotting

app(X,[2,3,5],L)

app(X,[2],L)

More precise spotting

app([],L,L).

app([H|X],Y,[H|Z]) :- app(X,Y,Z).

t(X,L) :- app(X,[2],L).

t(X,L) :- app(X,[2,3,5],L).

Global Tree

Set

DANGER

ABSTRACT

t(X,L)

t(X,L)

No

Danger

app(X,[2],L)

app(X,[2,3,5],L)


Examining Syntactic Structure

  • p([a],B,C)  p(A,[a],C)

  • abstract: yes or no ???



← app([a],B,C)

← app(A,[a],C)

{C/[a|C’]}

{A/[H|A’],C/[H|C’]}

{A/[],C/[a]}

← app([],B,C’)

← app(A’,[a],C’)

{C’/B}

p=append

app([],L,L).

app([H|X],Y,[H|Z]) :- app(X,Y,Z).

  • Do not abstract:

app_a([],[a]).

app_a([H|A’],[H|C’]) :- app_a(A’,C’).

app_a(B,[a|B]).



← compos([a],B,C)

← compos(A,[a],C)

{A/[X|A’]}

{B/[X|B’]}

{B/[a],C/[a]}

{A/[a],C/[a]}

← compos([],B,C)

← compos(A’,[],C)

fail

fail

compos([H|TX],[H|TY],[H]).

compos([X|TX],[Y|TY],E) :-

compos(TX,TY,E).

p=compos

  • Do abstract:

compos_a([a],[a]).

compos_a([a],[a]).


Characteristic Trees

  • Register structure of the SLD-trees:

    • position of selected literals

    • clauses that have been resolved with

  • [GallagherBruynooghe’91]: Only abstract atoms if same characteristic tree



Abstract:

Do not abstract:

Literal 1

Clause 1

Clause 2

stop

Literal 1

Clause 2

Literal 1

Literal 1

Literal 1

Clause 1

Clause 1

Clause 2

Clause 2

Clause 1

Literal 1

Literal 1

fail

fail

Characteristic Trees II


Rev-acc revisited (Blackboard)

rev([],L,L).

rev([H|X],A,R) :- rev(X,[H|A],R).

rev ([a,b|C],[],R)


Path Example (Blackboard)

path(A,B,L) :- arc(A,B).

path(A,B,L) :-

not(member(A,L)), arc(A,C),

path(C,B,[a|L]).

member(X,[X|_]).

member(X,[_|L]) :- member(X,L).

arc(a,a).

path (a,b,[])

path (a,b,[a])


Characteristic Trees: Problems

  • [Leuschel’95]:

    • characteristic trees not preserved upon generalisation

    • Solution: ecological PD or constrained PD

  • [LeuschelMartens’96]:

    • non-termination can occur for natural examples

    • Solution: apply homeomorphic embedding on characteristic trees

       ⇒ ECCE partial deduction system (Full DEMO)


Summary of part 2

  • Most Specific Generalisation

  • Global trees

    • reusing wfo’s, wqo’s at the global level

  • Characteristic trees

    • advantage over purely syntactic structure


Summary of Lectures 2-3

  • Foundations of partial deduction

    • resultants, correctness criteria & results

  • Controlling partial deduction

    • generic algorithm

    • local control:

      • determinacy, wfo’s, wqo’s

    • global control:

      • msg, global trees, characteristic trees


Optimisation of Declarative Programs Lecture 4

Michael Leuschel

Declarative Systems & Software Engineering

Dept. Electronics & Computer Science

University of Southampton

http://

www.ecs.soton.ac.uk/~mal


Part 1: A Limitation of Partial Deduction



knows_logic(Z)

← good_student(X), teacher(Y,X), logician(Y)

← teacher(Y,tom), logician(Y)

← logician(peter)

← teacher(Y,jane), logician(Y)

fail

SLD-Trees

Prolog: explores this tree Depth-First, always selects leftmost literal



Basic Principle:

Instead of building one complete SLD-tree: build a finite number of finite “SLD-trees” !

SLD-trees can be incomplete

4 types of derivations in SLD-trees:

Successful, failed, infinite

Incomplete: no literal selected

Partial Deduction


Formalisation of Partial Deduction

  • Given a set S = {A1,…,An} of atoms:

  • Build finite, incomplete SLD-trees for each  Ai

  • For every non-failing branch:

    • generate 1 specialised clause by computing the resultants

  • Why only atoms ?

    • Resultants = Horn clauses

  • Is this a limitation ??



A

A1 A2 A3 A4 ...

Control:Local vs Global

  • Local Control:

    • Decide upon the individual SLD-trees

    • Influences the code generated for each Ai

       ⇒ unfolding rule

      • given a program P and goal G returns finite, possibly incomplete SLD-tree

  • Global control

    • decide which atoms are in A

       ⇒ abstraction operator


Generic Algorithm

Input: program P and goal G

Output: specialised program P’

Initialise: i=0, A0 = atoms in G

repeat

Ai+1 := Ai

for each a ∈ Ai do

Ai+1 := Ai+1 ∪ leaves(unfold(a))

end for

Ai+1 := abstract(Ai+1)

until Ai+1 = Ai

compute P’ via resultants+renaming



t(Y)

← p(Y)

← q(Y)

{Y/a}

{Y/b}

← p(Y),q(Y)

Limitation 1: Side-ways Information

t(X) :- p(X), q(X).

p(a).

q(b).

Original program !



t(Y)

 p(Y),q(Y)

Limitation 1: Solution ?

t(X) :- p(X), q(X).

p(a).

q(b).

  • Just unfold deeper

t(X) :- fail.

← q(a)

fail



t(Y)

← p(Y),q(Y)

Limitation 1: Solution ?

t(X) :- p(X), q(X).

p(a).

p(X) :- p(X).

q(b).

  • We cannot fully unfold !!

← q(a)

← p(Y),q(Y)

fail


Limitation 1

  • Side-ways information passing

    • solve(Call,Res),manipulate(Res)

  • Full unfolding not possible:

    • termination (e.g.  of Res)

    • code explosion

    • efficiency (work duplication)

  • Ramifications:

    • difficult to automatically specialise interpreters (aggressive unfolding required)

    • no tupling, no deforestation


Limitation 1: Solutions

  • Pass least upper bound around

    • For a query A,B:

      • Take msg of all (partial) solutions of A

      • Apply msg to B

  • Better:

    • Specialise conjunctions instead of atoms

  • ⇒ Conjunctive Partial Deduction


Part 2: Conjunctive Partial Deduction


Conjunctive Partial Deduction

  • Given a set S = {C1,…,Cn} of conjunctions:

  • Build finite, incomplete SLD-trees for each ← Ci

  • For every non-failing branch:

    • generate 1 specialised formula Ciθ ← L by computing the resultants

  • To get Horn clauses

    • Rename conjunctions into atoms !

       ⇒ Assign every Ci an atom with the same variables and each with a different predicate name


Renaming Function

rename: p(Y),q(Y) ↦ pq(Y)

p(f(X)),q(b) ← p(X),q(X) becomes pq(f(X),b) ← pq(X)

p(f(X)),q(b) ← p(X),r(X),q(X) becomes pq(f(X),b) ← pq(X), r(X)

or

pq(f(X),b) ← r(X), pq(X)

⇒ Resolve ambiguities

contiguous: if no reordering is allowed


Soundness & Completeness

P’ obtained by conjunctive partial deduction from P

If non-triviality and closedness hold wrt the renaming function:

  • P’ ∪ {G} has an SLD-refutation with c.a.s. θ iff P ∪ {G} has

    If in addition fairness (or weak fairness) holds:

  • P’ ∪ {G} has a finitely failed SLD-tree iff P ∪ {G} has

     ⇒ see Leuschel et al: JICSLP’96



rename

Maxlength Example (Blackboard)

max_length(X,M,L) :- max(X,M),len(X,L).

max(X,M) :- max(X,0,M).

max([],M,M).

max([H|T],N,M) :- H=<N,max(T,N,M).

max([H|T],N,M) :- H>N,max(T,H,M).

len([],0).

len([H|T],L) :- len(T,TL),L is TL+1.

maxlen (X,M,L)

max(X,N,M),len(X,L)

ml(X,N,M,L)



??

Double-Append Example (blackboard)

app([],L,L).

app([H|X],Y,[H|Z]) :- app(X,Y,Z).

← app(A,B,I), app(I,C,Res)

da([],B,B,C,Res) :- app(B,C,Res).

da([H|A’],B,[H|I’],C,[H|R’]) :- da(A’,B,I’,C,R’).

da([],B,C,Res) :- app(B,C,Res).

da([H|A’],B,C,[H|R’]) :- da(A’,B,C,R’).


Controlling CPD

  • Local Control

    • becomes easier

    • we have to be less aggressive

    • we can concentrate on efficiency of the code

  • Global Control

    • gets more complicated

    • handle conjunctions

    • structure abstraction (msg) no longer sufficient !


Global Control of CPD

  • p(0,0).

  • p(f(X,Y),R) :- p(X,R),p(Y,R).

  • Specialise p(Z,R)

  • We need splitting

  • How to detect infinite branches:

    • homeomorphic embedding on conjunctions

  • How to split:

    • most specific embedded conjunction


Homeomorphic Embedding on Conj.

  • Define: and/m ⊴ and/n

    f(s1,…,sn) ⊴ g(t1,…,tm) if f ⊴ g and ∀i: si ⊴ tji with 1 ≤ j1 < j2 < … < jn ≤ m, e.g. and(q,p(b)) ⊴ and(r,q,p(f(b)),s)



Splitting

Most Specific Embedded Subconj.

p(X) q(f(X))

t(Z) p(s(Z))  q(a)  q(f(f(Z)))

{ t(Z), q(a), p(s(Z))  q(f(f(Z))) }


Don’t separate into atoms

Msg + splitting

Generic Algorithm

  • Plilp’96, Lopstr’96, JLP’99

Input: program P and goal G

Output: specialised program P’

Initialise: i=0, A0 = {G}

repeat

Ai+1 := Ai

for each c ∈ Ai do

Ai+1 := Ai+1 ∪ leaves(unfold(c))

end for

Ai+1 := abstract(Ai+1)

until Ai+1 = Ai

compute P’ via resultants+renaming

Changes over PD


Ecce demo of conjunctive partial deduction

  • Max-length

  • Double append

  • depth, det. Unf. with PD and CPD

  • rotate-prune


Comparing with FP

  • Conjunctions

    • can represent nested function calls (supercompilation [Turchin], deforestation [Wadler]): g(X,RG),f(RG,Res) ≈ f(g(X))

    • can represent tuples (tupling [Chin]): g(X,ResG),f(X,ResF) ≈ (f(X),g(X))

  • We can do both deforestation and tupling !

    • No such method in FP

  • Semantics for infinite derivations (WFS) in LP

    • safe removal of infinite loops, etc…

    • applications: interpreters,model checking,...


Comparing with FP II

  • Functionality has to be inferred in LP

    • fib(N,R1),fib(N,R2) vs fib(N)+fib(N)

  • Dataflow has to be analysed

    • rot(Tree,IntTree),prune(IntTree,Result):rot(L,Tl),rot(R,Tr),prune(Tl,Rl),prune(Tr,Rr) vs

    • prune(rot(Tree)): tree( prune(rot(L)), I, prune(rot(R)) )

  • Failure possible:

    • reordering problematic ! fail,loop  loop,fail


Part 3: Another Limitation of Partial Deduction (and CPD)


Limitation 2: Global Success Information

← p(Y)

p(a).

p(X) :- p(X).

  • We get the same program back

  • The fact that in all successful derivations X=a is not explicit !

← p(Y)



t(Y)

← p(Y),q(Y)

Limitation 2: Failure example

t(X) :- p(X), q(X).

p(a).

p(X) :- p(X).

q(b).

  • Specialised program:

    t(X) :- pq(X).

    pq(X) :- pq(X).

  • But even better:

    t(X) :- fail.

← q(a)

← p(Y),q(Y)

fail



??

Append-last Example (blackboard)

app([],L,L).

app([H|X],Y,[H|Z]) :- app(X,Y,Z).

last([X],X).

last([H|T],X) :- last(T,X).

← app(A,[a],I), last(I,X)

al([],[a],a).

al([H|A’],[H|I’],X) :- al(A’,I’,X).

al([],[a],a).

al([H|A’],[H|I’],a) :- al(A’,I’,a).


Limitation 2: Ramifications

Fully known value

?

Known

binding

Binding

Lost !

Partially knownDatastructure

Environment for interpreter


Part 4: Combining Abstract Interpretation with (Conjunctive) Partial Deduction


More Specific Version (Msv)

  • [Marriot,Naish,Lassez88]

  • Abstract interpretation technique

    • abstraction of TP

      • sem(P) = lfp(TP)

    • abstract domain = concrete domain

    • (A) = {A|  substitution}

  • same domain as PD, CPD !!


TP operator

  • Head :- B1, B2, …, Bn.

  • If there are substitutions θ1…θn with each Biθ1…θn ∈ S, then add Head θ1…θn to TP(S)


TP operator

  • Theorem [VanEmden,Kowalski]

    • least Herbrand Model =

    • all ground logical consequences of P =

    • success set of SLD (ground queries) =

    • lfp(TP) =

    • TP()


TP operator: Example

q.

r :- q.

p :- p.

nat(0).

nat(s(X)) :- nat(X).

  • TP1() = {q, nat(0)}

  • TP2() = {q, r, nat(0), nat(s(0))}

  • TP() = {q, r, nat(0), nat(s(0)), … }


Abstraction of TP

  • Compose TP with predicate-wise msg

  • Ensures termination

    q.

    r :- q.

    p :- p.

    nat(0).

    nat(s(X)) :- nat(X).

    • TP1() = {q, nat(0)}

    • TP2() = {q, r, nat(0), nat(s(0))}

    • TP*2() = TP*() = {q, r, nat(X)}

Msg = nat(X)


More Specific Version: Principles

  • Compute S = TP*()

  • For every body atom of the program:

    • unify with an element of S

    • if none exists: clause can be removed !

  • The resulting program is called a more specific version of P

  • Correctness:

    • computed answers preserved

    • finite failure preserved

    • infinite failure can be replaced by finite failure



MSV

MSV applied to append-last

al([],[a],a).

al([H|A’],[H|I’],X) :- al(A’,I’,X).

TP1() = {al([],[a],a)}

TP2() = {al([],[a],a), al([H],[H,a],a)}

TP*2() = {al(L,[X|Y],a)}

TP(TP*2()) = {al([],[a],a), al([H|L],[H,X|Y],a)}

TP*3() = TP*() = {al(L,[X|Y],a)}

Note: msv on original

program gives no result !!

al([],[a],a).

al([H|A’],[H,H’|I’],a) :- al(A’,[H’|I’],a).


Naïve Combination

  • repeat

    • Conjunctive Partial Deduction

    • More Specific Version

  • until fixpoint (or we are satisfied)

  • Can be done with ecce

  • Power/Applications:

    • cf. Demo (later in the talk)

    • interpreters for the ground representation


Even-Odd example (blackboard)

even(0).

even(s(X)) :- odd(X).

odd(s(X)) :- even(X).

eo(X) :- even(X),odd(X).

  • Msv alone: not sufficient !

  • Specialise eo(X) + msv:

    • eo(X) :- fail.


Refined Algorithm

  • [Leuschel,De Schreye Plilp’96]

  • Specialisation continued before fixpoint of msv is reached !

  • Required when several predicates interact

    • E.g.: for proving functionality of multiplication


ITP example (demo)

plus(0,X,X).

plus(s(X),Y,s(Z)) :- plus(X,Y,Z).

pzero(X,XPlusZero) :- plus(X,0,XPlusZero).

passoc(X,Y,Z,R1,R2) :- plus(X,Y,XY),plus(XY,Z,R1),

plus(Y,Z,YZ),plus(X,YZ,R2).

⇒ XPlusZero = X !

⇒ R2 = R1 !


Model Checking Example

sema

∞ states !

enter_cs

exit_cs

restart

trace([],State,State).

trace([Action|As],InState,OutState) :-

trans(Action,InState,S1), trace(As,S1,OutState).

trans(enter_cs,[s(X),s(Sema),CritSec,Y,C],

[X,Sema,s(CritSec),Y,C]).

trans(exit_cs, [X,Sema,s(CritSec),Y,C],

[X,s(Sema),CritSec,s(Y),C]).

trans(restart, [X,Sema,CritSec,s(Y),ResetCtr],

[s(X),Sema,CritSec,Y,s(ResetCtr)]).

∞ number of systems !


Model Checking continued

unsafe(X) :-

trace(Tr,[X,s(0),0,0,0],[_,_,s(s(_)),_,_]).

  • Specialise unsafe(s(0))

    • unsafe(s(0)) :- fail.

  • Specialise unsafe(X)

    • unsafe(X) :- fail.


Ecce + Msv Demo

  • Append last

  • Even odd (itp)

  • pzero (itp)

  • passoc (itp)

  • Infinite Model Checking (petri2)


Full Reconciliation

  • Why restrict to simple abstract domain

    • ( = all instances)

  • Why not use

    • groundness dependencies

    • regular types

    • type graphs

p(0)

p(s(0))

Msg = p(X)

Better: p(τ) with τ = 0 | s(τ)


Full Reconciliation II

  • Partial Deduction has to be extended

    • handle any (downwards closed) domain

  • Abstract Interpretation has to be extended

    • generate totally correct code (not just safe approximation)

  • Framework: [Leuschel JICSLP’98]

    • no implementation yet !


Summary of Lecture 4

  • Conjunctive partial deduction

    • better precision

    • tupling

    • deforestation

  • More Specific Versions

    • simple abstract interpretation technique

  • Combination of both

    • powerful specialiser/analyser

    • infinite model checking

