Foundations of Probabilistic Answers to Queries
Dan Suciu and Nilesh Dalvi, University of Washington
Databases Today are Deterministic:
An item either is in the database or is not. A tuple either is in the query answer or is not. This applies to all varieties of data models:
Probabilistic relational databases have been studied from the late 80’s until today:
Application pull:
Technology push:
The tutorial is built on these two themes
Need to manage imprecisions in data
The quest to manage imprecisions = major driving force in the database community
A large class of imprecisions in data can be modeled with probabilities
Processing probabilistic data is fundamentally more complex than other data models
There exists a rich collection of powerful, nontrivial techniques and results, some old, some very recent, that could lead to practical management techniques for probabilistic databases.
Identify the source of complexity, present snapshots of nontrivial results, set an agenda for future research.
There is a huge amount of related work:
probabilistic db, topk answers, KR, probabilistic reasoning, random graphs, etc, etc.
We left out many references
All references used are available in separate document
Tutorial available at: http://www.cs.washington.edu/homes/suciu
Requires TexPoint to view
http://www.thp.uni-koeln.de/~ang/texpoint/index.html
Part I: Applications: Managing Imprecisions
Part II: A Probabilistic Data Semantics
Part III: Representation Formalisms
Part IV: Theoretical foundations
Part V: Algorithms, Implementation Techniques
Summary, Challenges, Conclusions
BREAK
Applications: Managing Imprecisions
Database is deterministic
The query returns a ranked list of tuples
Query is overspecified: no answers
Example: try to buy a house in Seattle…
[Agrawal,Chaudhuri,Das,Gionis 2003]
The Empty Answers Problem
SELECT *
FROM Houses
WHERE bedrooms = 4 AND style = ‘craftsman’
  AND district = ‘View Ridge’ AND price < 400000
… good luck!
Today users give up and move to Baltimore
Compute a similarity score between a tuple and the query
Query is a vector: Q = (v1, …, vm)
Tuple is a vector: T = (u1, …, um)
[Agrawal,Chaudhuri,Das,Gionis 2003]
Q = SELECT *
FROM R WHERE A1=v1 AND … AND Am=vm
Rank tuples by their TF/IDF similarity to the query Q
Includes partial matches
Beyond a single table: “Find the good deals in a neighborhood !”
SELECT *
FROM Houses x
WHERE x.bedrooms ~ 4 AND x.style ~ ‘craftsman’ AND x.price ~ 600k
  AND NOT EXISTS (SELECT *
    FROM Houses y
    WHERE x.district = y.district AND x.ID != y.ID
      AND y.bedrooms ~ 4 AND y.style ~ ‘craftsman’ AND y.price ~ 600k)
Users specify similarity predicates with ~
System combines atomic similarities using probabilities
[Theobald&Weikum:2002,Hung,Deng&Subrahmanian:2004]
[Hristidis&Papakonstantinou’2002,Bhalotia et al.2002]
Keyword Searches in Databases
Goal:
Techniques:
(figure: candidate tuple trees joining ‘Abiteboul’ and ‘Widom’ through Author, Person, Paper, and Conference tuples)
[Hristidis,Papakonstantinou’2002]
Join sequences (tuple trees): Q = ‘Abiteboul’ and ‘Widom’
(figure: schema graph with edges Paper-In-Conference, Paper-Author-Person, Conference-Editor-Person)
[Kiessling&Koster2002,Chomicki2002,Fagin&Wimmers1997]
More Ranking: User Preferences
Applications: personalized search engines, shopping agents, logical user profiles, “soft catalogs”
Two approaches:
Types of imprecision addressed:
Data is precise, query answers are imprecise:
Probabilistic approach would…
Determine if two data records describe the same object
Scenarios:
[Cohen: Tutorial; Fellegi&Sunder:1969]
Fellegi-Sunter Model: a probabilistic model/framework
Goal: partition A × B into:
{a1, a2, a3, a4, a5, a6}
A =
{b1, b2, b3, b4, b5}
B =
Deterministic linkage
[Hernandez,Stolfo:1995]
[Ananthakrishna,Chaudhuri,Ganti:2002]
[Chaudhuri,Ganjam,Ganti,Motwani:2002]
[Galhardas et al.:2001]
WHIRL
Matches two sets on the fly, but not really a “record linkage” application.
datalog
Example 1:
Q1(*) :- P(Company1, Industry1), Q(Company2, Website), R(Industry2, Analysis), Company1 ~ Company2, Industry1 ~ Industry2
Score of an answer tuple = product of similarities
Example 2 (with projection):
Q2(Website) :- P(Company1, Industry1), Q(Company2, Website), R(Industry2, Analysis), Company1 ~ Company2, Industry1 ~ Industry2
Support(t) = set of tuples supporting the answer t
Depends on query plan!!
score(t) = 1 − ∏_{s ∈ Support(t)} (1 − score(s))
Types of imprecision addressed:
Same entity represented in different ways
A probability model would…
[Florescu,Koller,Levy97; Chang,GarciaMolina00; Mendelzon,Mihaila01]
3. Quality in Data Integration
Use of probabilistic information to reason about soundness, completeness, and overlap of sources
Applications:
Global Historical Climatology Network
Integrates climatic data from:
6000 temperature stations
7500 precipitation stations
2000 pressure stations
Starting with 1697 (!!)
[Mendelzon&Mihaila:2001]
Soundness of a data source: what fraction of items are correct
Completeness of a data source: what fraction of items it actually contains
Local as view:
S3:
V3(s, y, m, v) ← Temperature(s, y, m, v), Station(s, lat, lon, “US”), y ≥ 1800
. . .
S8756:
. . .
[Mendelzon&Mihaila:2001
Global schema:
Temperature
Station
S1:
V1(s, lat, lon, c) ← Station(s, lat, lon, c)
S2:
V2(s, y, m, v) ← Temperature(s, y, m, v), Station(s, lat, lon, “Canada”), y ≥ 1900
Next, declare soundness and completeness
[Florescu,Koller,Levy:1997;Mendelzon&Mihaila:2001]
V2(s, y, m, v) ← Temperature(s, y, m, v), Station(s, lat, lon, “Canada”), y ≥ 1900
S2:
Precision
Soundness(V2) ≥ 0.7
Completeness(V2) ≥ 0.4
Recall
[Mendelzon&Mihaila:2001]
Goal 2: soundness → query confidence
Q(y, v) :- Temperature(s, y, m, v), Station(s, lat, lon, “US”), y ≥ 1950, y ≤ 1955, lat ≥ 48, lat ≤ 49
Answer:
[Florescu,Koller,Levy:1997]
Goal 1: completeness → order source accesses
S5
S74
S2
S31
. . .
Types of imprecision addressed
Overlapping, inconsistent, incomplete data sources
They already use a probabilistic model
[Bertossi&Chomicki:2003]
4. Inconsistent Data
Goal: consistent query answers from inconsistent databases
Applications:
The Repair Semantics
Consider all “repairs”
Key(?!?)
Find people in State=WA ⇒ Dalvi
Find people in State=MA ⇒ ∅
High precision, but low recall
State=WA ⇒ Dalvi, Balazinska(0.5), Miklau(0.5)
State=MA ⇒ Balazinska(0.5), Miklau(0.5)
Lower precision, but better recall
Types of imprecision addressed:
A probabilistic approach would…
Goal
Applications:
V = anonymized transactions
V = standard view(s)
V = k-anonymous table
S = some atomic fact that is private
[Evfimievski,Gehrke,Srikant:03; Miklau&S:04;Miklau,Dalvi&S:05]
Pr(S) = a priori probability of S
Pr(S | V) = a posteriori probability of S
[Evfimievski,Gehrke,Srikant:03; Miklau&S:04;Miklau,Dalvi&S:05]
Information Disclosure: Pr(S) ≤ r1 and Pr(S | V) ≥ r2
Pr(S) = Pr(S | V)
lim_{domain size → ∞} Pr(S | V) = 0
Database size remains fixed
Is this a type of imprecision in data ?
Techniques
Important fundamental duality:
They share the same fundamental concepts and techniques
What is required from the probabilistic model
Size(Employee) ≈ 1000
areacode → city
[Widom:2005]
[Deshpande, Guestrin,Madden:2004]
Semex [Dong&Halevy:2005, Dong,Halevy,Madhavan:2005], Haystack [Karger et al. 2003], Magnet [Sinha&Karger:2005]
[Dalvi&S:2005]
Common in these applications:
Need for common probabilistic model:
A Probabilistic Data Semantics
Attribute domains:
int, char(30), varchar(55), datetime
# values: 2^32, 2^120, 2^440, 2^64
Relational schema:
Employee(name:varchar(55), dob:datetime, salary:int)
# of tuples: 2^440 × 2^64 × 2^32
# of instances: 2^(2^440 × 2^64 × 2^32)
Database schema:
Employee(. . .), Projects( . . . ), Groups( . . .), WorksFor( . . .)
# of instances: N (= BIG but finite)
Definition: A probabilistic database Ip is a probability distribution on INST
Pr : INST → [0,1]
s.t. Σ_{i=1..N} Pr(Ii) = 1
The Definition
The set of all possible database instances: INST = {I1, I2, I3, …, IN}
We will use Pr or Ip interchangeably
Definition A possible world is I s.t. Pr(I) > 0
Ip =
Pr(I2) = 1/12
Pr(I1) = 1/3
Pr(I4) = 1/12
Pr(I3) = 1/2
Possible worlds = {I1, I2, I3, I4}
One tuple t ⇒ event t ∈ I: Pr(t) = Σ_{I: t ∈ I} Pr(I)
Two tuples t1, t2 ⇒ event t1 ∈ I ∧ t2 ∈ I: Pr(t1 t2) = Σ_{I: t1 ∈ I ∧ t2 ∈ I} Pr(I)
Pr(t1 t2) = 0: disjoint (−−)
Pr(t1 t2) < Pr(t1) Pr(t2): negatively correlated (−)
Pr(t1 t2) = Pr(t1) Pr(t2): independent (0)
Pr(t1 t2) > Pr(t1) Pr(t2): positively correlated (+)
Pr(t1 t2) = Pr(t1) = Pr(t2): identical (++)
(example: Ip with possible worlds I1, I2, I3, I4, Pr(I1) = 1/3, Pr(I2) = 1/12, Pr(I3) = 1/2, Pr(I4) = 1/12; tuple pairs annotated with their correlations ++, −−, −, +, 0)
Given a query Q and a probabilistic database Ip,what is the meaning of Q(Ip) ?
Semantics 1: Possible Answers
A probability distribution on sets of tuples
∀A. Pr(Q = A) = Σ_{I ∈ INST: Q(I) = A} Pr(I)
Semantics 2: Possible Tuples
A probability function on tuples
∀t. Pr(t ∈ Q) = Σ_{I ∈ INST: t ∈ Q(I)} Pr(I)
Purchasep
SELECT DISTINCT x.product
FROM Purchasep x, Purchasep y
WHERE x.name = 'John' and x.product = y.product and y.name = 'Sue'
Pr(I1) = 1/3
Possible answers semantics:
Pr(I2) = 1/12
Pr(I3) = 1/2
Possible tuples semantics:
Pr(I4) = 1/12
INST = P(TUP), N = 2^M
TUP = {t1, t2, …, tM} = all tuples
pr : TUP → [0,1]
No restrictions
Special Case: tuple-independent probabilistic database
Pr(I) = ∏_{t ∈ I} pr(t) × ∏_{t ∉ I} (1 − pr(t))
I1: (1 − p1)(1 − p2)(1 − p3)
I2: p1(1 − p2)(1 − p3)
I3: (1 − p1)p2(1 − p3)
I4: (1 − p1)(1 − p2)p3
I5: p1p2(1 − p3)
I6: p1(1 − p2)p3
I7: (1 − p1)p2p3
I8: p1p2p3
Tuple Prob. ⇒ Possible Worlds (Σ = 1)
E[ size(Ip) ] = 2.3 tuples
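A sketch of how tuple probabilities generate the 2^3 worlds. The probabilities p1 = 0.8, p2 = 0.9, p3 = 0.6 are made-up values, chosen so the expected size comes out to 2.3 tuples as on the slide.

```python
from itertools import combinations

probs = {"t1": 0.8, "t2": 0.9, "t3": 0.6}   # hypothetical tuple probabilities

def worlds(probs):
    # Pr(I) = prod_{t in I} pr(t) * prod_{t not in I} (1 - pr(t))
    ts = sorted(probs)
    out = {}
    for r in range(len(ts) + 1):
        for I in combinations(ts, r):
            w = 1.0
            for t in ts:
                w *= probs[t] if t in I else 1 - probs[t]
            out[frozenset(I)] = w
    return out

W = worlds(probs)
total = sum(W.values())                                 # sums to 1
expected_size = sum(len(I) * p for I, p in W.items())   # = p1 + p2 + p3
```

The expected database size is just the sum of the tuple probabilities, here 0.8 + 0.9 + 0.6 = 2.3.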
SELECT DISTINCT x.city
FROM Person x, Purchase y
WHERE x.Name = y.Customer and y.Product = ‘Gadget’
p1 × (1 − (1 − q2)(1 − q3))
p2 × (1 − (1 − q5)(1 − q6))
p3 × q7
combined: 1 − (1 − p1(1 − (1 − q2)(1 − q3))) × (1 − p2(1 − (1 − q5)(1 − q6)))
Possible Worlds Semantics
Query semantics
Possible answers semantics
Possible tuples semantics
Part III: Representation Formalisms
Part IV: Foundations
Part V: Algorithms, implementation techniques
Conclusions and Challenges
Representation Formalisms
Problem: need a good representation formalism
Main open problem in probabilistic databases
A complete formalism:
Incomplete formalisms:
[Fuhr&Roellke:1997]
Intensional Database
Atomic event ids: e1, e2, e3, …
Probabilities: p1, p2, p3, … ∈ [0,1]
Event expressions: ∧, ∨, ¬
e3 ∧ (e5 ∨ ¬e2)
Intensional probabilistic database J: each tuple t has an event attribute t.E
e1 e2 e3 = 000, 001, 010, 011, 100, 101, 110, 111
Resulting world probabilities (valuations yielding the same instance are grouped):
(1 − p1)(1 − p2)(1 − p3) + (1 − p1)(1 − p2)p3 + (1 − p1)p2(1 − p3) + p1(1 − p2)(1 − p3)
p1(1 − p2)p3
(1 − p1)p2p3
p1p2(1 − p3) + p1p2p3
Intensional DB ⇒ Possible Worlds
E1 = e1
E2 = ¬e1 ∧ e2
E3 = ¬e1 ∧ ¬e2 ∧ e3
E4 = ¬e1 ∧ ¬e2 ∧ ¬e3 ∧ e4
Pr(e1) = p1, Pr(e2) = p2/(1 − p1), Pr(e3) = p3/(1 − p1 − p2), Pr(e4) = p4/(1 − p1 − p2 − p3)
“Prefix code”
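The probability of such event expressions can be computed by summing over valuations of the atomic events. A small sketch: the expression syntax and the probabilities are made up, and the slide's example e3 ∧ (e5 ∨ ¬e2) is shown with e5 renamed to e1 so three events suffice.

```python
from itertools import product

def holds(expr, val):
    # expr is "e1", ("not", a), ("and", a, b), or ("or", a, b)
    if isinstance(expr, str):
        return val[expr]
    if expr[0] == "not":
        return not holds(expr[1], val)
    if expr[0] == "and":
        return holds(expr[1], val) and holds(expr[2], val)
    return holds(expr[1], val) or holds(expr[2], val)

def pr(expr, p):
    # Pr(expr) under independent atomic events with probabilities p[e]
    events = sorted(p)
    total = 0.0
    for bits in product([False, True], repeat=len(events)):
        val = dict(zip(events, bits))
        if holds(expr, val):
            w = 1.0
            for e in events:
                w *= p[e] if val[e] else 1 - p[e]
            total += w
    return total

p = {"e1": 0.5, "e2": 0.3, "e3": 0.2}
expr = ("and", "e3", ("or", "e1", ("not", "e2")))     # e3 and (e1 or not e2)
# The prefix-code expressions are pairwise disjoint, e.g. E1 ∧ E2 has probability 0:
e1_and_E2 = pr(("and", "e1", ("and", ("not", "e1"), "e2")), p)
```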
Possible Worlds ⇒ Intensional DB
(example: worlds with probabilities p1, p2, p3, p4 are encoded as an intensional database J using the event expressions E1, …, E4)
Intensional DBs are complete
[Fuhr&Roellke:1997]
Closure Under Operators: π, σ, ×, −
One still needs to compute probability of event expression
Event expression for each tuple
Complete (in some sense) … but impractical
Important abstraction: consider restrictions
Related to c-tables
[Imielinski&Lipski:1984]
Explicit tuples
Implicit tuples
Explicit Tuples: independent tuples
Atomic, distinct. May use TIDs.
tuple = event
E[ size(Customer) ] = 1.6 tuples
Step 1: evaluate ~ predicates
SELECT DISTINCT x.city
FROM Person x, Purchase y
WHERE x.Name = y.Cust and y.Product = ‘Gadget’ and x.profession ~ ‘scientist’ and y.category ~ ‘music’
Step 1: evaluate ~ predicates
SELECT DISTINCT x.city
FROM Personp x, Purchasep y
WHERE x.Name = y.Cust and y.Product = ‘Gadget’ and x.profession ~ ‘scientist’ and y.category ~ ‘music’
Step 2: evaluate rest of query
Independent/disjoint tuples
Independent events: e1, e2, …, ei, …
Split ei into disjoint “shares”: ei = ei1 ∨ ei2 ∨ ei3 ∨ …
e34, e37 ⇒ disjoint events
e37, e57 ⇒ independent events
SELECT DISTINCT Product
FROM Customer
WHERE City = ‘Seattle’
Step 1: resolve violations
Name → City (violated)
SELECT DISTINCT Product
FROM Customerp
WHERE City = ‘Seattle’
Step 2:evaluate query
Step 1: resolve violations
E[ size(Customer) ] = 2 tuples
[Barbara et al.92, Lakshmanan et al.97,Ross et al.05;Widom05]
Inaccurate Attribute Values: inaccurate attributes
Disjoint and/or independent events
Independent or disjoint/independent tuples
In KR:
[Friedman,Getoor,Koller,Pfeffer:1999]
[Mendelzon&Mihaila:2001,Widom:2005,Miklau&S04,Dalvi et al.05]
Implicit Tuples: “There are other, unknown tuples out there”
Covers 10%
or
Completeness =10%
30 other tuples
[Miklau,Dalvi&S:2005, Dalvi&S:2005]
Implicit Tuples, statistics based:
Employee
Semantics 1: size(Employee) = C
C tuples (e.g. C = 30)
Semantics 2: E[size(Employee)] = C
We go with #2: the expected size is C
Implicit Possible Tuples: binomial distribution
Employee(name, dept, phone)
n1 = |D_name|, n2 = |D_dept|, n3 = |D_phone|
∀t. Pr(t) = C / (n1 n2 n3)
E[ Size(Employee) ] = C
[Miklau,Dalvi&S:2005]
Pr(name, dept, phone) = C / (n1 n2 n3)
Application 3: Information Leakage
S : Employee(“Mary”, −, 5551234)
Pr(S) ≈ C/(n1 n3)
V1 : Employee(“Mary”, “Sales”, −)
Pr(S | V1) ≈ 1/n3
Pr(S V1) ≈ C/(n1 n2 n3), Pr(V1) ≈ C/(n1 n2)
Practical secrecy
V2 : Employee(−, “Sales”, 5551234)
Pr(S | V1 V2) ≈ 1
Pr(S V1 V2) ≈ C/(n1 n2 n3), Pr(V1 V2) ≈ C/(n1 n2 n3)
Leakage: given by expected cardinality
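The leakage arithmetic above can be replayed numerically. The domain sizes n1, n2, n3 and the expected cardinality C below are made-up illustrative values, not from the source.

```python
# Hypothetical domain sizes |D_name|, |D_dept|, |D_phone| and expected size C
n1, n2, n3 = 10_000, 100, 1_000_000
C = 1_000

# a priori: S = Employee("Mary", -, 5551234)
pr_S = C / (n1 * n3)

# after V1 = Employee("Mary", "Sales", -): Pr(S | V1) = Pr(S V1) / Pr(V1) = 1/n3
pr_S_given_V1 = (C / (n1 * n2 * n3)) / (C / (n1 * n2))
# "practical secrecy": the posterior grew, but is still negligible

# after V1 and V2 = Employee(-, "Sales", 5551234): the two views pin S down
pr_S_given_V1V2 = (C / (n1 * n2 * n3)) / (C / (n1 * n2 * n3))   # = 1, total leakage
```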
May be used in conjunction with other formalisms
Conditional probabilities become important
[Domingos&Richardson:2004,Dalvi&S:2005]
Foundations
Needed for query processing
Probability of Boolean Expressions
E = X1X3 ∨ X1X4 ∨ X2X5 ∨ X2X6
Randomly make each variable true with the following probabilities
Pr(X1) = p1, Pr(X2) = p2, …, Pr(X6) = p6
What is Pr(E) ???
Answer: regroup cleverly
E = X1(X3 ∨ X4) ∨ X2(X5 ∨ X6)
Pr(E) = 1 − (1 − p1(1 − (1 − p3)(1 − p4))) × (1 − p2(1 − (1 − p5)(1 − p6)))
Now let’s try this:
E = X1X2 ∨ X1X3 ∨ X2X3
No clever grouping seems possible.
Brute force:
Pr(E) = (1 − p1)p2p3 + p1(1 − p2)p3 + p1p2(1 − p3) + p1p2p3
Seems inefficient in general…
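A brute-force check of this expansion, enumerating all 2^3 assignments (plain Python, for illustration):

```python
from itertools import product

def pr_E(p1, p2, p3):
    # Pr(X1X2 v X1X3 v X2X3) by enumerating all truth assignments
    total = 0.0
    for x1, x2, x3 in product([False, True], repeat=3):
        if (x1 and x2) or (x1 and x3) or (x2 and x3):
            total += ((p1 if x1 else 1 - p1) *
                      (p2 if x2 else 1 - p2) *
                      (p3 if x3 else 1 - p3))
    return total

def pr_E_sum(p1, p2, p3):
    # the four-term sum from the slide
    return (1-p1)*p2*p3 + p1*(1-p2)*p3 + p1*p2*(1-p3) + p1*p2*p3
```

Both agree, but the enumeration takes 2^n steps on n variables, which is the inefficiency the slide points at.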
[Valiant:1979]
Complexity of Boolean Expression Probability
Theorem [Valiant:1979]: For a boolean expression E, computing Pr(E) is #P-complete
NP = class of problems of the form “is there a witness?” (SAT)
#P = class of problems of the form “how many witnesses?” (#SAT)
The decision problem for 2CNF is in PTIME. The counting problem for 2CNF is #P-complete.
Data complexity of a query Q:
Simplest scenario only:
[Fuhr&Roellke:1997, Dalvi&S:2004]
Extensional Query Evaluation: relational ops compute probabilities
π (project): 1 − (1 − p1)(1 − p2)…; for disjoint tuples: p1 + p2 + …
σ (select): p
× (join): p1 × p2
− (difference): p1(1 − p2)
Data complexity: PTIME
(figure: query plans built from π and × operators)
[Dalvi&S:2004]
SELECT DISTINCT x.City
FROM Personp x, Purchasep y
WHERE x.Name = y.Cust and y.Product = ‘Gadget’
Wrong !
Correct
Depends on plan !!!
[Dalvi&S:2004]
Query Complexity
Sometimes no correct extensional plan exists
Data complexity is #P-complete
Qbad :- R(x), S(x,y), T(y)
Extensional query evaluation:
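A small sketch of the problem on a tiny made-up instance of Qbad (R and T uncertain, S certain): the exact possible-worlds probability and one extensional plan, join everything then project with independent-or, disagree.

```python
from itertools import product

pR = {"a": 0.5}
pS = {("a", "b"): 1.0, ("a", "c"): 1.0}   # certain tuples
pT = {"b": 0.5, "c": 0.5}

def exact_pr():
    # sum Pr(world) over worlds where some (x, y) in S has x in R and y in T
    rs, ts = sorted(pR), sorted(pT)
    total = 0.0
    for rbits in product([False, True], repeat=len(rs)):
        for tbits in product([False, True], repeat=len(ts)):
            R = {x for x, b in zip(rs, rbits) if b}
            T = {y for y, b in zip(ts, tbits) if b}
            w = 1.0
            for x, b in zip(rs, rbits):
                w *= pR[x] if b else 1 - pR[x]
            for y, b in zip(ts, tbits):
                w *= pT[y] if b else 1 - pT[y]
            if any(x in R and y in T for (x, y) in pS):
                total += w
    return total

def extensional_pr():
    # plan: join R, S, T (multiply), then project to the answer (independent-or)
    qs = [pR[x] * pS[(x, y)] * pT[y] for (x, y) in pS]
    miss = 1.0
    for q in qs:
        miss *= 1 - q
    return 1 - miss
```

The extensional plan treats the two join tuples as independent even though both contain the same R-tuple, so it overestimates the exact probability (0.4375 vs 0.375 here).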
General query complexity
[Lakshmanan et al.1997]
Probabilistic databases have high query complexity
Random graph Gp
(figure: a graph G over nodes 1, 2, 3, 4, …, n; in Gp, “edges hiding here”: every edge is present with probability p)
[Erdos&Renyi:1959, Fagin:1976, Spencer:2001]
Random Graphs
Relation: G(x,y); Domain: D = {1, 2, …, n}
pr(t1) = … = pr(tM) = p; Gp = tuple-independent
Boolean query Q: what is lim_{n→∞} Q(Gp)?
[Fagin:1976]
Fagin’s 0/1 Law: let the tuple probability be p = 1/2
Theorem [Fagin:1976, Glebskii et al. 1969]: For every sentence Q in First Order Logic, lim_{n→∞} Q(Gp) exists and is either 0 or 1
Examples
[Erdos&Renyi:1959]
Erdos and Renyi’s Random Graphs: now let p = p(n) be a function of n
Theorem [Erdos&Renyi:1959]: For any monotone Q, ∃ a threshold function t(n) s.t.: if p(n) ≪ t(n) then lim_{n→∞} Q(Gp) = 0; if p(n) ≫ t(n) then lim_{n→∞} Q(Gp) = 1
Remark: C(n) = E[ Size(G) ] ≈ n² p(n)
[Erdos&Renyi:1959; Spencer:2001]
The Evolution of Random Graphs: the tuple probability p(n) “grows” from 0 to 1, so the expected size C(n) “grows” from 0 to n². How does the random graph evolve?
[Spencer:2001]
On the k’th Day: 1/n^{1+1/(k−1)} ≪ p(n) ≪ 1/n^{1+1/k}
n^{1−1/(k−1)} ≪ C(n) ≪ n^{1−1/k}
0/1 Law holds
The graph is disconnected
[Spencer:2001]
On Day ω: 1/n^{1+ε} ≪ p(n) ≪ 1/n, ∀ε > 0
n^{1−ε} ≪ C(n) ≪ n, ∀ε > 0
0/1 Law holds
The graph is disconnected
[Spencer:2001]
Past the Double Jump (1/n): 1/n ≪ p(n) ≪ ln(n)/n
n ≪ C(n) ≪ n ln(n)
0/1 Law holds
The graph is disconnected
[Spencer:2001]
Past Connectivity: ln(n)/n ≪ p(n) ≪ 1/n^{1−ε}, ∀ε
n ln(n) ≪ C(n) ≪ n^{1+ε}, ∀ε
Strange logic of random graphs!!
0/1 Law holds
The graph is connected !
[Spencer:2001]
Fagin’s framework: a = 0, p(n) = O(1), C(n) = O(n²)
Big Graphs: p(n) = 1/n^a, a ∈ (0,1); C(n) = n^{2−a}
a is irrational ⇒ 0/1 Law holds
a is rational ⇒ 0/1 Law does not hold
Algorithms,Implementation Techniques
Top k answers
1. Simulation
Probabilistic query engine
2. Extensional joins
SQL Query
Probabilistic database
3. Indexes
[Karp,Luby&Madras:1989]
1. Monte Carlo Simulation, naïve:
E = X1X2 ∨ X1X3 ∨ X2X3
Cnt ← 0
repeat N times:
  randomly choose X1, X2, X3 ∈ {0,1}
  if E(X1, X2, X3) = 1 then Cnt = Cnt + 1
P = Cnt/N
return P /* ≈ Pr(E) */
May be very big
Zero-one estimator theorem
Theorem. If N ≥ (1/Pr(E)) × (4 ln(2/δ)/ε²) then:
Pr[ |P/Pr(E) − 1| > ε ] < δ
Works for any E
Not in PTIME
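The naive estimator, sketched in Python for E = X1X2 ∨ X1X3 ∨ X2X3 with all probabilities 1/2 (true answer 0.5); the seed is fixed only for reproducibility.

```python
import random

def naive_mc(p, N, seed=0):
    # Naive Monte Carlo: sample assignments, count how often E is true
    rng = random.Random(seed)
    cnt = 0
    for _ in range(N):
        x1, x2, x3 = (rng.random() < p for _ in range(3))
        if (x1 and x2) or (x1 and x3) or (x2 and x3):
            cnt += 1
    return cnt / N          # approximates Pr(E)

estimate = naive_mc(0.5, 100_000)
```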
[Karp,Luby&Madras:1989]
Monte Carlo Simulation, improved:
E = C1 ∨ C2 ∨ … ∨ Cm
Cnt ← 0; S ← Pr(C1) + … + Pr(Cm)
repeat N times:
  randomly choose i ∈ {1, 2, …, m}, with prob. Pr(Ci)/S
  randomly choose X1, …, Xn ∈ {0,1} s.t. Ci = 1
  if C1 = 0 and C2 = 0 and … and Ci−1 = 0 then Cnt = Cnt + 1
P = Cnt/N × S
return P /* ≈ Pr(E) */
Now it’s better:
Theorem. If N ≥ m × (4 ln(2/δ)/ε²) then:
Pr[ |P/Pr(E) − 1| > ε ] < δ
Only for E in DNF
In PTIME
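A sketch of this improved (Karp-Luby) estimator for a DNF, where each clause is represented as the set of variables it requires to be true; the clause list and probabilities are the running example's.

```python
import random

def karp_luby(clauses, p, N, seed=0):
    rng = random.Random(seed)
    weights = []
    for C in clauses:
        w = 1.0
        for v in C:
            w *= p[v]
        weights.append(w)           # Pr(Ci)
    S = sum(weights)
    cnt = 0
    for _ in range(N):
        # choose clause i with probability Pr(Ci)/S
        i = rng.choices(range(len(clauses)), weights=weights)[0]
        # sample an assignment conditioned on Ci = 1
        val = {v: v in clauses[i] or rng.random() < p[v] for v in sorted(p)}
        # count only "canonical" samples: no earlier clause already satisfied
        if not any(all(val[v] for v in clauses[j]) for j in range(i)):
            cnt += 1
    return cnt / N * S              # approximates Pr(E)

clauses = [{"x1", "x2"}, {"x1", "x3"}, {"x2", "x3"}]
p = {"x1": 0.5, "x2": 0.5, "x3": 0.5}
estimate = karp_luby(clauses, p, 100_000)
```

The "canonical sample" trick makes each satisfying assignment count exactly once, so the required N depends on m rather than on 1/Pr(E).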
Some form of simulation is needed in probabilistic databases, to cope with the #P-hardness bottleneck
Problem:
[Nepal&Ramakrishna:1999; Fagin,Lotem,Naor:2001;2003]
2. The Threshold Algorithm
SELECT *
FROM Rp, Sp, Tp
WHERE Rp.A = Sp.B and Sp.C = Tp.D
Have subplans for Rp, Sp, Tp returning tuples sorted by their probabilities x, y, z
Score combination: f(x, y, z) = x·y·z
How do we compute the top-k matching records?
Total score:
H32: f(x1, ?, ?)
H5555: f(x2, y1, z3)
H007: f(x3, ?, z2)
H999: f(?, y3, ?)
H???: f(?, ?, ?)
0 ≤ ? ≤ y3
[Fagin,Lotem,Naor:2001; 2003]
“No Random Access” (NRA)
Rp = (1 ≥ x1 ≥ x2 ≥ …)
Sp = (1 ≥ y1 ≥ y2 ≥ …)
Tp = (1 ≥ z1 ≥ z2 ≥ …)
[Fagin,Lotem,Naor:2001; 2003]
Termination condition:
Threshold score
H???:f(?, ?, ?)
k objects:
Guaranteed to be topk
The algorithm is “instance optimal”: the strongest form of optimality
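A compact sketch of the threshold idea, in the simpler random-access variant (TA) rather than the NRA variant the slides describe; the objects, per-attribute scores, and f(x, y, z) = x·y·z below are all made up.

```python
import heapq

# sorted per-attribute lists of (object, score), descending: hypothetical data
lists = {
    "x": [("H32", 0.95), ("H5555", 0.90), ("H007", 0.80), ("H999", 0.20)],
    "y": [("H5555", 0.90), ("H999", 0.85), ("H007", 0.80), ("H32", 0.30)],
    "z": [("H007", 0.90), ("H5555", 0.80), ("H32", 0.60), ("H999", 0.40)],
}

def topk(lists, k, f):
    score = {a: dict(L) for a, L in lists.items()}   # random access by object
    attrs = sorted(lists)
    seen, top = set(), []                            # top: min-heap of size <= k
    for depth in range(max(len(L) for L in lists.values())):
        last = {}
        for a in attrs:
            obj, s = lists[a][depth]
            last[a] = s
            if obj not in seen:
                seen.add(obj)
                total = f(*(score[b][obj] for b in attrs))
                heapq.heappush(top, (total, obj))
                if len(top) > k:
                    heapq.heappop(top)
        # threshold = best possible score of any object not yet seen
        threshold = f(*(last[a] for a in attrs))
        if len(top) == k and top[0][0] >= threshold:
            break                                    # guaranteed top-k
    return sorted(top, reverse=True)

result = topk(lists, 1, lambda x, y, z: x * y * z)
```

Because f is monotone, once the k-th best seen score meets the threshold, no unseen object can overtake it, so scanning can stop early.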
[Theobald,Weikum&Schenkel:2004]
[Michel, Triantafillou&Weikum:2005]
[Gravano et al.:2001]
Approximate String Joins
Problem:
SELECT *
FROM R, S
WHERE R.A ~ S.B
Simplification for this tutorial: A ~ B means “A and B have at least k q-grams in common”
[Gravano et al.:2001]
Definition of q-grams
John_Smith
String:
Set of 3-grams:
##J #Jo Joh ohn hn_ n_S _Sm Smi mit ith th# h##
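A sketch of q-gram extraction with q−1 pad characters on each side, plus the “at least k q-grams in common” predicate; the function names are made up for illustration.

```python
def qgrams(s, q=3, pad="#"):
    # pad with q-1 '#' on each side, then slide a window of width q
    padded = pad * (q - 1) + s + pad * (q - 1)
    return {padded[i:i + q] for i in range(len(padded) - q + 1)}

def similar(a, b, k, q=3):
    # A ~ B: the two strings share at least k q-grams
    return len(qgrams(a, q) & qgrams(b, q)) >= k

grams = qgrams("John_Smith")   # the 12 3-grams listed above
```

Close misspellings share most of their q-grams ("John_Smith" and "John_Smyth" share 10 of 12), while unrelated strings share almost none; that is what makes the count a useful similarity predicate.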
[Gravano et al.:2001]
SELECT *
FROM R, S
WHERE R.A ~ S.B
Naïve solution, using a UDF (user-defined function)
SELECT *
FROM R, S
WHERE common_grams(R.A, S.B) ≥ k
[Gravano et al.:2001]
SELECT *
FROM R, S
WHERE R.A ~ S.B
Solution using the Q-gram Index
SELECT R.*, S.*
FROM R, RAQ, S, SBQ
WHERE R.Key = RAQ.Key and S.Key = SBQ.Key and RAQ.G = SBQ.G
GROUP BY RAQ.Key, SBQ.Key
HAVING count(*) ≥ k
A wide range of disparate techniques
Needed: unified framework for efficient query evaluation in probabilistic databases
Imprecisions in data:
Probabilistic databases
The Goal:
The Challenge
[Domingos&Richardson:04, Sarma,Benjelloun,Halevy,Widom:2005]
Challenge 1: Specification Frameworks
Features to have:
Complexity
Exact algorithm: #P-complete even in simple cases
Challenge: characterize the complexity of approximation algorithms
Implementations:
Questions ?