Foundations of probabilistic answers to queries

Foundations of Probabilistic Answers to Queries

Dan Suciu and Nilesh Dalvi

University of Washington

Databases Today are Deterministic

  • An item either is in the database or is not

  • A tuple either is in the query answer or is not

  • This applies to all varieties of data models:

    • Relational, E/R, NF2, hierarchical, XML, …

What is a Probabilistic Database ?

  • “An item belongs to the database” is a probabilistic event

  • “A tuple is an answer to the query” is a probabilistic event

  • Can be extended to all data models; we discuss only probabilistic relational data

Two Types of Probabilistic Data

  • Database is deterministic; query answers are probabilistic

  • Database is probabilistic; query answers are probabilistic

Long History

Probabilistic relational databases have been studied from the late 80’s until today:

  • Cavallo&Pitarelli:1987

  • Barbara,Garcia-Molina, Porter:1992

  • Lakshmanan,Leone,Ross&Subrahmanian:1997

  • Fuhr&Roellke:1997

  • Dalvi&S:2004

  • Widom:2005

So, Why Now ?

Application pull:

  • The need to manage imprecisions in data

Technology push:

  • Advances in query processing techniques

The tutorial is built on these two themes

Application Pull

Need to manage imprecisions in data

  • Many types: non-matching data values, imprecise queries, inconsistent data, misaligned schemas, etc, etc

    The quest to manage imprecisions = major driving force in the database community

  • Ultimate cause for many research areas: data mining, semistructured data, schema matching, nearest neighbor


Theme 1:

A large class of imprecisions in data can be modeled with probabilities

Technology Push

Processing probabilistic data is fundamentally more complex than other data models

  • Some previous approaches sidestepped complexity

    There exists a rich collection of powerful, non-trivial techniques and results, some old, some very recent, that could lead to practical management techniques for probabilistic databases.


Theme 2:

Identify the source of complexity, present snapshots of non-trivial results, set an agenda for future research.

Some Notes on the Tutorial

There is a huge amount of related work:

probabilistic db, top-k answers, KR, probabilistic reasoning, random graphs, etc, etc.

We left out many references

All references used are available in a separate document

Tutorial available at:

Requires TexPoint to view

Overview

Part I: Applications: Managing Imprecisions

Part II: A Probabilistic Data Semantics

Part III: Representation Formalisms

Part IV: Theoretical Foundations

Part V: Algorithms, Implementation Techniques

Summary, Challenges, Conclusions



Part I

Applications: Managing Imprecisions

Outline

  • Ranking query answers

  • Record linkage

  • Quality in data integration

  • Inconsistent data

  • Information disclosure

1. Ranking Query Answers

Database is deterministic

The query returns a ranked list of tuples

  • User interested in top-k answers.


The Empty Answers Problem

[Agrawal,Chaudhuri,Das,Gionis 2003]

Query is overspecified: no answers

Example: try to buy a house in Seattle…

FROM Houses
WHERE bedrooms = 4 AND style = 'craftsman' AND district = 'View Ridge' AND price < 400000

… good luck !

Today users give up and move to Baltimore



Compute a similarity score between a tuple and the query

[Agrawal,Chaudhuri,Das,Gionis 2003]

Query is a vector: Q = (v1, …, vm)

Tuple is a vector: T = (u1, …, um)

Rank tuples by their TF/IDF similarity to the query Q

Includes partial matches
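A minimal sketch of this style of ranking. The encoding of tuples as bags of attribute=value "terms", the example values, and the smoothed IDF formula are illustrative assumptions, not the paper's exact definitions:

```python
import math

# Hypothetical tuples and query, encoded as sets of attribute=value terms.
tuples = [
    {"bedrooms=4", "style=craftsman", "district=ViewRidge"},
    {"bedrooms=3", "style=craftsman", "district=Ballard"},
    {"bedrooms=4", "style=modern", "district=ViewRidge"},
]
query = {"bedrooms=4", "style=craftsman"}

N = len(tuples)

def idf(term):
    # Smoothed inverse document frequency over the tuple collection.
    df = sum(term in t for t in tuples)
    return math.log((N + 1) / (df + 1)) + 1

def score(t):
    # Cosine similarity between IDF-weighted binary vectors;
    # tuples sharing only some query terms still get a partial score.
    num = sum(idf(w) ** 2 for w in query & t)
    den = math.sqrt(sum(idf(w) ** 2 for w in query)) * \
          math.sqrt(sum(idf(w) ** 2 for w in t))
    return num / den if den else 0.0

ranked = sorted(range(N), key=lambda i: -score(tuples[i]))
```

The tuple matching both query terms ranks first; partial matches follow with lower scores instead of being dropped.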



Similarity Predicates in SQL

Beyond a single table: “Find the good deals in a neighborhood !”


FROM Houses x
WHERE x.bedrooms ~ 4 AND x.style ~ 'craftsman' AND x.price ~ 600k
  AND NOT EXISTS (SELECT *
    FROM Houses y
    WHERE x.district = y.district AND x.ID != y.ID
      AND y.bedrooms ~ 4 AND y.style ~ 'craftsman' AND y.price ~ 600k)

Users specify similarity predicates with ~

System combines atomic similarities using probabilities

Types of Similarity Predicates

  • String edit distances:

    • Levenshtein distance, q-gram distances

  • TF/IDF scores

  • Ontology distance / semantic similarity:

    • Wordnet

  • Phonetic similarity:
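The string edit distance named above can be sketched with the classic dynamic program (a generic illustration, not code from the tutorial):

```python
def levenshtein(a: str, b: str) -> int:
    # prev[j] holds the edit distance between a[:i-1] and b[:j];
    # we sweep row by row so only two rows are kept in memory.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]
```

For example, "Levenstein" and "Levenshtein" are at distance 1 (one inserted character), which is exactly the kind of near-match these predicates are meant to catch.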




[Hristidis&Papakonstantinou’2002,Bhalotia et al.2002]

Keyword Searches in Databases


  • Users want to search via keywords

  • Do not know the schema


  • Matching objects may be scattered across physical tables due to normalization; need on-the-fly joins

  • Score of a tuple = number of joins, plus “prestige” based on indegree

Join sequences (tuple trees):

Q = ‘Abiteboul’ and ‘Widom’









More Ranking: User Preferences

Applications: personalized search engines, shopping agents, logical user profiles, “soft catalogs”

Two approaches:

  • Qualitative ⇒ Pareto semantics (deterministic)

  • Quantitative ⇒ alter the query ranking

Summary on Ranking Query Answers

Types of imprecision addressed:

Data is precise, query answers are imprecise:

  • User has limited understanding of the data

  • User has limited understanding of the schema

  • User has personal preferences

    Probabilistic approach would…

  • Principled semantics for complex queries

  • Integrate well with other types of imprecision


[Cohen: Tutorial]

2. Record Linkage

Determine if two data records describe the same object


  • Join/merge two relations

  • Remove duplicates from a single relation

  • Validate incoming tuples against a reference


[Cohen: Tutorial; Fellegi&Sunter:1969]

Fellegi-Sunter Model

A probabilistic model/framework

  • Given two sets of records A, B:

    A = {a1, a2, a3, a4, a5, a6}
    B = {b1, b2, b3, b4, b5}

    Goal: partition A × B into:

  • Match

  • Uncertain

  • Non-match
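A hedged sketch of the Fellegi-Sunter partition: record pairs are scored by a sum of per-field log likelihood ratios and split at two thresholds. The field names, the m/u probabilities, and the thresholds below are made-up illustrations (in practice they are estimated, e.g. via EM):

```python
import math

# m[f]: Pr(field f agrees | pair is a true match)
# u[f]: Pr(field f agrees | pair is a non-match)  -- hypothetical values
m = {"name": 0.95, "dob": 0.90}
u = {"name": 0.10, "dob": 0.05}

def weight(agreements):
    # Sum of log likelihood ratios over the compared fields.
    w = 0.0
    for f, agree in agreements.items():
        w += math.log(m[f] / u[f]) if agree \
             else math.log((1 - m[f]) / (1 - u[f]))
    return w

def classify(agreements, lo=-1.0, hi=3.0):
    # Two thresholds carve A x B into the three Fellegi-Sunter regions.
    w = weight(agreements)
    return "match" if w >= hi else "non-match" if w <= lo else "uncertain"
```

Pairs agreeing on both fields land in Match, pairs disagreeing on both in Non-match, and mixed evidence falls into the Uncertain region that is handed to a human reviewer.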


[Cohen: Tutorial]

Non-Fellegi Sunter Approaches

Deterministic linkage

  • Normalize records, then test equality

    • E.g. for addresses

    • Very fast when it works

  • Hand-coded rules for an “acceptable match”

    • E.g. “same SSN”; or “same last name AND same DOB”

    • Difficult to tune

Application: Data Cleaning, ETL

  • Merge/purge for large databases, by sorting and clustering

  • Use of dimensional hierarchies in data warehouses and exploit co-occurrences

  • Novel similarity functions that are amenable to indexing

  • Declarative language to combine cleaning tasks




[Galhardas et al.:2001]



Application: Data Integration


  • All attributes in all tables are of type text

  • Datalog queries with two kinds of predicates:

    • Relational predicates

    • Similarity predicates X ~ Y

Matches two sets on the fly, but not really a “record linkage” application.

WHIRL




Example 1:

Q1(*) :- P(Company1,Industry1), Q(Company2,Website), R(Industry2, Analysis), Company1 ~ Company2, Industry1 ~ Industry2

Score of an answer tuple = product of similarities

WHIRL



Example 2 (with projection):

Q2(Website) :- P(Company1,Industry1), Q(Company2,Website), R(Industry2, Analysis), Company1 ~ Company2, Industry1 ~ Industry2

Support(t) = set of tuples supporting the answer t

score(t) = 1 − ∏_{s ∈ Support(t)} (1 − score(s))

Depends on query plan !!
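The score combination above is a noisy-or over the supporting tuples; a direct transcription:

```python
def combined_score(support_scores):
    # score(t) = 1 - prod over s in Support(t) of (1 - score(s)):
    # the answer is certain as soon as any one supporting tuple is certain.
    p = 1.0
    for s in support_scores:
        p *= 1.0 - s
    return 1.0 - p
```

Two supports of 0.5 each combine to 0.75, and a single certain support (score 1.0) forces the combined score to 1.0, which is why the result depends on which tuples a given query plan groups together.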

Summary on Record Linkage

Types of imprecision addressed:

Same entity represented in different ways

  • Misspellings, lack of canonical representation, etc.

    A probability model would…

  • Allow system to use the match probabilities: cheaper, on-the-fly

  • But need to model complex probabilistic correlations: is one set a reference set ? how many duplicates are expected ?



3. Quality in Data Integration

Use of probabilistic information to reason about soundness, completeness, and overlap of sources


  • Order access to information sources

  • Compute confidence scores for the answers


Global Historical Climatology Network [Florescu,Koller,Levy97; Chang,GarciaMolina00; Mendelzon,Mihaila01]

Integrates climatic data from:

6000 temperature stations

7500 precipitation stations

2000 pressure stations

Starting with 1697 (!!)


Soundness of a data source: what fraction of its items are correct

Completeness of a data source: what fraction of all items it actually contains


Local as view: [Florescu,Koller,Levy97; Chang,GarciaMolina00; Mendelzon,Mihaila01]

Global schema: Temperature( . . . ), Station( . . . )

V1(s, lat, lon, c) ← Station(s, lat, lon, c)

V2(s, y, m, v) ← Temperature(s, y, m, v), Station(s, lat, lon, “Canada”), y ≥ 1900

V3(s, y, m, v) ← Temperature(s, y, m, v), Station(s, lat, lon, “US”), y ≥ 1800

. . .


Next, declare soundness and completeness [Florescu,Koller,Levy97; Chang,GarciaMolina00; Mendelzon,Mihaila01]

V2(s, y, m, v) ← Temperature(s, y, m, v), Station(s, lat, lon, “Canada”), y ≥ 1900

Soundness(V2) ≥ 0.7
Completeness(V2) ≥ 0.4




Goal 1: completeness → order source accesses

Goal 2: soundness → query confidence

Q(y, v) :- Temperature(s, y, m, v), Station(s, lat, lon, “US”), y ≥ 1950, y ≤ 1955, lat ≥ 48, lat ≤ 49

. . .

Summary: Quality in Data Integration [Florescu,Koller,Levy97; Chang,GarciaMolina00; Mendelzon,Mihaila01]

Types of imprecision addressed

Overlapping, inconsistent, incomplete data sources

  • Data is probabilistic

  • Query answers are probabilistic

    They already use a probabilistic model

  • Needed: complex probabilistic spaces. E.g. a tuple t in V1 has 60% probability of also being in V2

  • Query processing still in infancy



4. Inconsistent Data

Goal: consistent query answers from inconsistent databases


  • Integration of autonomous data sources

  • Un-enforced integrity constraints

  • Temporary inconsistencies



The Repair Semantics

Consider all “repairs”


Find people in State=WA ⇒ Dalvi

Find people in State=MA ⇒ ∅

High precision, but low recall

Alternative Probabilistic Semantics

State=WA ⇒ Dalvi, Balazinska(0.5), Miklau(0.5)

State=MA ⇒ Balazinska(0.5), Miklau(0.5)

Lower precision, but better recall

Summary: Inconsistent Data

Types of imprecision addressed:

  • Data from different sources is contradictory

  • Data is uncertain, hence, arguably, probabilistic

  • Query answers are probabilistic

    A probabilistic approach would…

  • Give better recall !

  • Needs to support disjoint tuple events

5. Information Disclosure


  • Disclose some information (V) while protecting private or sensitive data S


  • Privacy preserving data mining

  • Data exchange

  • K-anonymous data

V=anonymized transactions

V=standard view(s)

V=k-anonymous table

S = some atomic fact that is private


[Evfimievski,Gehrke,Srikant:03; Miklau&S:04;Miklau,Dalvi&S:05]


Pr(S) = a priori probability of S

Pr(S | V) = a posteriori probability of S


[Evfimievski,Gehrke,Srikant:03; Miklau&S:04;Miklau,Dalvi&S:05]

Information Disclosure

  • A (r1, r2) privacy breach (r1 < r2): Pr(S) ≤ r1 and Pr(S | V) ≥ r2

  • Perfect security: Pr(S) = Pr(S | V)

  • Practical security: lim_{domain size → ∞} Pr(S | V) = 0 (database size remains fixed)

Summary: Information Disclosure

Is this a type of imprecision in data ?

  • Yes: it’s the adversary’s uncertainty about the private data.

  • The only type of imprecision that is good


  • Probabilistic methods: long history [Shannon’49]

  • Definitely need conditional probabilities

Summary: Information Disclosure

Important fundamental duality:

  • Query answering: want Probability ≈ 1

  • Information disclosure: want Probability ≈ 0

They share the same fundamental concepts and techniques

Summary: Information Disclosure

What is required from the probabilistic model

  • Don’t know the possible instances

  • Express the adversary’s knowledge:

    • Cardinalities: e.g. Size(Employee) ≈ 1000

    • Correlations between values

  • Compute conditional probabilities


6. Other Applications


  • Data lineage + accuracy: Trio

  • Sensor data

  • Personal information management

  • Using statistics to answer queries

[Deshpande, Guestrin,Madden:2004]

Semex [Dong&Halevy:2005, Dong,Halevy,Madhavan:2005], Haystack [Karger et al. 2003], Magnet [Sinha&Karger:2005]


Summary on Part I: Applications

Common in these applications:

  • Data in database and/or in query answer is uncertain, ranked; sometimes probabilistic

    Need for common probabilistic model:

  • Main benefit: uniform approach to imprecision

  • Other benefits:

    • Handle complex queries (instead of single table TF/IDF)

    • Cheaper solutions (on-the-fly record linkage)

    • Better recall (constraint violations)


Part II

A Probabilistic Data Semantics

Outline

  • The possible worlds model

  • Query semantics

Possible Worlds Semantics

Attribute domains: int, char(30), varchar(55), datetime

# values: 2^32, 2^120, 2^440, 2^64

Relational schema:

Employee(name:varchar(55), dob:datetime, salary:int)

# of tuples: 2^440 × 2^64 × 2^32

# of instances: 2^(2^440 × 2^64 × 2^32)

Database schema:

Employee(. . .), Projects( . . . ), Groups( . . .), WorksFor( . . .)

# of instances: N (= BIG but finite)


The Definition

The set of all possible database instances:

INST = {I1, I2, I3, . . ., IN}

Definition A probabilistic database Ip is a probability distribution on INST:

Pr : INST → [0,1]  s.t.  Σ_{i=1,N} Pr(Ii) = 1

(will use Pr or Ip interchangeably)

Definition A possible world is I s.t. Pr(I) > 0

Example

Ip =

Pr(I1) = 1/3, Pr(I2) = 1/12, Pr(I3) = 1/2, Pr(I4) = 1/12

Possible worlds = {I1, I2, I3, I4}

Tuples as Events

One tuple t ⇒ event t ∈ I

Pr(t) = Σ_{I: t ∈ I} Pr(I)

Two tuples t1, t2 ⇒ event t1 ∈ I ∧ t2 ∈ I

Pr(t1 t2) = Σ_{I: t1 ∈ I ∧ t2 ∈ I} Pr(I)
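The two sums above can be computed directly from an explicit list of possible worlds. The world probabilities below are the ones from the running example; which tuples each world contains is an assumption, since the original tables were images:

```python
# Possible worlds as sets of tuple ids, with the example's probabilities.
worlds = {
    frozenset({"t1", "t2"}): 1 / 3,
    frozenset({"t1"}): 1 / 12,
    frozenset({"t2"}): 1 / 2,
    frozenset(): 1 / 12,
}

def pr_tuple(t):
    # Marginal: Pr(t) = sum of Pr(I) over worlds I containing t.
    return sum(q for w, q in worlds.items() if t in w)

def pr_joint(t1, t2):
    # Joint: Pr(t1 t2) = sum of Pr(I) over worlds containing both tuples.
    return sum(q for w, q in worlds.items() if t1 in w and t2 in w)
```

With these worlds, Pr(t1) = 1/3 + 1/12 = 5/12 and Pr(t1 t2) = 1/3.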

Tuple Correlation

Disjoint: Pr(t1 t2) = 0

Negatively correlated: Pr(t1 t2) < Pr(t1) Pr(t2)

Independent: Pr(t1 t2) = Pr(t1) Pr(t2)

Positively correlated: Pr(t1 t2) > Pr(t1) Pr(t2)

Identical: Pr(t1 t2) = Pr(t1) = Pr(t2)

Example

Ip =

Pr(I1) = 1/3, Pr(I2) = 1/12, Pr(I3) = 1/2, Pr(I4) = 1/12

Query Semantics

Given a query Q and a probabilistic database Ip, what is the meaning of Q(Ip) ?

Query Semantics

Semantics 1: Possible Answers

A probability distribution on sets of tuples

∀A. Pr(Q = A) = Σ_{I ∈ INST: Q(I) = A} Pr(I)

Semantics 2: Possible Tuples

A probability function on tuples

∀t. Pr(t ∈ Q) = Σ_{I ∈ INST: t ∈ Q(I)} Pr(I)

Example: Query Semantics



FROM Purchasep x, Purchasep y
WHERE x.customer = 'John' and x.product = y.product and y.customer = 'Sue'

Pr(I1) = 1/3, Pr(I2) = 1/12, Pr(I3) = 1/2, Pr(I4) = 1/12

Possible answers semantics:

Possible tuples semantics:


Special Case

Tuple-independent probabilistic database

TUP = {t1, t2, …, tM} = all tuples

INST = P(TUP),  N = 2^M

pr : TUP → [0,1]  (no restrictions)

Pr(I) = ∏_{t ∈ I} pr(t) × ∏_{t ∉ I} (1 − pr(t))
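The product formula can be checked by enumerating all 2^M worlds. The three tuple probabilities below are hypothetical, chosen to sum to 2.3 so the expected instance size matches the 2.3 tuples quoted on the next slide:

```python
from itertools import combinations

# Hypothetical per-tuple probabilities for a tuple-independent database.
pr = {"t1": 0.9, "t2": 0.8, "t3": 0.6}

def pr_world(world):
    # Pr(I) = prod over t in I of pr(t)  x  prod over t not in I of (1 - pr(t))
    out = 1.0
    for t, q in pr.items():
        out *= q if t in world else 1.0 - q
    return out

# All subsets of TUP are possible worlds.
worlds = [frozenset(c) for k in range(len(pr) + 1)
          for c in combinations(pr, k)]
total = sum(pr_world(w) for w in worlds)                    # sums to 1
expected_size = sum(len(w) * pr_world(w) for w in worlds)   # = 0.9+0.8+0.6
```

The world probabilities sum to 1, and E[size(Ip)] equals the sum of the tuple probabilities, here 2.3.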


Tuple Prob. ⇒ Possible Worlds

J =

Ip =

E[ size(Ip) ] = 2.3 tuples

Σ = 1


Tuple Prob. ⇒ Query Evaluation

FROM Person x, Purchase y
WHERE x.Name = y.Customer and y.Product = ‘Gadget’

p1(…), p2(…), p3 q7, …  combined as  1 − (1 − …) × (1 − …)

Summary of Part II

Possible Worlds Semantics

  • Very powerful model: any tuple correlations

  • Needs separate representation formalism

Summary of Part II

Query semantics

  • Very powerful: every SQL query has a semantics

  • Very intuitive: from standard semantics

  • Two variations, both appear in the literature

Summary of Part II

Possible answers semantics

  • Precise

  • Can be used to compose queries

  • Difficult user interface

    Possible tuples semantics

  • Less precise, but simple; sufficient for most apps

  • Cannot be used to compose queries

  • Simple user interface

After the Break

Part III: Representation Formalisms

Part IV: Foundations

Part V: Algorithms, implementation techniques

Conclusions and Challenges


Part III

Representation Formalisms

Representation Formalisms

Problem: need a good representation formalism

  • Will be interpreted as possible worlds

  • Several formalisms exist, but no winner

Main open problem in probabilistic db

Evaluation of Formalisms

  • What possible worlds can it represent ?

  • What probability distributions on worlds ?

  • Is it closed under query application ?

Outline

A complete formalism:

  • Intensional Databases

    Incomplete formalisms:

  • Various expressibility/complexity tradeoffs


[Fuhr&Roellke:1997]

Intensional Database

Atomic event ids: e1, e2, e3, …

with probabilities p1, p2, p3, … ∈ [0,1]

Event expressions built with ∧, ∨, ¬, e.g. e3 ∧ (e5 ∨ ¬e2)

Intensional probabilistic database J: each tuple t has an event attribute t.E


Intensional DB ⇒ Possible Worlds

J =

e1 e2 e3 = …

World probabilities such as p1(1−p2)p3 and (1−p1)p2 p3


Possible Worlds ⇒ Intensional DB

E1 = e1
E2 = ¬e1 ∧ e2
E3 = ¬e1 ∧ ¬e2 ∧ e3
E4 = ¬e1 ∧ ¬e2 ∧ ¬e3 ∧ e4

Pr(e1) = p1
Pr(e2) = p2 / (1 − p1)
Pr(e3) = p3 / (1 − p1 − p2)
Pr(e4) = p4 / (1 − p1 − p2 − p3)

A “prefix code”

J =

Intensional DBs are complete
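The prefix-code construction can be verified by brute force: with the conditional probabilities above, each event expression E_i gets exactly the target world probability p_i. The four world probabilities below are hypothetical (chosen as exact binary fractions):

```python
from itertools import product

# Hypothetical world probabilities p1..p4, summing to 1.
p = [0.5, 0.25, 0.125, 0.125]

# Conditional probabilities of the independent atomic events, per the slide.
q = [p[0],
     p[1] / (1 - p[0]),
     p[2] / (1 - p[0] - p[1]),
     p[3] / (1 - p[0] - p[1] - p[2])]

def holds(i, e):
    # E_i = (not e1) and ... and (not e_{i-1}) and e_i
    return e[i] and not any(e[:i])

marg = [0.0] * 4
for e in product([False, True], repeat=4):
    w = 1.0  # probability of this assignment to the independent events
    for ej, qj in zip(e, q):
        w *= qj if ej else 1.0 - qj
    for i in range(4):
        if holds(i, e):
            marg[i] += w
```

Each marginal marg[i] comes out equal to p[i], confirming that independent atomic events plus prefix-code expressions can represent an arbitrary distribution over these worlds.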


[Fuhr&Roellke:1997]

Closure Under Operators





One still needs to compute probability of event expression

Summary on Intensional Databases

Event expression for each tuple

  • Possible worlds: any subset

  • Probability distribution: any

    Complete (in some sense) … but impractical

    Important abstraction: consider restrictions

    Related to c-tables


Restricted Formalisms

Explicit tuples

  • Have a tuple template for every tuple that may appear in a possible world

    Implicit tuples

  • Specify tuples indirectly, e.g. by indicating how many there are


Explicit Tuples

Independent tuples

Atomic, distinct. May use TIDs.

tuple = event

E[ size(Customer) ] = 1.6 tuples

Application 1: Similarity Predicates

Step 1:evaluate ~ predicates


FROM Person x, Purchase y

WHERE x.Name = y.Cust and y.Product = ‘Gadget’ and x.profession ~ ‘scientist’ and y.category ~ ‘music’

Application 1: Similarity Predicates

Step 1:evaluate ~ predicates


FROM Personp x, Purchasep y

WHERE x.Name = y.Cust and y.Product = ‘Gadget’ and x.profession ~ ‘scientist’ and y.category ~ ‘music’

Step 2:evaluate restof query

Explicit Tuples

Independent/disjoint tuples

Independent events: e1, e2, …, ei, …

Split ei into disjoint “shares”: ei = ei1 ∨ ei2 ∨ ei3 ∨ …

e34, e37 ⇒ disjoint events

e37, e57 ⇒ independent events

Application 2: Inconsistent Data

SELECT DISTINCT Product
FROM Customer
WHERE City = ‘Seattle’

Step 1: resolve violations

Name → City (violated)

Application 2: Inconsistent Data

SELECT DISTINCT Product
FROM Customerp
WHERE City = ‘Seattle’

Step 1: resolve violations

Step 2: evaluate query

E[ size(Customer) ] = 2 tuples


[Barbara et al.92, Lakshmanan et al.97,Ross et al.05;Widom05]

Inaccurate Attribute Values

Inaccurate attributes

Disjoint and/or independentevents

Summary on Explicit Tuples

Independent or disjoint/independent tuples

  • Possible worlds: subsets

  • Probability distribution: restricted

  • Closure: no

    In KR:

  • Bayesian networks: disjoint tuples

  • Probabilistic relational models: correlated tuples



[Mendelzon&Mihaila:2001,Widom:2005,Miklau&S04,Dalvi et al.05]

Implicit Tuples

“There are other, unknown tuples out there”

Covers 10%


Completeness =10%

30 other tuples


[Miklau,Dalvi&S:2005,Dalvi&S:2005]

Implicit Tuples

Statistics based:

Semantics 1: size(Employee) = C    (e.g. C = 30 tuples)

Semantics 2: E[size(Employee)] = C

We go with #2: the expected size is C


[Miklau,Dalvi&S:2005,Dalvi&S:2005]

Implicit Possible Tuples

Binomial distribution

Employee(name, dept, phone)

n1 = |D_name|, n2 = |D_dept|, n3 = |D_phone|

∀t. Pr(t) = C / (n1 n2 n3)

E[ Size(Employee) ] = C


[Miklau,Dalvi&S:2005]

Application 3: Information Leakage

Pr(name, dept, phone) = C / (n1 n2 n3)

S :- Employee(“Mary”, -, 5551234)

Pr(S) ≈ C / (n1 n3)

V1 :- Employee(“Mary”, “Sales”, -)

Pr(S V1) ≈ C / (n1 n2 n3),  Pr(V1) ≈ C / (n1 n2)

Pr(S | V1) ≈ 1 / n3

Practical secrecy

V2 :- Employee(-, “Sales”, 5551234)

Pr(S V1 V2) ≈ C / (n1 n2 n3),  Pr(V1 V2) ≈ C / (n1 n2 n3)

Pr(S | V1 V2) ≈ 1


Summary on Implicit Tuples

Given by expected cardinality

  • Possible worlds: any

  • Probability distribution: binomial

    May be used in conjunction with other formalisms

  • Entropy maximization

    Conditional probabilities become important


Summary on Part III: Representation Formalisms

  • Intensional databases:

    • Complete (in some sense)

    • Impractical, but…

    • …important practical restrictions

  • Incomplete formalisms:

    • Explicit tuples

    • Implicit tuples

  • We have not discussed query processing yet


Part IV

Theoretical Foundations


Outline

  • Probability of boolean expressions

  • Query probability

  • Random graphs


Needed for query processing

Probability of Boolean Expressions

E = X1X3 ∨ X1X4 ∨ X2X5 ∨ X2X6

Randomly make each variable true with the following probabilities:

Pr(X1) = p1, Pr(X2) = p2, . . . , Pr(X6) = p6

What is Pr(E) ?

Answer: re-group cleverly

E = X1 (X3 ∨ X4) ∨ X2 (X5 ∨ X6)

Pr(E) = 1 − (1 − p1(1 − (1−p3)(1−p4))) × (1 − p2(1 − (1−p5)(1−p6)))
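The re-grouping can be checked against brute-force enumeration over all 2^6 assignments; the six probabilities below are hypothetical:

```python
from itertools import product

# Hypothetical probabilities p1..p6.
p = [0.5, 0.4, 0.3, 0.2, 0.6, 0.7]

def E(x):
    x1, x2, x3, x4, x5, x6 = x
    return (x1 and x3) or (x1 and x4) or (x2 and x5) or (x2 and x6)

def pr_assignment(x):
    out = 1.0
    for xi, pi in zip(x, p):
        out *= pi if xi else 1 - pi
    return out

# Brute force: sum the probabilities of all satisfying assignments.
brute = sum(pr_assignment(x) for x in product([0, 1], repeat=6) if E(x))

# Clever re-grouping: E = X1(X3 v X4) v X2(X5 v X6), two independent parts.
p1, p2, p3, p4, p5, p6 = p
factored = 1 - (1 - p1 * (1 - (1 - p3) * (1 - p4))) \
             * (1 - p2 * (1 - (1 - p5) * (1 - p6)))
```

Both computations agree; the factored form needs O(1) arithmetic instead of 2^6 terms, which is exactly what the grouping buys.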


Now let’s try this:

E = X1X2 ∨ X1X3 ∨ X2X3

No clever grouping seems possible.

Brute force:

Pr(E) = (1−p1)p2p3 + p1(1−p2)p3 + p1p2(1−p3) + p1p2p3

Seems inefficient in general…


[Valiant:1979]

Complexity of Boolean Expression Probability

Theorem [Valiant:1979]For a boolean expression E, computing Pr(E) is #P-complete

NP = class of problems of the form “is there a witness ?”, e.g. SAT

#P = class of problems of the form “how many witnesses ?”, e.g. #SAT

The decision problem for 2CNF is in PTIME. The counting problem for 2CNF is #P-complete.

Summary on Boolean Expression Probability

  • #P-complete

  • It’s hard even in simple cases: 2DNF

  • Can do Monte Carlo simulation (later)

Query Complexity

Data complexity of a query Q:

  • Compute Q(Ip), for probabilistic database Ip

    Simplest scenario only:

  • Possible tuples semantics for Q

  • Independent tuples for Ip


[Fuhr&Roellke:1997,Dalvi&S:2004]

Extensional Query Evaluation

Relational ops compute probabilities, e.g. join: p1 × p2; independent or: 1 − (1 − p1)(1 − p2); disjoint or: p1 + p2 + …

Data complexity: PTIME


FROM Personp x, Purchasep y
WHERE x.Name = y.Cust and y.Product = ‘Gadget’

Wrong !

Depends on plan !!!


[Dalvi&S:2004]

Query Complexity

Sometimes there is no correct extensional plan

Qbad :- R(x), S(x,y), T(y)

Its data complexity is #P-complete

Theorem The following are equivalent:

  • Q has PTIME data complexity

  • Q admits an extensional plan (and one finds it in PTIME)

  • Q does not have Qbad as a subquery

Summary on Query Complexity

Extensional query evaluation:

  • Very popular

    • generalized to “strategies”

  • However, result depends on query plan !

    General query complexity

  • #P complete (not surprising, given #SAT)

  • Already #P hard for very simple query (Qbad)

[Lakshmanan et al.1997]

Probabilistic databases have high query complexity


Random Graphs

Random graph Gp

Graph G: nodes D = {1, 2, …, n}, edges hiding here

pr(t1) = … = pr(tM) = p

Gp = tuple-independent

Boolean query Q: what is lim_{n→∞} Q(Gp) ?


[Fagin:1976]

Fagin’s 0/1 Law

Let the tuple probability be p = 1/2

Theorem [Fagin:1976, Glebskii et al.1969]

For every sentence Q in First Order Logic, lim_{n→∞} Q(Gp) exists and is either 0 or 1



[Erdos&Renyi:1959]

Erdos and Renyi’s Random Graphs

Now let p = p(n) be a function of n

Theorem [Erdos&Renyi:1959]

For any monotone Q, ∃ a threshold function t(n) s.t.: if p(n) ≪ t(n) then lim_{n→∞} Q(Gp) = 0; if p(n) ≫ t(n) then lim_{n→∞} Q(Gp) = 1


The Evolution of Random Graphs

[Erdos&Renyi:1959; Spencer:2001]

The tuple probability p(n) “grows” from 0 to 1. How does the random graph evolve ?

Remark: C(n) = E[ Size(G) ] ≈ n² p(n); the expected size C(n) “grows” from 0 to n².


[Spencer:2001]

The Void

p(n) ≪ 1/n²

C(n) ≪ 1

The graph is empty

0/1 Law holds


[Spencer:2001]

On the k’th Day

1/n^(1+1/(k−1)) ≪ p(n) ≪ 1/n^(1+1/k)

n^(1−1/(k−1)) ≪ C(n) ≪ n^(1−1/k)

0/1 Law holds

The graph is disconnected


[Spencer:2001]

On Day ω

1/n^(1+ε) ≪ p(n) ≪ 1/n, ∀ε > 0

n^(1−ε) ≪ C(n) ≪ n, ∀ε > 0

0/1 Law holds

The graph is disconnected


[Spencer:2001]

Past the Double Jump (1/n)

1/n ≪ p(n) ≪ ln(n)/n

n ≪ C(n) ≪ n ln(n)

0/1 Law holds

The graph is disconnected


[Spencer:2001]

Past Connectivity

ln(n)/n ≪ p(n) ≪ 1/n^(1−ε), ∀ε

n ln(n) ≪ C(n) ≪ n^(1+ε), ∀ε

Strange logic of random graphs !!

0/1 Law holds

The graph is connected !


Big Graphs

p(n) = 1/n^a, a ∈ (0,1)

C(n) = n^(2−a), a ∈ (0,1)

a irrational ⇒ 0/1 Law holds

a rational ⇒ 0/1 Law does not hold

Fagin’s framework: a = 0, p(n) = O(1), C(n) = O(n²), 0/1 Law holds

Summary on Random Graphs

  • Very rich field

    • Over 700 references in [Bollobas:2001]

  • Fascinating theory

    • Evening reading: the evolution of random graphs (e.g. from [Spencer:2001])

Summary on Random Graphs

  • Fagin’s 0/1 Law: impractical probabilistic model

  • More recent 0/1 laws for p = 1/na [Spencer&Shelah, Lynch]

  • In practice: need precise formulas for Pr(Q(Ip))

    • Preliminary work [Dalvi,Miklau&S:04,Dalvi&S:05]


Part V

Algorithms, Implementation Techniques

Query Processing on a Probabilistic Database

SQL Query → Probabilistic Query Engine → Top-k answers

1. Simulation

2. Extensional joins

3. Indexes


[Karp,Luby&Madras:1989]

1. Monte Carlo Simulation

E = X1X2 ∨ X1X3 ∨ X2X3

Cnt ← 0
repeat N times
  randomly choose X1, X2, X3 ∈ {0,1}
  if E(X1, X2, X3) = 1 then Cnt = Cnt + 1
P = Cnt/N
return P /* ≈ Pr(E) */

Theorem. If N ≥ (1/Pr(E)) × (4 ln(2/δ)/ε²) then: Pr[ |P/Pr(E) − 1| > ε ] < δ

Works for any E, but N may be very big: not in PTIME
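The naive estimator above, run on the slide's expression with hypothetical probabilities Pr(Xi) = 1/2 (for which the exact answer is 0.5, the majority of three fair coins):

```python
import random

random.seed(0)

p = [0.5, 0.5, 0.5]  # hypothetical Pr(X1), Pr(X2), Pr(X3)

def E(x1, x2, x3):
    # E = X1X2 v X1X3 v X2X3
    return (x1 and x2) or (x1 and x3) or (x2 and x3)

def naive_mc(N=100_000):
    # Sample N random assignments and count how often E is true.
    cnt = 0
    for _ in range(N):
        x1, x2, x3 = (random.random() < pi for pi in p)
        if E(x1, x2, x3):
            cnt += 1
    return cnt / N

est = naive_mc()
```

When Pr(E) is tiny the relative-error guarantee forces N to grow like 1/Pr(E), which is the weakness the next slide's improved sampler removes.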


[Karp,Luby&Madras:1989]

Monte Carlo Simulation

Only for E in DNF: E = C1 ∨ C2 ∨ . . . ∨ Cm

Cnt ← 0; S ← Pr(C1) + … + Pr(Cm)
repeat N times
  randomly choose i ∈ {1, 2, …, m} with prob. Pr(Ci)/S
  randomly choose X1, …, Xn ∈ {0,1} s.t. Ci = 1
  if C1 = 0 and C2 = 0 and … and Ci−1 = 0 then Cnt = Cnt + 1
P = Cnt/N × S
return P /* ≈ Pr(E) */

Theorem. If N ≥ m × (4 ln(2/δ)/ε²) then: Pr[ |P/Pr(E) − 1| > ε ] < δ

Now it’s better
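A sketch of the Karp-Luby sampler for a small DNF; the clause set and tuple probabilities are hypothetical, and for them the exact answer is 0.098:

```python
import random

random.seed(1)

# E = X1X2 v X1X3 v X2X3 in DNF, with hypothetical Pr(Xi).
p = {"x1": 0.1, "x2": 0.2, "x3": 0.3}
clauses = [{"x1", "x2"}, {"x1", "x3"}, {"x2", "x3"}]

def clause_pr(c):
    out = 1.0
    for v in c:
        out *= p[v]
    return out

def karp_luby(N=100_000):
    weights = [clause_pr(c) for c in clauses]
    S = sum(weights)  # Pr(C1) + ... + Pr(Cm)
    cnt = 0
    for _ in range(N):
        # Choose clause i with probability Pr(Ci)/S.
        i = random.choices(range(len(clauses)), weights=weights)[0]
        # Sample an assignment conditioned on clause i being true.
        x = {v: (v in clauses[i]) or (random.random() < p[v]) for v in p}
        # Count only if no earlier clause is satisfied (avoids double counting).
        if not any(all(x[v] for v in clauses[j]) for j in range(i)):
            cnt += 1
    return S * cnt / N

est = karp_luby()
```

Every trial lands inside the union, so the success probability is at least 1/m regardless of how small Pr(E) is; that is what makes the required N independent of Pr(E).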


Summary on Monte Carlo

Some form of simulation is needed in probabilistic databases, to cope with the #P-hardness bottleneck

  • Naïve MC: works well when Prob is big

  • Improved MC: needed when Prob is small

[Nepal&Ramakrishna:1999; Fagin,Lotem,Naor:2001; 2003]

2. The Threshold Algorithm

Problem:

FROM Rp, Sp, Tp
WHERE Rp.A = Sp.B and Sp.C = Tp.D

Have subplans for Rp, Sp, Tp returning tuples sorted by their probabilities x, y, z

Score combination: f(x, y, z) = x y z

How do we compute the top-k matching records ?

[Fagin,Lotem,Naor:2001; 2003]

"No Random Access" (NRA)

[Diagram: three lists sorted by score, 1 ≥ x1 ≥ x2 ≥ …, 1 ≥ y1 ≥ y2 ≥ …, 1 ≥ z1 ≥ z2 ≥ …, scanned in parallel. Partially seen objects have partially known total scores, e.g. H32: f(x1, ?, ?), H007: f(x3, ?, z2), H999: f(?, y3, ?). Each unknown entry ? is bounded by the last score read from its list (e.g. 0 ≤ ? ≤ y3); a completely unseen object has total score f(?, ?, ?).]
[Fagin,Lotem,Naor:2001; 2003]

Termination condition: stop once k objects have guaranteed (lower-bound) scores at least the threshold score f(?, ?, ?), the best possible score of any unseen object. Those k objects are guaranteed to be the top-k.

The algorithm is "instance optimal", the strongest form of optimality.
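A much-simplified NRA sketch in Python (illustrative, not the paper's exact bookkeeping): it uses the product score f = x·y·…, reads the sorted lists in lockstep, maintains lower/upper bounds per object, and stops at the termination condition above. It assumes every object eventually appears in every list, with scores in (0, 1].

```python
from math import prod

def nra_topk(lists, k):
    """Simplified NRA top-k under the product score.

    lists[j] is [(obj, score), ...] sorted by score descending.
    """
    m = len(lists)
    seen = {}                 # obj -> {list index: known score}
    last = [1.0] * m          # last (smallest) score read from each list
    for depth in range(max(len(l) for l in lists)):
        for j in range(m):
            if depth < len(lists[j]):
                obj, s = lists[j][depth]
                seen.setdefault(obj, {})[j] = s
                last[j] = s
        # lower bound: unknown scores count as 0; upper bound: as last[j]
        lower = {o: prod(d.get(j, 0.0) for j in range(m)) for o, d in seen.items()}
        upper = {o: prod(d.get(j, last[j]) for j in range(m)) for o, d in seen.items()}
        topk = sorted(lower, key=lower.get, reverse=True)[:k]
        threshold = prod(last)  # best possible score of an unseen object
        if len(topk) == k:
            kth = lower[topk[-1]]
            if kth >= threshold and all(upper[o] <= kth for o in seen if o not in topk):
                return [(o, lower[o]) for o in topk]
    return [(o, lower[o]) for o in topk]

R = [("a", 0.9), ("b", 0.8), ("c", 0.5)]
S = [("b", 0.9), ("a", 0.7), ("c", 0.6)]
result = nra_topk([R, S], k=2)   # b scores 0.72, a scores 0.63
```

In this example the algorithm terminates after reading only two entries from each list: the threshold 0.8 × 0.7 = 0.56 is already below the second-best guaranteed score, so "c" is never fully read.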

Summary on the Threshold Algorithm

  • Simple, intuitive, powerful

  • There are several variations: see paper

  • Extensions:

    • Use probabilistic methods to estimate the bounds more aggressively

    • Distributed environment


[Michel, Triantafillou&Weikum:2005]

Approximate String Joins

[Gravano et al.:2001]

Simplification for this tutorial: A ~ B means "A and B have at least k q-grams in common"

Definition of q-grams

[Gravano et al.:2001]

Set of 3-grams of "John_Smith", padded with # on both ends:

  ##J #Jo Joh ohn hn_ n_S _Sm Smi mit ith th# h##
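The pad-and-slide construction behind this list can be sketched in Python (the helper name is illustrative):

```python
def qgrams(s, q=3, pad="#"):
    """Set of q-grams of s, with q-1 pad characters on each side."""
    padded = pad * (q - 1) + s + pad * (q - 1)
    return {padded[i:i + q] for i in range(len(padded) - q + 1)}

grams = qgrams("John_Smith")
# 12 3-grams: ##J #Jo Joh ohn hn_ n_S _Sm Smi mit ith th# h##
```

The padding ensures that the first and last characters of the string each participate in q different q-grams.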

Naïve solution, using a UDF (user-defined function)

[Gravano et al.:2001]

  SELECT *
  FROM R, S
  WHERE common_grams(R.A, S.B) ≥ k

A q-gram index:

[Gravano et al.:2001]

[Table: auxiliary relations RAQ(Key, G) and SBQ(Key, G), listing for each tuple key the q-grams G of R.A and of S.B respectively]
Slide128 l.jpg

[Gravano et al.:2001] al.05]



Solution usingthe Q-gram Index


FROM R, RAQ, S, SBQWHERE R.Key = RAQ.Key and S.Key=SBQ.Key and RAQ.G = RBQ.GGROUP BY RAQ.Key, RBQ.KeyHAVING count(*) ¸ k
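A runnable sketch of this index-based join using SQLite (the table contents, the `qgrams` helper, and the choice k = 6 are illustrative):

```python
import sqlite3

def qgrams(s, q=3, pad="#"):
    """Set of q-grams of s, padded with q-1 pad characters per side."""
    padded = pad * (q - 1) + s + pad * (q - 1)
    return {padded[i:i + q] for i in range(len(padded) - q + 1)}

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE R(Key INTEGER PRIMARY KEY, A TEXT);
    CREATE TABLE S(Key INTEGER PRIMARY KEY, B TEXT);
    CREATE TABLE RAQ(Key INTEGER, G TEXT);  -- q-gram index on R.A
    CREATE TABLE SBQ(Key INTEGER, G TEXT);  -- q-gram index on S.B
""")
r_rows = [(1, "John_Smith"), (2, "Alice_Jones")]
s_rows = [(1, "John_Smyth"), (2, "Bob_Brown")]
con.executemany("INSERT INTO R VALUES(?, ?)", r_rows)
con.executemany("INSERT INTO S VALUES(?, ?)", s_rows)
for key, a in r_rows:
    con.executemany("INSERT INTO RAQ VALUES(?, ?)", [(key, g) for g in qgrams(a)])
for key, b in s_rows:
    con.executemany("INSERT INTO SBQ VALUES(?, ?)", [(key, g) for g in qgrams(b)])

# pairs of keys sharing at least k q-grams
matches = con.execute("""
    SELECT R.Key, S.Key
    FROM R, RAQ, S, SBQ
    WHERE R.Key = RAQ.Key AND S.Key = SBQ.Key AND RAQ.G = SBQ.G
    GROUP BY RAQ.Key, SBQ.Key
    HAVING count(*) >= ?
""", (6,)).fetchall()
```

Here "John_Smith" and "John_Smyth" share 9 of their 12 3-grams, so the pair (1, 1) passes the HAVING filter while the other pairs do not.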

Summary on Part V: Algorithms

A wide range of disparate techniques

  • Monte Carlo Simulations (also: MCMC)

  • Optimal aggregation algorithms (TA)

  • Efficient engineering techniques

    Needed: unified framework for efficient query evaluation in probabilistic databases

Conclusions and Challenges Ahead

Conclusions

Imprecisions in data:

  • A wide variety of types of imprecision, each with specialized management solutions

  • Probabilistic databases promise a uniform framework, but come with the full complexity of probabilistic inference

Conclusions

Probabilistic databases

  • Possible worlds semantics

    • Simple

    • Every query has well-defined semantics

  • Need: expressive representation formalism

  • Need: efficient query processing techniques

Challenge 1: Specification Frameworks

The Goal:

  • Design framework that is usable, expressive, efficient

    The Challenge

  • Tradeoff between expressibility and tractability

Challenge 1: Specification Frameworks

[Domingos&Richardson:04; Sarma,Benjelloun,Halevy&Widom:2005]

Features to have:

  • Support probabilistic statements:

    • Simple: (Fred, Seattle, Gizmo) ∈ Purchase has probability 60%

    • Complex: “Fred and Sue live in the same city” has probability 80%

  • Support tuple correlations

    • “t1 and t2 are correlated positively 30%”

  • Statistics statements:

    • There are about 2000 tuples in Purchase

    • There are about 100 distinct Cities

    • Every customer buys about 4 products

Challenge 2: Query Evaluation


  • Old: complexity = f(query-language)

  • New: complexity = f(query-language, specification-language)

    Exact algorithm: #P-complete in simple cases

    Challenge: characterize the complexity of approximation algorithms

Challenge 2: Query Evaluation


  • Disparate techniques require unified framework

  • Simulation:

    • Client side or server side ?

    • How to schedule simulation steps ?

    • How to push simulation steps in relational operators ?

    • How to compute subplans extensionally, when possible ?

  • Top-k pruning:

    • How can we “push thresholds” down the query plan ?

Challenge 3: Mapping Imprecisions to Probabilities

  • One needs to attach a number between 0 and 1 to an uncertain piece of data

    • This is highly nontrivial !

    • But consider the alternative: ad-hoc management of imprecisions at all stages

  • What is a principled approach to do this ?

  • How do we evaluate such mappings ?

The End

Questions ?