Creating and exploiting a web of semantic data

Overview

  • Introduction

  • Semantic Web 101

  • Recent Semantic Web trends

  • Examples: DBpedia, Wikitology

  • Conclusion

The Age of Big Data

  • Massive amounts of data are available today

  • Advances in many fields are driven by the availability of unstructured data, e.g., text, audio, images

  • Increasingly, large amounts of structured and semi-structured data are also online

  • Much of it is available in the Semantic Web language RDF, fostering integration and interoperability

  • Such structured data is especially important for the sciences

Twenty years ago…

Tim Berners-Lee’s 1989 WWW proposal described a web of relationships among named objects unifying many information management tasks

Capsule history

  • Guha’s MCF (~94)

  • XML+MCF=>RDF (~96)

  • RDF+OO=>RDFS (~99)

  • RDFS+KR=>DAML+OIL (00)

  • W3C’s SW activity (01)

  • W3C’s OWL (03)

  • SPARQL, RDFa (08)

  • Rules (09)

Ten years ago…

  • The W3C started developing standards for the Semantic Web

  • The vision, technology and use cases are still evolving

  • Moving from a web of documents to a web of data


4.5 billion integrated facts published on the Web as RDF Linked Open Data


Large collections of integrated facts published on the Web for many disciplines and domains

W3C’s Semantic Web Goal

“The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation.”

-- Berners-Lee, Hendler and Lassila, The Semantic Web, Scientific American, 2001

Contrast with a non-Web approach

  • The W3C Semantic Web approach is

  • Distributed

  • Open

  • Non-proprietary

  • Standards based

How can we share data on the Web?

  • POX, Plain Old XML, is one approach, but it has deficiencies

  • The Semantic Web languages RDF and OWL offer a simpler and more abstract data model (a graph) that is better for integration

  • Their well-defined semantics support knowledge modeling and inference

  • Supported by a stable, funded standards organization, the World Wide Web Consortium

Simple RDF Example


[Figure: an RDF graph describing a document titled “Intelligent Information Systems on the Web and in the Aether”; its creator is a “blank node” with name “Tim Finin” and email [email protected]]

The RDF Data Model

  • An RDF document is an unordered collection of statements, each with a subject, predicate and object

  • Such triples can be thought of as a labelled arc in a graph

  • Statements describe properties of resources

  • A resource is any object that can be referenced or denoted by a URI

  • Properties themselves are also resources (URIs)

  • Dereferencing a URI produces useful additional information, e.g., a definition or additional facts
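The model above is easy to sketch in code. The toy example below (the identifiers such as ex:doc1 and foaf:name are invented for illustration, not drawn from a real dataset) stores a graph as an unordered set of (subject, predicate, object) triples and looks up property values:

```python
# A minimal sketch of the RDF data model: a graph is an unordered
# collection of (subject, predicate, object) statements.

graph = {
    ("ex:doc1", "rdf:type", "ex:Document"),
    ("ex:doc1", "dc:title", "Intelligent Information Systems"),
    ("ex:doc1", "dc:creator", "ex:finin"),
    ("ex:finin", "foaf:name", "Tim Finin"),
}

def objects(graph, subject, predicate):
    """All objects o such that (subject, predicate, o) is in the graph."""
    return {o for s, p, o in graph if s == subject and p == predicate}

print(objects(graph, "ex:doc1", "dc:creator"))  # {'ex:finin'}
```

Statements are unordered and properties are just resources, so the same lookup works for any predicate in the graph.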

RDF is the first SW language


RDF information can be viewed in three equivalent ways:

  • An XML encoding, e.g., <rdf:RDF ……..>

  • A graph data model, good for human viewing

  • A set of statements (triples), good for storage and reasoning:

stmt(docInst, rdf_type, Document)

stmt(personInst, rdf_type, Person)

stmt(inroomInst, rdf_type, InRoom)

stmt(personInst, holding, docInst)

stmt(inroomInst, person, personInst)

RDF is a simple language for graph-based representations

XML encoding for RDF

<rdf:RDF xmlns:rdf=""
         xmlns:dc=""
         xmlns:bib="">
  <rdf:Description rdf:about="">
    <dc:title>Intelligent Information … and in the Aether</dc:title>
    <dc:creator rdf:parseType="Resource">
      <bib:Name>Tim Finin</bib:Name>
      <bib:Email>[email protected]</bib:Email>
      <bib:Aff rdf:resource="" />
    </dc:creator>
  </rdf:Description>
</rdf:RDF>

(The namespace and resource URIs were lost in transcription; dc:creator is assumed as the property linking the document to the blank-node author.)

N3 is a friendlier encoding

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

@prefix dc: <http://purl.org/dc/elements/1.1/> .

@prefix bib: <> .

<> dc:title "Intelligent ... and in the Aether" ;

   dc:creator [ bib:Name "Tim Finin" ;

                bib:Email "[email protected]" ;

                bib:Aff <> ] .



RDFS supports simple inferences

  • RDF Schema adds vocabulary for classes, properties & constraints

  • An RDF ontology plus some RDF statements may imply additional RDF statements (not possible in XML)

  • Note that this is part of the data model and not of the accessing or processing code.

@prefix rdfs: <http://www.....> .

@prefix : <genesis.n3> .

:person a rdfs:Class .

:woman rdfs:subClassOf :person .

:parent a rdf:Property ;
    rdfs:domain :person ;
    rdfs:range :person .

:mother a rdf:Property ;
    rdfs:subPropertyOf :parent ;
    rdfs:domain :woman ;
    rdfs:range :person .

:eve :mother :cain .

From these statements RDFS entails:

:eve a :person, :woman ;
    :parent :cain .

:cain a :person .
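The entailments above follow from three RDFS rules — rdfs:subPropertyOf, rdfs:domain, and rdfs:range. A minimal sketch of that forward chaining (illustrative only, not a conformant RDFS reasoner) over the genesis example:

```python
# Toy RDFS inference: repeatedly apply three entailment rules until no
# new triples appear (a fixpoint). Triples are (subject, predicate, object).

triples = {
    ("mother", "rdfs:subPropertyOf", "parent"),
    ("parent", "rdfs:domain", "person"),
    ("parent", "rdfs:range", "person"),
    ("mother", "rdfs:domain", "woman"),
    ("eve", "mother", "cain"),
}

def rdfs_closure(triples):
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in triples:
            for p2, rel, c in triples:
                # (p rdfs:subPropertyOf q) and (s p o)  =>  (s q o)
                if rel == "rdfs:subPropertyOf" and p2 == p:
                    new.add((s, c, o))
                # (p rdfs:domain c) and (s p o)  =>  (s rdf:type c)
                if rel == "rdfs:domain" and p2 == p:
                    new.add((s, "rdf:type", c))
                # (p rdfs:range c) and (s p o)  =>  (o rdf:type c)
                if rel == "rdfs:range" and p2 == p:
                    new.add((o, "rdf:type", c))
        if not new <= triples:
            triples |= new
            changed = True
    return triples

inferred = rdfs_closure(triples)
assert ("eve", "parent", "cain") in inferred       # via subPropertyOf
assert ("eve", "rdf:type", "woman") in inferred    # via domain of mother
assert ("cain", "rdf:type", "person") in inferred  # via range of parent
```

Note that the inference lives in the data model, not in application code: the same closure procedure works for any RDFS vocabulary.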

OWL adds further richness

OWL adds richer representational vocabulary, e.g.

  • parentOf is the inverse of childOf

  • Every person has exactly one mother

  • Every person is a man or a woman but not both

  • A man is the equivalent of a person with a sex property with value “male”

    OWL is based on ‘description logic’ – a decidable subset of first-order logic with efficient, complete reasoners

  • Good algorithms for reasoning about descriptions

That was then, this is now

  • 1996-2000: focus on RDF and data

  • 2000-2007: focus on OWL, developing ontologies, sophisticated reasoning

  • 2008-…: Integrating and exploiting large RDF data collections backed by lightweight ontologies

A Linked Data story

  • Wikipedia as a source of knowledge

    • Wikis are a great way to collaborate on building up knowledge resources

  • Wikipedia as an ontology

    • Every Wikipedia page is a concept or object

  • Wikipedia as RDF data

    • Map this ontology into RDF

  • DBpedia as the lynchpin for Linked Data

    • Exploit its breadth of coverage to integrate things

Wikipedia as an ontology

  • Using Wikipedia as an ontology

    • each article (~3M) is an ontology concept or instance

    • terms linked via category system (~200k), infobox template use, inter-article links, infobox links

    • Article history contains metadata for trust, provenance, etc.

  • It’s a consensus ontology with broad coverage

  • Created and maintained by a diverse community for free!

  • Multilingual

  • Very current

  • Overall content quality is high

Wikipedia as an ontology

  • Uncategorized and miscategorized articles

  • Many ‘administrative’ categories (e.g., articles needing revision) and unhelpful ones (e.g., 1949 births)

  • Multiple infobox templates for the same class

  • Multiple infobox attribute names for same property

  • No datatypes or domains for infobox attribute values

  • etc.

DBpedia: Wikipedia in RDF

  • A community effort to extract structured information from Wikipedia and publish it as RDF on the Web

  • Effort started in 2006 with EU funding

  • Data and software open sourced

  • DBpedia doesn’t extract information from Wikipedia’s text, but from its structured information, e.g., links, categories, infoboxes

DBpedia uses Wikipedia structured data

DBpedia extracts structured data from Wikipedia, especially from Infoboxes

DBpedia ontology

  • DBpedia 3.2 (Nov 2008) added a manually constructed ontology with

    • 170 classes in a subsumption hierarchy

    • 880K instances

    • 940 properties with domain and range

  • A partial, manual mapping was constructed from infobox attributes to these terms

  • Current domain and range constraints are “loose”

  • Namespace:

Instance counts for the largest classes: Place 248,000; Person 214,000; Work 193,000; Species 90,000; Organisation 76,000; Building 23,000 — with roughly 50–110 properties defined per class.

PREFIX dbp: <>
PREFIX dbpo: <>

SELECT DISTINCT ?Property ?Place
WHERE {
  dbp:Barack_Obama ?Property ?Place .
  ?Place rdf:type dbpo:Place .
}
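A SPARQL basic graph pattern like the one above is just joined triple-pattern matching. A toy evaluator makes that concrete (the data triples below are invented for illustration; variables begin with ‘?’):

```python
# Toy SPARQL-style evaluator: each pattern is matched against every triple,
# and variable bindings accumulated across patterns (a nested-loop join).

graph = {
    ("dbp:Barack_Obama", "dbpo:birthPlace", "dbp:Honolulu"),
    ("dbp:Barack_Obama", "dbpo:spouse", "dbp:Michelle_Obama"),
    ("dbp:Honolulu", "rdf:type", "dbpo:Place"),
}

def match(pattern, triple, binding):
    """Extend binding so pattern unifies with triple, or return None."""
    b = dict(binding)
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if b.get(p, t) != t:       # variable already bound differently
                return None
            b[p] = t
        elif p != t:                   # constant mismatch
            return None
    return b

def query(graph, patterns):
    bindings = [{}]
    for pattern in patterns:
        bindings = [b2 for b in bindings for t in graph
                    if (b2 := match(pattern, t, b)) is not None]
    return bindings

results = query(graph, [
    ("dbp:Barack_Obama", "?Property", "?Place"),
    ("?Place", "rdf:type", "dbpo:Place"),
])
print(results)  # [{'?Property': 'dbpo:birthPlace', '?Place': 'dbp:Honolulu'}]
```

The second pattern filters the bindings produced by the first, which is why only the birthplace (a Place) survives while the spouse does not.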

Looking at the RDF description

We find assertions equating DBpedia's object for Baltimore with those in other LOD datasets:


owl:sameAs census:us/md/counties/baltimore/baltimore;

owl:sameAs cyc:concept/Mx4rvVin-5wpEbGdrcN5Y29ycA;

owl:sameAs freebase:guid.9202a8c04000641f800000000004921a;

owl:sameAs geonames:4347778 .

Since owl:sameAs is defined as an equivalence relation, the mapping works both ways
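Because owl:sameAs is an equivalence relation, a set of sameAs assertions partitions URIs into equivalence classes. A union-find sketch shows this (the shortened URIs below are stand-ins for the full ones above):

```python
# Sketch: compute owl:sameAs equivalence classes with union-find.
from collections import defaultdict

def same_as_classes(pairs):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:                     # each sameAs merges two classes
        parent[find(a)] = find(b)

    classes = defaultdict(set)
    for x in parent:
        classes[find(x)].add(x)
    return list(classes.values())

pairs = [("dbpedia:Baltimore", "census:baltimore"),
         ("dbpedia:Baltimore", "geonames:4347778"),
         ("cyc:Mx4rvVin", "dbpedia:Baltimore")]
[cls] = same_as_classes(pairs)
assert cls == {"dbpedia:Baltimore", "census:baltimore",
               "geonames:4347778", "cyc:Mx4rvVin"}
```

The assertions only name dbpedia:Baltimore on the left-hand side, yet all four URIs land in one class — the “works both ways” behavior in action.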

Four principles for linked data

  • Use URIs to identify things that you expose to the Web as resources

  • Use HTTP URIs so that people can locate and look up (dereference) these things.

  • When someone looks up a URI, provide useful information

  • Include links to other, related URIs in the exposed data as a means of improving information discovery on the Web

-- Tim Berners-Lee, 2006

4.5 billion triples for free

  • The full public LOD dataset has about 4.5 billion triples as of March 2009

  • Linking assertions are spotty, but probably include on the order of 10M equivalences

  • Availability:

    • download the data in RDF

    • query it via public SPARQL servers

    • load it as an Amazon EC2 public dataset

    • launch it and the required software as a public Amazon AMI image


We’ve been exploring a different approach to derive an ontology from Wikipedia through a series of use cases:

  • Identifying user context in a collaboration system from documents viewed (2006)

  • Improve IR accuracy by adding Wikitology tags to documents (2007)

  • ACE: cross document co-reference resolution for named entities in text (2008)

  • TAC KBP: Knowledge Base population from text (2009)

  • Improve Web search engine by tagging documents and queries (2009)

Wikitology 2.0 (2008)





Freebase KB




Human input & editing

Wikitology tagging

  • Using Serif’s output, we produced an entity document for each entity.

    Included the entity’s name, nominal and pronominal mentions, APF type and subtype, and words in a window around the mentions

  • We tagged entity documents using Wikitology, producing vectors of (1) terms and (2) categories for the entity

  • We used the vectors to compute features measuring entity pair similarity/dissimilarity
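One natural similarity feature over such weighted tag vectors is cosine similarity; a sketch with invented weights (the real feature set and weighting scheme are not specified here):

```python
# Cosine similarity between two sparse tag vectors (term -> weight).
import math

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = (math.sqrt(sum(w * w for w in u.values())) *
            math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

# Hypothetical article-tag vectors for two mentions of the same person:
e1 = {"Webster_Hubbell": 1.0, "Whitewater_controversy": 0.22}
e2 = {"Webster_Hubbell": 0.9, "Clinton_administration_controversies": 0.2}
score = cosine(e1, e2)
assert 0.9 < score <= 1.0   # high overlap suggests the mentions co-refer
```

A high score on shared tags is evidence that two entity documents refer to the same individual; dissimilarity features can be built the same way.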

Wikitology Entity Document & Tags

Wikitology article tag vector

Webster_Hubbell 1.000

Hubbell_Trading_Post_National_Historic_Site 0.379

United_States_v._Hubbell 0.377

Hubbell_Center 0.226

Whitewater_controversy 0.222

Wikitology category tag vector

Clinton_administration_controversies 0.204

American_political_scandals 0.204

Living_people 0.201

1949_births 0.167

People_from_Arkansas 0.167

Arkansas_politicians 0.167

American_tax_evaders 0.167

Arkansas_lawyers 0.167


Wikitology entity document, combining the APF type & subtype, the mention heads, and the words surrounding the mentions:

<DOCNO>ABC19980430.1830.0091.LDC2000T44-E2</DOCNO>

Webb Hubbell

NAM: "Hubbell" "Hubbells" "Webb Hubbell" "Webb_Hubbell"

PRO: "he" "him" "his"

abc's accountant after again ago all alleges alone also and arranged attorney avoid been before being betray but came can cat charges cheating circle clearly close concluded conspiracy cooperate counsel counsel's department did disgrace do dog dollars earned eightynine enough evasion feel financial firm first four friend friends going got grand happening has he help him his hope house hubbell hubbells hundred hush income increase independent indict indicted indictment inner investigating jackie jackie_judd jail jordan judd jury justice kantor ken knew lady late law left lie little make many mickey mid money mr my nineteen nineties ninetyfour not nothing now office other others paying peter_jennings president's pressure pressured probe prosecutors questions reported reveal rock saddened said schemed seen seven since starr statement such tax taxes tell them they thousand time today ultimately vernon washington webb webb_hubbell were what's whether which white whitewater why wife years



Knowledge Base Population

  • The 2009 NIST Text Analysis Conference (TAC) will include a new Knowledge Base Population track

  • Goal: discover information about named entities (people, organizations, places) and incorporate it into a KB

  • TAC KBP has two related tasks:

    • Entity linking: doc. entity mention -> KB entity

    • Slot filling: given a document entity mention, find missing slot values in a large corpus

KBs and IE are Symbiotic

KB info helps interpret text


Information Extraction from Text

IE helps populate KBs

Wikitology 3.0 (2009)
















[Architecture diagram: Wikitology 3.0 adds the page link graph and linked Semantic Web data & ontologies]

Wikipedia’s social network

  • Wikipedia has an implicit ‘social network’ that can help disambiguate PER mentions

  • When resolving PER mentions in a short document, prefer candidate KB people who are linked to one another in the KB

  • The same can be done for the network of ORG and GPE entities

WSN Data

  • We extracted 213K people from DBpedia’s Infobox dataset, ~30K of whom participate in an infobox link to another person

  • We extracted 875K people from Freebase, 616K of whom were linked to Wikipedia pages; 431K of these appear in one of 4.8M person-person article links

  • Consider a document that mentions two people: George Bush and Mr. Quayle

Which Bush & which Quayle?

Six George Bushes

Nine Male Quayles

A simple closeness metric

Let Si = the set of nodes within two hops of entity i in the article link graph

Cij = |intersection(Si, Sj)| / |union(Si, Sj)|

Cij > 0 for six of the 54 possible pairs

0.43 George_H._W._Bush -- Dan_Quayle

0.24 George_W._Bush -- Dan_Quayle

0.18 George_Bush_(biblical_scholar) -- Dan_Quayle

0.02 George_Bush_(biblical_scholar) -- James_C._Quayle

0.02 George_H._W._Bush -- Anthony_Quayle

0.01 George_H._W._Bush -- James_C._Quayle
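The metric is a Jaccard overlap of two-hop neighborhoods, and can be sketched directly (the tiny link graph below is invented; the real Si come from the Wikipedia article link graph):

```python
# Sketch of the closeness metric: Cij = |Si ∩ Sj| / |Si ∪ Sj|,
# where Si is everything within two hops of node i.

def two_hop(graph, node):
    one = graph.get(node, set())
    return one | {m for n in one for m in graph.get(n, set())}

def closeness(graph, i, j):
    si, sj = two_hop(graph, i), two_hop(graph, j)
    union = si | sj
    return len(si & sj) / len(union) if union else 0.0

# Invented mini link graph for illustration:
graph = {
    "George_H._W._Bush": {"Dan_Quayle", "Ronald_Reagan"},
    "Dan_Quayle": {"George_H._W._Bush", "Indiana"},
    "Anthony_Quayle": {"Laurence_Olivier"},
}

# The president and his vice president share a neighborhood; the actor doesn't.
assert closeness(graph, "George_H._W._Bush", "Dan_Quayle") > \
       closeness(graph, "George_H._W._Bush", "Anthony_Quayle")
```

Scoring every Bush–Quayle pair this way and taking the highest Cij picks out George_H._W._Bush and Dan_Quayle, matching the table above.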

Application to TAC KBP

  • Using entity network data extracted from DBpedia and Wikipedia provides evidence to support the KBP tasks:

    • Mapping document mentions into infobox entities

    • Mapping potential slot fillers into infobox entities

    • Evaluating the coherence of entities as potential slot fillers


Conclusion

  • The Semantic Web offers a powerful approach to data interoperability and integration

  • The research focus is shifting to a “Web of Data” perspective

  • Many research issues remain: uncertainty, provenance, trust, parallel graph algorithms, reasoning over billions of triples, user-friendly tools, etc.

  • Just as the Web enhances human intelligence, the Semantic Web will enhance machine intelligence

  • The ideas and technology are still evolving