Semantics, Syndication and Social Networks: Mechanisms for Future Structured Information Spaces
Hamish Cunningham (University of Sheffield), Werner Haas (Joanneum Research), Ant Miller (BBC), Libby Miller (University of Bristol), Ralph Traphoener (Empolis / Bertelsmann)
Dept. Computer Science, University of Sheffield
Different types of metadata allow different types of search (but also incur different costs and have different limits)
full text: "find me Nevsky in Bulgaria"
taxonomy / thesaurus / semantic annotation / ontology: "find me churches in Eastern Europe"
E.g. the BBC's INFAX taxonomic system: 66% of searches would fail with full-text search alone
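The gap between full-text and taxonomic search can be sketched in a few lines. The taxonomy, documents, and function names below are invented for illustration; they are not the INFAX system:

```python
# Illustrative sketch: why taxonomic metadata finds what full-text search misses.
# The taxonomy and documents are hypothetical examples.

TAXONOMY = {
    "church": ["cathedral", "basilica", "chapel"],
    "Eastern Europe": ["Bulgaria", "Romania", "Poland"],
}

DOCS = [
    "The Alexander Nevsky Cathedral in Sofia, Bulgaria, was completed in 1912.",
    "A Gothic town hall in Munich, Germany.",
]

def full_text_search(query_terms, docs):
    """Match only documents containing the literal query terms."""
    return [d for d in docs if all(t.lower() in d.lower() for t in query_terms)]

def expand(term):
    """A query term matches itself or any narrower term in the taxonomy."""
    return [term] + TAXONOMY.get(term, [])

def taxonomic_search(query_terms, docs):
    """Match documents containing any narrower term for each query term."""
    return [
        d for d in docs
        if all(any(n.lower() in d.lower() for n in expand(t)) for t in query_terms)
    ]

# Full text finds nothing: the document never says "church" or "Eastern Europe".
print(full_text_search(["church", "Eastern Europe"], DOCS))   # []
# Taxonomic expansion finds the cathedral in Bulgaria.
print(taxonomic_search(["church", "Eastern Europe"], DOCS))
```

The cost side of the slide's claim is also visible here: someone has to build and maintain `TAXONOMY`, which is exactly the expense discussed below.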
The web promotes diversity but also fragmentation; there is too much of it, and curated data has less and less impact
In the face of this, cultural memory institutions need:
Syndication and mediation (to pool outlets and multiply impact); this means presentation-independent, multipurpose content
Users as assistants (to cut the cost of metadata); this can mean shared conceptualisations of content
How do we get there?
Why semantic metadata?
The semantic web is about a semantic layer for interoperability, machine-readability, inference – ideal for semantic libraries?
Construction and maintenance of shared taxonomies, terminologies & ontologies is expensive
Annotation of content relative to them is v. expensive
How does a machine tell the difference between "Mother Theresa is a Saint" and "Tony Blair is a Saint"? (Beyond the shallow and the general we get into typical AI problems, the contextual and shifting nature of meaning, etc.)
The semantic web and why you can't have it (yet)
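A minimal sketch of the problem: a shallow pattern extractor assigns identical triples to both sentences, and nothing in the pattern itself can tell literal from ironic usage. The extractor and pattern are invented for illustration:

```python
import re

# A naive pattern-based "fact extractor": any sentence of the form
# "<X> is a <Y>" yields the triple (X, isA, Y), with no notion of context.
def extract_isa(sentence):
    m = re.match(r"(.+?) is a (\w+)", sentence)
    return (m.group(1), "isA", m.group(2)) if m else None

print(extract_isa("Mother Theresa is a Saint"))  # ('Mother Theresa', 'isA', 'Saint')
print(extract_isa("Tony Blair is a Saint"))      # ('Tony Blair', 'isA', 'Saint')
```

Both calls produce structurally identical triples; deciding that one is canonical fact and the other irony requires the contextual, world-knowledge reasoning the slide names as a typical AI problem.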
Use recommender systems to make the users into curators’ assistants (who tells Google which page is important? other web users do, by linking; also Amazon)
Allow curators and users to DIY simple specific ontologies and KBs (targeted adjuncts to general models like CIDOC)
Use Information Extraction (IE) to populate semantic models
Ride the next wave of social software and on-line communities (wikis, blogs, OSN, file sharing / P2P, RSS/Atom)
Four promising directions
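The recommender direction can be sketched as Amazon-style "also viewed" counting, which turns users' browsing behaviour into implicit curation. The session data and item names below are hypothetical:

```python
from collections import Counter

# Hypothetical access logs: which archive items each visitor consulted.
SESSIONS = [
    ["icon-nevsky", "icon-rila", "map-sofia"],
    ["icon-nevsky", "icon-rila"],
    ["map-sofia", "charter-1878"],
]

def also_viewed(item, sessions, top=2):
    """Recommend the items most often co-viewed with `item`."""
    counts = Counter()
    for session in sessions:
        if item in session:
            counts.update(i for i in session if i != item)
    return [i for i, _ in counts.most_common(top)]

print(also_viewed("icon-nevsky", SESSIONS))  # ['icon-rila', 'map-sofia']
```

No curator wrote these associations; they emerge from user behaviour, which is the "users as curators' assistants" point above in miniature.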
Gartner, December 2002:
taxonomic and hierarchical knowledge mapping and indexing will be prevalent in almost all information-rich applications
through 2012 more than 95% of human-to-computer information input will involve textual language
to deal with the information deluge we need formal knowledge in semantics-based systems
our archived history is in informal and ambiguous natural language
The challenge: to reconcile these two phenomena
IT context: the Knowledge Economy and Human Language
HLT: Closing the Loop
MNLG: Multilingual Natural Language Generation
OIE: Ontology-aware Information Extraction
AIE: Adaptive IE
CLIE: Controlled Language IE
Formal Knowledge (ontologies and instance bases)
Information Extraction (IE) pulls facts and structured information from the content of large text collections.
Contrast IE with Information Retrieval (IR): IR returns relevant documents; IE returns structured facts extracted from them
NLP history: from NLU to IE
Progress driven by quantitative measures
MUC: Message Understanding Conferences
ACE: Automatic Content Extraction
General Architecture for Text Engineering (GATE): http://gate.ac.uk/
Information Extraction
“The shiny red rocket was fired on Tuesday. It is the brainchild of Dr. Big Head. Dr. Head is a staff scientist at We Build Rockets Inc.”
ST (scenario template): rocket launch event with various participants
IE Example
XYZ was established on 03 November 1978 in London. It opened a plant in Bulgaria in …
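A pattern-based sketch of template filling on the second example sentence. Real IE systems such as GATE use full pipelines (tokenisation, gazetteers, grammars) rather than a single regular expression; the regex here is only to make the template idea concrete:

```python
import re

TEXT = "XYZ was established on 03 November 1978 in London."

# One hand-written pattern for "<ORG> was established on <DATE> in <LOCATION>".
PATTERN = re.compile(
    r"(?P<org>\w+) was established on (?P<date>\d{2} \w+ \d{4}) in (?P<loc>\w+)"
)

def fill_template(text):
    """Return a filled org/date/location template, or None if no match."""
    m = PATTERN.search(text)
    return m.groupdict() if m else None

print(fill_template(TEXT))
# {'org': 'XYZ', 'date': '03 November 1978', 'loc': 'London'}
```

The filled template, not the sentence, is what gets stored against the ontology and knowledge base on the next slide.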
Ontology & KB
A Necessary Trade-Off
Domain specificity vs. task complexity:
Trend 2: the living room is about to be computerised
What will happen when all your living room devices fold into a single PC?
Bill Gates hopes you'll be running Windoze, but Consumer Electronics firms bet on Linux & stable hardware (no viruses, no crashes, cheap, ...)
What if these two trends combine? Ubiquitous on-line communities centred on shared content, with a model of trust
What if memory institutions provide means of organising, explaining, interlinking the cross-over between modern popular culture and the curated memory?
Important because DRM is the beginning of the end of civilisation as we know it (controls how you consume media you buy; has the potential to be linked with censorship and with invasive behaviour logging)
you can't make digital objects behave like physical objects - unless you totally control the hardware and the operating system
if someone has control, then we may end up finding that someone has given the contract for preserving our culture to Halliburton
Open information, defended communities
C21 social st: all the C20th mistakes but bigger & better?
If you don’t know where you’ve been, how can you know where you’re going?
Libraries, museums, archives: ammunition in the war on ignorance (more dangerous than “terror”?)
Ammunition is useless if you can’t find it: new technology must make our history accessible to all, for all our futures
Memory is not a luxury
Cultural memory can benefit from semantic metadata, presentation-independence and repurposing
Semantic web technology:
no: it won’t make machines intelligent
perhaps: simple specific models can work
Four ways to cross the AI bridge: DIY models; recommenders; IE; OSN + P2P
This talk: http://gate.ac.uk/talks/ecdl-sept-2004.ppt
More: http://gate.ac.uk/
Related projects:
Summary