
Link Analysis: Current State of the Art



Presentation Transcript


  1. Link Analysis: Current State of the Art Ronen Feldman Computer Science Department Bar-Ilan University, ISRAEL ronenf@gmail.com

  2. Introduction to Text Mining

  3. TM != Search • Search: find documents matching the query; return long lists of documents; the actual information stays buried inside the documents. • Text mining: extract information from within the documents; display information relevant to the query; aggregate over the entire collection.

  4. Let Text Mining Do the Legwork for You • The manual pipeline: find material → read → understand → consolidate → absorb / act. • Text mining takes over the early legwork in this pipeline.

  5. What Is Unique in Text Mining? • Feature extraction. • A very large number of features represent each document. • The need for background knowledge. • Even patterns supported by a small number of documents may be significant. • A huge number of patterns, hence the need for visualization and interactive exploration.

  6. Document Types • Structured documents • Output from CGI • Semi-structured documents • Seminar announcements • Job listings • Ads • Free format documents • News • Scientific papers

  7. Text Representations • Character Trigrams • Words • Linguistic Phrases • Non-consecutive phrases • Frames • Scripts • Role annotation • Parse trees

  8. The 100,000 foot Picture

  9. Intelligent Auto-Tagging
  Source text: (c) 2001, Chicago Tribune. Visit the Chicago Tribune on the Internet at http://www.chicago.tribune.com/ Distributed by Knight Ridder/Tribune Information Services. By Stephen J. Hedges and Cam Simpson ... The Finsbury Park Mosque is the center of radical Muslim activism in England. Through its doors have passed at least three of the men now held on suspicion of terrorist activity in France, England and Belgium, as well as one Algerian man in prison in the United States. "The mosque's chief cleric, Abu Hamza al-Masri lost two hands fighting the Soviet Union in Afghanistan and he advocates the elimination of Western influence from Muslim countries. He was arrested in London in 1999 for his alleged involvement in a Yemen bomb plot, but was set free after Yemen failed to produce enough evidence to have him extradited." ...
  Extracted tags: <Facility>Finsbury Park Mosque</Facility> <Country>England</Country> <Country>France</Country> <Country>England</Country> <Country>Belgium</Country> <Country>United States</Country> <Person>Abu Hamza al-Masri</Person> <PersonPositionOrganization> <OFFLEN OFFSET="3576" LENGTH="33" /> <Person>Abu Hamza al-Masri</Person> <Position>chief cleric</Position> <Organization>Finsbury Park Mosque</Organization> </PersonPositionOrganization> <City>London</City> <PersonArrest> <OFFLEN OFFSET="3814" LENGTH="61" /> <Person>Abu Hamza al-Masri</Person> <Location>London</Location> <Date>1999</Date> <Reason>his alleged involvement in a Yemen bomb plot</Reason> </PersonArrest>

  10. Intelligence Article

  11. Google’s Article

  12. Merger

  13. Leveraging Content Investment • Any type of content: unstructured textual content (current focus); structured data, audio, video (future). • In any format: documents, PDFs, e-mails, articles, etc.; "raw" or categorized; formal, informal, or a combination. • From any source: WWW, file systems, news feeds, etc.; single source or combined sources.

  14. Information Extraction

  15. Relevant IE Definitions • Entity: an object of interest such as a person or organization. • Attribute: a property of an entity such as its name, alias, descriptor, or type. • Fact: a relationship held between two or more entities such as Position of a Person in a Company. • Event: an activity involving several entities such as a terrorist act, airline crash, management change, new product introduction.
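A minimal sketch of how these IE outputs might be represented in code; the class and field names are illustrative assumptions, not part of the presentation, and the sample objects reuse the auto-tagging example from slide 9:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative containers for IE output; names are hypothetical.
@dataclass
class Entity:
    etype: str                                  # e.g. "Person", "Organization"
    name: str
    attributes: Dict[str, str] = field(default_factory=dict)   # alias, descriptor, type, ...

@dataclass
class Fact:
    relation: str                               # e.g. position of a person in a company
    entities: List[Entity]

@dataclass
class Event:
    etype: str                                  # e.g. "PersonArrest"
    participants: List[Entity]
    attributes: Dict[str, str] = field(default_factory=dict)   # date, location, reason, ...

hamza = Entity("Person", "Abu Hamza al-Masri")
mosque = Entity("Organization", "Finsbury Park Mosque")
cleric = Fact("PersonPositionOrganization", [hamza, mosque])
arrest = Event("PersonArrest", [hamza], {"Location": "London", "Date": "1999"})
```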

  16. IE Accuracy by Information Type

  17. MUC Conferences

  18. Applications of Information Extraction • Routing of Information • Infrastructure for IR and for Categorization (higher level features) • Event Based Summarization. • Automatic Creation of Databases and Knowledge Bases.

  19. Where would IE be useful? • Semi-Structured Text • Generic documents like News articles. • Most of the information in the document is centered around a set of easily identifiable entities.

  20. Approaches for Building IE Systems • Knowledge Engineering Approach • Rules are crafted by linguists in cooperation with domain experts. • Most of the work is done by inspecting a set of relevant documents. • It can take a lot of time to fine-tune the rule set. • The best results have been achieved with knowledge-based IE systems. • Skilled/gifted developers are needed. • A strong development environment is a MUST!

  21. Approaches for Building IE Systems • Automatically Trainable Systems • The techniques are based on pure statistics and almost no linguistic knowledge. • They are language independent. • The main input is an annotated corpus. • Relatively little effort is needed to build the rules; however, creating the annotated corpus is extremely laborious. • A huge number of training examples is needed in order to achieve reasonable accuracy. • Hybrid approaches can utilize user input in the development loop.
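As a hedged illustration of what one record of such an annotated corpus might look like, here is a single sentence in token/BIO-tag form; the tagging scheme and the example sentence (borrowed from slide 9) are illustrative assumptions, not material from the deck:

```python
# One annotated sentence in (token, BIO-tag) form. A real training corpus
# contains many thousands of such sentences, which is why annotation is laborious.
annotated_sentence = [
    ("Abu",      "B-Person"),
    ("Hamza",    "I-Person"),
    ("al-Masri", "I-Person"),
    ("was",      "O"),
    ("arrested", "O"),
    ("in",       "O"),
    ("London",   "B-Location"),
    ("in",       "O"),
    ("1999",     "B-Date"),
]
```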

  22. Components of IE System

  23. Why is IE Difficult? • Different languages • Morphology is very easy in English, much harder in German and Hebrew. • Identifying word and sentence boundaries is fairly easy in European languages, much harder in Chinese and Japanese. • Some languages mark names with capitalization (like English), while others (like Hebrew, Arabic, etc.) do not. • Different styles of text • Scientific papers • Newspapers • Memos • E-mails • Speech transcripts • Different types of document • Tables • Graphics • Small messages vs. books

  24. Link Analysis on Large Textual Networks: Social Network Analysis

  25. The Kevin Bacon Game • The game works as follows: given any actor, find a path between that actor and Kevin Bacon that has fewer than 6 edges. • For instance, Kevin Costner links to Kevin Bacon by a single direct link: both were in JFK. • Julia Louis-Dreyfus of TV's Seinfeld, however, needs two links to make a path: Julia Louis-Dreyfus was in Christmas Vacation (1989) with Keith MacKechnie; Keith MacKechnie was in We Married Margo (2000) with Kevin Bacon. • You can play the game at the following URL: http://www.cs.virginia.edu/oracle/.
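A small sketch of the same idea as a shortest-path query with networkx; the toy co-appearance graph below contains only the three links mentioned on the slide, whereas a real actor network would be far larger:

```python
import networkx as nx

# Toy co-appearance graph built from the examples on the slide.
g = nx.Graph()
g.add_edge("Kevin Costner", "Kevin Bacon")             # JFK
g.add_edge("Julia Louis-Dreyfus", "Keith MacKechnie")  # Christmas Vacation (1989)
g.add_edge("Keith MacKechnie", "Kevin Bacon")          # We Married Margo (2000)

path = nx.shortest_path(g, "Julia Louis-Dreyfus", "Kevin Bacon")
print(path)           # ['Julia Louis-Dreyfus', 'Keith MacKechnie', 'Kevin Bacon']
print(len(path) - 1)  # Bacon number: 2
```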

  26. The Erdos Number • A similar idea is also used in the mathematical society and is called the Erdös number of a researcher. • Paul Erdös (1913–1996), wrote hundreds of mathematical research papers in many different areas, many in collaboration with others. • There is a link between any two mathematicians if they co-authored a paper. • Paul Erdös is the root of the mathematical research network and his Erdös number is 0. • Erdös’s co-authors have Erdös number 1. • People other than Erdös who have written a joint paper with someone with Erdös number 1 but not with Erdös have Erdös number 2, and so on.

  27. Running Example

  28. Hijackers by Flight

  29. Automatic Layout of Networks: Pretty Graph Drawing

  30. Motivation I • In order to display large networks on the screen we need to use automatic layout algorithms. These algorithms display the graphs in an aesthetic way without any user intervention. • The most commonly used aesthetic criteria are to expose symmetries and to make the drawing as compact as possible, or alternatively to fill the space available for the drawing.

  31. Motivation II • Many of the “higher-level” aesthetic criteria are implicit consequences of: • minimized number of edge crossings • evenly distributed edge length • evenly distributed vertex positions on the graph area • sufficiently large vertex-edge distances • sufficiently large angular resolution between edges.

  32. Disadvantages of the Spring-Based Methods • They are computationally expensive, and hence minimizing the energy function for large graphs is computationally prohibitive. • Since all the methods rely on heuristics, there is no guarantee that the "best" layout will be found. • The methods behave as black boxes, and hence it is almost impossible to integrate additional constraints on the layout (such as fixing the positions of certain vertices, or specifying the relative ordering of the vertices). • Even when the graphs are planar it is quite possible that we will get edge crossings. • The methods try to optimize only the placement of vertices and edges, while ignoring the exact shape of the vertices or the fact that vertices may have labels.

  33. Kamada and Kawai’s (KK) Method

  34. Fruchterman Reingold (FR) Method
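The KK and FR slides are figures in the original deck. As a hedged illustration, networkx's spring_layout implements the Fruchterman-Reingold force-directed algorithm; the graph below is a small stand-in, not the hijacker network from the running example:

```python
import networkx as nx
import matplotlib.pyplot as plt

# Small toy graph standing in for a real network.
g = nx.krackhardt_kite_graph()

# spring_layout is networkx's Fruchterman-Reingold force-directed layout;
# the seed fixes the otherwise random initial placement.
pos = nx.spring_layout(g, seed=42)
nx.draw(g, pos, with_labels=True, node_color="lightblue")
plt.show()
```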

  35. Classic Graph Operations

  36. Finding the shortest Path (from Atta)

  37. A better Visualization

  38. Centrality

  39. Degree • If the graph is undirected then the degree of a vertex v ∈ V is the number of other vertices that are directly connected to it. • degree(v) = |{(v1, v2) ∈ E | v1 = v or v2 = v}| • If the graph is directed then we can talk about in-degree or out-degree. An edge (v1, v2) ∈ E in the directed graph leads from vertex v1 to v2. • In-degree(v) = |{(v1, v) ∈ E}| • Out-degree(v) = |{(v, v2) ∈ E}|
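A hedged sketch of these definitions with networkx; the graph and its edges are made up purely for illustration:

```python
import networkx as nx

dg = nx.DiGraph()
dg.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")])

print(dg.in_degree("c"))                # 2  (edges a->c and b->c lead into c)
print(dg.out_degree("c"))               # 1  (edge c->a leads out of c)
print(dg.to_undirected().degree("c"))   # 2  (undirected neighbors: a and b)
```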

  40. Degree of the Hijackers

  41. Closeness Centrality - Motivation • Degree centrality measures might be criticized because they only take into account the direct connections that an entity has, rather than indirect connections to all other entities. • One entity might be directly connected to a large number of entities that might be pretty isolated from the network. Such an entity is central only in a local neighborhood of the network.

  42. Closeness Centrality • This measure is based on the calculation of the geodesic distance between the entity and all other entities in the network. • We can either use directed or undirected geodesic distances between the entities. • The sum of these geodesic distances for each entity is the "farness" of the entity from all other entities. • We can convert this into a measure of closeness centrality by taking the reciprocal. • In addition, we can normalize the closeness measure by dividing it by the closeness measure of the most central entity.
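For illustration, networkx computes a closely related normalized variant directly, dividing (n − 1) by the farness rather than by the most central entity's score; the path graph below is a toy example, not from the slides:

```python
import networkx as nx

g = nx.path_graph(5)              # vertices 0-1-2-3-4 connected in a line
cc = nx.closeness_centrality(g)   # (n - 1) / farness for each vertex
print(max(cc, key=cc.get))        # 2: the middle vertex is closest to all others
```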

  43. Closeness: Formally • Let d(v1, v2) = the geodesic distance between v1 and v2, i.e., the number of edges on a shortest path from v1 to v2.
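The formula on this slide is an image in the original deck; a standard formulation consistent with the farness/reciprocal/normalization description above is:

\[
\mathrm{farness}(v) = \sum_{u \neq v} d(v,u), \qquad
\mathrm{closeness}(v) = \frac{1}{\mathrm{farness}(v)}, \qquad
\mathrm{closeness}_{\mathrm{rel}}(v) = \frac{\mathrm{closeness}(v)}{\max_{u} \mathrm{closeness}(u)}
\]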

  44. Closeness of the Hijackers

  45. Betweenness Centrality • Betweenness centrality measures how effectively a vertex connects the various parts of the network. • The main idea behind betweenness centrality is that entities that act as mediators have more power. Entities that lie on many geodesic paths between other pairs of entities are more powerful, since they control the flow of information between those pairs.

  46. Betweenness - Formally • Highest possible betweenness (used for normalization). • gjk = the number of geodesic paths that connect vj with vk. • gjk(vi) = the number of geodesic paths that connect vj with vk and pass through vi.
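The equations on this slide are images in the original deck; the standard betweenness formula that matches these definitions, with normalization by the highest possible betweenness of an undirected graph, is:

\[
C_B(v_i) \;=\; \sum_{\substack{j < k \\ j \neq i,\; k \neq i}} \frac{g_{jk}(v_i)}{g_{jk}},
\qquad
C_B'(v_i) \;=\; \frac{C_B(v_i)}{(n-1)(n-2)/2}
\]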

  47. Betweenness of the Hijackers

  48. Eigenvector Centrality • The main idea behind eigenvector centrality is that entities receiving many communications from other well-connected entities will be better and more valuable sources of information, and hence are considered central. • The eigenvector centrality scores correspond to the values of the principal eigenvector of the adjacency matrix M. • Formally, the centrality vector v satisfies the equation Mv = λv, where λ is the corresponding (largest) eigenvalue and M is the adjacency matrix.
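A small sketch of the computation; the graph is illustrative, and networkx's eigenvector_centrality obtains the principal eigenvector of the adjacency matrix by power iteration:

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")])

ec = nx.eigenvector_centrality(g)   # principal eigenvector of the adjacency matrix
for v, score in sorted(ec.items(), key=lambda kv: -kv[1]):
    print(f"{v}: {score:.3f}")      # "c" scores highest: it is tied to the best-connected vertices
```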

  49. Eigenvector Centralities of the Hijackers

  50. Power Centrality • Given an adjacency matrix M, the power centrality of vertex i (denoted ci) is given by the formula below. • α is used to normalize the score; the normalization parameter is automatically selected so that the sum of squares of the vertices' centralities equals the number of vertices in the network. • β is an attenuation factor that controls the effect that the power centralities of the neighboring vertices have on the power centrality of the vertex.
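The equation on this slide is an image in the original deck; the standard Bonacich power-centrality formulation that matches the α and β descriptions above is:

\[
c_i(\alpha,\beta) \;=\; \sum_{j} \bigl(\alpha + \beta\, c_j(\alpha,\beta)\bigr)\, M_{ij},
\qquad\text{or in matrix form}\qquad
c(\alpha,\beta) \;=\; \alpha\,(I - \beta M)^{-1} M\,\mathbf{1}.
\]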
