
Web Technologies Search Engines


Presentation Transcript


  1. ITEC547 Text Mining Web Technologies Search Engines

  2. Outline of Presentation • 1. Early Search Engines • 2. Indexing Text for Search • 3. Indexing Multimedia • 4. Queries • 5. Searching an Index

  3. 1 Early Search Engines History, Problems, Solutions …

  4. Rest In Peace Open Text (1995-1997) Magellan (1995-2001) Infoseek (Go) (1995-2001) Snap (NBCi) (1997-2001) Direct Hit (1998-2002)

  5. Changing Lycos (1994; reborn 1999) WebCrawler (1994; reborn 2001) Yahoo (1994; reborn 2002) Excite (1995; reborn 2001) HotBot (1996; reborn 2002) Ask Jeeves (1998; reborn 2002)

  6. Same As They Ever Were AltaVista (1995- ) LookSmart (1996- ) Overture (1998- )

  7. The New Breed Google (1998- ) AllTheWeb (1999- ) Teoma (2000- ) WiseNut (2001- )

  8. Information Retrieval • The indexing and retrieval of textual documents. • Searching for pages on the World Wide Web is the most recent and perhaps most widely used IR application. • Concerned firstly with retrieving documents relevant to a query. • Concerned secondly with retrieving efficiently from large sets of documents.

  9. Typical IR Task • Given: • A corpus of textual natural-language documents. • A user query in the form of a textual string. • Find: • A ranked set of documents that are relevant to the query.

  10. Typical IR System Architecture (diagram): a query string and a document corpus are fed into the IR system, which returns a ranked list of documents (1. Doc1, 2. Doc2, 3. Doc3, …).

  11. EARLY SEARCH ENGINES • Initially used in academic or specialized domains. • Legal and specialized domains consume a large amount of textual info • Use of expensive proprietary hardware and software • High computational and storage requirements • Boolean query model • Iterative search model • Fetch documents in many steps

  12. Medline of the National Library of Medicine • Developed in the late 1960s and made available in 1971 • Based on inverted file organization • Boolean query language • Queries broken down into numbered segments • Results of one query segment fed into the next • Each user assigned a time slot • If the cycle is not completed in the time slot, the most recent results are returned • Query and browse operations performed as separate steps • Following a query, results are viewed • Modifications start a new query-browse cycle

  13. Dialog • Broader subject content • Specialized collections of data available on payment • Boolean queries • Each term numbered and executed separately, then combined • Word patterns • For multiword queries, the proximity operator W

  14. 2 Indexing Text for Search Reduce retrieval time, improve hit accuracy

  15. Why Index • Simplest approach: search the text sequentially • Feasible only when the text is small • Static or semi-static index • Inverted index • a mapping from content, such as words or numbers, to its locations in a database file, in a document, or in a set of documents • Postings can record documents / positions in documents / weights • Fuzzy matching / stemming / stopwords

  16. Example Inverted Index • T1: "it is what it is" • T2: "what is it" • T3: "it is a banana" • "a": {2} • "banana": {2} • "is": {0, 1, 2} • "it": {0, 1, 2} • "what": {0, 1}
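
As a concrete illustration, here is a minimal Python sketch (not part of the original slides) that builds the document-level index above from the three example strings, using 0-based document ids:

```python
from collections import defaultdict

docs = ["it is what it is", "what is it", "it is a banana"]

# Map each term to the set of document ids (0-based) that contain it.
index = defaultdict(set)
for doc_id, text in enumerate(docs):
    for term in text.split():
        index[term].add(doc_id)

for term in sorted(index):
    print(term, sorted(index[term]))
# a [2], banana [2], is [0, 1, 2], it [0, 1, 2], what [0, 1]
```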

  17. Example Full Inverted Index • T1: "it is what it is" • T2: "what is it" • T3: "it is a banana" • "a": {(2, 2)} • "banana": {(2, 3)} • "is": {(0, 1), (0, 4), (1, 1), (2, 1)} • "it": {(0, 0), (0, 3), (1, 2), (2, 0)} • "what": {(0, 2), (1, 0)}
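
A sketch extending the previous one to the full (positional) index, where each posting is a (doc_id, position) pair as shown on the slide:

```python
from collections import defaultdict

docs = ["it is what it is", "what is it", "it is a banana"]

# Full inverted index: term -> set of (doc_id, position) pairs.
full_index = defaultdict(set)
for doc_id, text in enumerate(docs):
    for pos, term in enumerate(text.split()):
        full_index[term].add((doc_id, pos))

print(sorted(full_index["it"]))    # [(0, 0), (0, 3), (1, 2), (2, 0)]
print(sorted(full_index["what"]))  # [(0, 2), (1, 0)]
```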

  18. Inverted Index

  19. Inverted Index

  20. Google Index • A unique DocId associated with each URL • Hit: a word occurrence • wordID: 24-bit number • Word position • Font size relative to the rest of the document • Plain hit: in the document text • Fancy hit: in the URL, title, anchor text, or meta tags • Word occurrences of a web page are distributed across a set of barrels
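
A toy Python sketch of the idea of packing one hit compactly into a small integer; the field widths and layout below are illustrative assumptions, not Google's actual format:

```python
# Toy sketch: pack a "plain hit" (capitalization flag, relative font size,
# word position) into one small integer. Bit widths are illustrative only.
def pack_plain_hit(capitalized: bool, font_size: int, position: int) -> int:
    assert 0 <= font_size < 8 and 0 <= position < 4096
    return (int(capitalized) << 15) | (font_size << 12) | position

def unpack_plain_hit(hit: int):
    return bool(hit >> 15), (hit >> 12) & 0x7, hit & 0xFFF

hit = pack_plain_hit(capitalized=True, font_size=3, position=42)
print(unpack_plain_hit(hit))  # (True, 3, 42)
```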

  21. Architecture of the 1st Google Engine

  22. Architecture of the 1st Google Engine

  23. Architecture of the 1st Google Engine

  24. 3 Indexing Multimedia Broadcast and compress for seamless delivery

  25. Indexing Multimedia • Forming an index for multimedia • Use context: surrounding text • Add a manual description • Analyze automatically and attach a description

  26. 4 Queries

  27. Queries • Keywords • Proximity • Patterns • Phrases • Ranges • Weights of keywords • Spelling mistakes

  28. Queries • Boolean query • No relevance measure • May be hard to understand • Multimedia query • Find images of Everest • Find x-rays showing the human rib cage • Find companies whose stock prices have similar patterns

  29. Relevance • Relevance is a subjective judgment and may include: • Being on the proper subject. • Being timely (recent information). • Being authoritative (from a trusted source). • Satisfying the goals of the user and his/her intended use of the information (information need).

  30. Keyword Search • The simplest notion of relevance is that the query string appears verbatim in the document. • A slightly less strict notion is that the words in the query appear frequently in the document, in any order (bag of words).
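
A small Python sketch contrasting the two notions; the sample document and queries are made up for illustration:

```python
def verbatim_match(query: str, document: str) -> bool:
    # Strictest notion: the query string appears as-is in the document.
    return query.lower() in document.lower()

def bag_of_words_match(query: str, document: str) -> bool:
    # Looser notion: every query word appears somewhere in the document,
    # in any order.
    doc_words = set(document.lower().split())
    return all(word in doc_words for word in query.lower().split())

doc = "the new search engine indexes web pages quickly"
print(verbatim_match("search engine", doc))          # True
print(verbatim_match("engine web search", doc))      # False
print(bag_of_words_match("engine web search", doc))  # True
```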

  31. Problems with Keywords • May not retrieve relevant documents that include synonymous terms. • “restaurant” vs. “café” • “PRC” vs. “China” • May retrieve irrelevant documents that include ambiguous terms. • “bat” (baseball vs. mammal) • “Apple” (company vs. fruit) • “bit” (unit of data vs. act of eating)

  32. Relevance Feedback • The user enters query terms • Keywords may be weighted or not • Result links are returned • The user marks the relevant and irrelevant ones • If there is no negative feedback, the second (negative) term is 0 • The T's are terms from the relevant and irrelevant sets marked by the user
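
The slide alludes to a weighted query-update formula; a common textbook formulation is Rocchio's algorithm. Below is a minimal sketch under that assumption; the weights alpha, beta, gamma and the toy term space are illustrative, not taken from the slides:

```python
import numpy as np

def rocchio(query_vec, relevant_docs, irrelevant_docs,
            alpha=1.0, beta=0.75, gamma=0.15):
    # Move the query vector toward documents the user marked relevant and
    # away from documents the user marked irrelevant.
    new_q = alpha * np.asarray(query_vec, dtype=float)
    if relevant_docs:
        new_q += beta * np.mean(relevant_docs, axis=0)
    if irrelevant_docs:  # with no negative feedback this term is simply 0
        new_q -= gamma * np.mean(irrelevant_docs, axis=0)
    return np.clip(new_q, 0.0, None)  # keep term weights non-negative

# Toy term space: ["web", "search", "banana"]
q = [1.0, 1.0, 0.0]
print(rocchio(q, relevant_docs=[[0.9, 0.8, 0.0]],
              irrelevant_docs=[[0.0, 0.1, 0.9]]))
```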

  33. 5 Searching an Index

  34. Searching an Inverted Index • Tokenize the query and search the index vocabulary for each query token • Get the list of documents associated with each token • Combine the lists of documents using the constraints specified in the query
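
A minimal Python sketch of these three steps, assuming a document-level inverted index (as in slide 16) and AND semantics for combining the lists:

```python
def search(query, index):
    # 1. Tokenize the query and look up each token in the index vocabulary.
    postings = [index.get(token, set()) for token in query.lower().split()]
    if not postings:
        return set()
    # 2./3. Combine the document lists; here the constraint is AND
    #       (every query token must appear in the document).
    result = postings[0]
    for p in postings[1:]:
        result = result & p
    return result

index = {"it": {0, 1, 2}, "is": {0, 1, 2}, "banana": {2}, "what": {0, 1}}
print(search("what it", index))    # {0, 1}
print(search("it banana", index))  # {2}
```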

  35. Google Search 1. Tokenize the query and remove stopwords 2. Translate the query words into wordIDs using the lexicon 3. For every wordID, get the list of documents from the short inverted barrel and build a composite set of documents 4. Scan the composite list of documents 5. Skip to the next document if the current document does not match 6. Compute a rank using the query and document features 7. If there are no more documents, go to step 3 and use the full inverted barrels to find more docs 8. If there is a sufficient number of docs, go to step 5 9. Sort the final document list by rank
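
A toy sketch of the short-barrel/full-barrel fallback idea only; the data layout and names below are illustrative assumptions, not Google's actual structures:

```python
def two_tier_search(query_terms, short_barrel, full_barrel, enough=10):
    # Consult the small, high-value index first (e.g. title/anchor hits);
    # fall back to the full index only when too few documents are found.
    def lookup(barrel):
        postings = [barrel.get(t, set()) for t in query_terms]
        return set.intersection(*postings) if postings else set()

    docs = lookup(short_barrel)
    if len(docs) < enough:
        docs |= lookup(full_barrel)
    return docs

short = {"web": {1, 2}, "search": {2}}
full = {"web": {1, 2, 3, 4, 5}, "search": {2, 3, 5}}
print(two_tier_search(["web", "search"], short, full, enough=3))  # {2, 3, 5}
```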

  36. How are results ranked? • Weight type • Location: title, URL, anchor, body • Size: relative font size • Capitalization • Count of occurrences • Closeness (proximity)
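
A toy scoring function combining the weight types listed above; all numeric weights below are made-up illustrations, not the values any real engine uses:

```python
# Illustrative location weights (title counts more than body text, etc.).
LOCATION_WEIGHT = {"title": 5.0, "url": 4.0, "anchor": 3.0, "body": 1.0}

def score_hit(location, relative_font_size, capitalized):
    # Weight a single occurrence by where and how it appears.
    weight = LOCATION_WEIGHT.get(location, 1.0)
    weight *= 1.0 + 0.1 * relative_font_size  # larger font, larger weight
    if capitalized:
        weight *= 1.2
    return weight

def score_document(hits, proximity_bonus=0.0):
    # Occurrence count is captured by summing over all hits; a bonus is
    # added when the query terms occur close to each other.
    return sum(score_hit(*hit) for hit in hits) + proximity_bonus

hits = [("title", 2, True), ("body", 0, False), ("body", 0, False)]
print(score_document(hits, proximity_bonus=1.5))  # 10.7
```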

  37. Evaluation • Response time • Result quality • Recall: % of correct items that are selected • Precision: % of selected items that are correct
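
A minimal sketch of computing these two measures for one query; the retrieved and relevant sets are made up for illustration:

```python
def precision_recall(selected, correct):
    true_positives = len(selected & correct)
    # Precision: fraction of selected items that are correct.
    precision = true_positives / len(selected) if selected else 0.0
    # Recall: fraction of correct items that were selected.
    recall = true_positives / len(correct) if correct else 0.0
    return precision, recall

retrieved = {1, 2, 3, 4}  # documents the engine returned
relevant = {2, 3, 5}      # documents that are actually relevant
print(precision_recall(retrieved, relevant))  # (0.5, 0.666...)
```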

  38. Ranking Algorithms: Hyperlink • Popularity Ranking • Rank "popular" documents higher among the set of documents with specific keywords. • Determining "Popularity" • Access rate? • How to get accurate data? • Bookmarks? • Might be private? • Links to related pages? • Using a web crawler to analyze external links.

  39. Popularity/Prestige • transfer of prestige • a link from a popular page x to a page y is treated as conferring more prestige to page y than a link from a not-so-popular page z. • Count of In-links/Out-links

  40. Hypertext Induced Topic Search (HITS) • The HITS algorithm: • computes popularity using a set of related pages only. • Important web pages: cited by other important web pages or by a large number of less-important pages • Initially all pages have the same importance

  41. Hubs and Authorities • Hub - A page that stores links to many related pages • may not in itself contain actual information on a topic • Authority - A page that contains actual information on a topic • may not store links to many related pages • Each page gets a prestige value as a hub (hub-prestige), and another prestige value as an authority (authority-prestige).

  42. Hubs and Authorities in Twitter

  43. Hubs and Authorities algorithm • Locate and build the subgraph • Assign initial values to the hub and authority scores of each node • Run a loop until convergence • Assign the sum of the hub scores of all nodes y that link to node x to the authority score of node x • Assign the sum of the authority scores of all nodes y that node x links to, to the hub score of node x • Normalize the hub and authority scores of all nodes • Check for convergence: is the difference < threshold? • Return the list of nodes sorted in descending order of hub and authority scores
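
A minimal Python sketch of this loop; the node names and the toy link graph at the end are made up for illustration:

```python
import math

def hits(graph, iterations=50, tol=1e-8):
    # graph: node -> list of nodes it links to (the focused subgraph).
    nodes = set(graph) | {t for targets in graph.values() for t in targets}
    hub = {n: 1.0 for n in nodes}   # initial hub scores
    auth = {n: 1.0 for n in nodes}  # initial authority scores
    for _ in range(iterations):
        # Authority of x: sum of hub scores of all nodes y that link to x.
        new_auth = {n: 0.0 for n in nodes}
        for y, targets in graph.items():
            for x in targets:
                new_auth[x] += hub[y]
        # Hub of x: sum of authority scores of all nodes x links to.
        new_hub = {x: sum(new_auth[t] for t in graph.get(x, [])) for x in nodes}
        # Normalize both score vectors.
        a_norm = math.sqrt(sum(v * v for v in new_auth.values())) or 1.0
        h_norm = math.sqrt(sum(v * v for v in new_hub.values())) or 1.0
        new_auth = {n: v / a_norm for n, v in new_auth.items()}
        new_hub = {n: v / h_norm for n, v in new_hub.items()}
        # Convergence check: is the total change below the threshold?
        diff = sum(abs(new_auth[n] - auth[n]) + abs(new_hub[n] - hub[n])
                   for n in nodes)
        hub, auth = new_hub, new_auth
        if diff < tol:
            break
    return hub, auth

g = {"p1": ["p3", "p4"], "p2": ["p3"], "p3": [], "p4": ["p3"]}
hub, auth = hits(g)
print(sorted(auth, key=auth.get, reverse=True))  # "p3" is the top authority
```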

  44. PageRank Algorithm • Ranks based on citation statistics • In/out links • Rank of a page depends on the ranks of the pages that link to it.

  45. PageRank Algorithm • Locate and build the subgraph • Save the number of out-links from every node in an array • Assign a default PageRank to all nodes • Run a loop until convergence • Compute a new PageRank score for every node: sum, over every node y that links to it, the PageRank of y divided by the number of out-links of y, then add the default rank source • Check convergence: is the difference between the new and old PageRank < threshold?
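
A minimal Python sketch of this loop, using the standard damping-factor formulation of the rank source; the damping value 0.85 and the toy graph are common illustrative choices, not taken from the slides:

```python
def pagerank(graph, damping=0.85, iterations=100, tol=1e-8):
    # graph: node -> list of nodes it links to (the subgraph).
    nodes = set(graph) | {t for targets in graph.values() for t in targets}
    n = len(nodes)
    out_links = {x: len(graph.get(x, [])) for x in nodes}  # out-link counts
    rank = {x: 1.0 / n for x in nodes}                     # default PageRank
    for _ in range(iterations):
        new_rank = {}
        for x in nodes:
            # Sum the PageRank of every node y linking to x, divided by
            # y's number of out-links, then add the rank source term.
            incoming = sum(rank[y] / out_links[y]
                           for y, targets in graph.items() if x in targets)
            new_rank[x] = (1 - damping) / n + damping * incoming
        # Convergence check: is the total change below the threshold?
        converged = sum(abs(new_rank[x] - rank[x]) for x in nodes) < tol
        rank = new_rank
        if converged:
            break
    return rank

g = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(g))  # "c" ends up with the highest rank
```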

  46. ? But wait… There's Homework! 1- Explain web crawling and the general architecture of a web crawler. 2- What is the use of robots.txt? 3- Find a web crawler code and explain how it can be used to collect information on ? 4- Crawl social media to collect emu-related info.
