Searching the Web

  1. Searching the Web CS3352

  2. Searching the Web • Three forms of searching • Specific queries → encyclopaedias, libraries • Exploit hyperlink structure • Broad queries → web directories • Web directories classify web documents by subject • Vague queries → search engines • index portions of the web

  3. Problem with the data • Distributed data • High percentage of volatile data – much is dynamically generated • Large volume • June 2000: Google's full-text index covered 560 million URLs • Unstructured data • GIFs, PDFs, etc. • Redundant data – mirrors (30% of pages are near-duplicates) • Quality of data • false, poorly written, invalid, mis-spelt • Heterogeneous data – media, formats, languages, alphabets

  4. Users and the Web • How to specify a query? • How to interpret answers? • Ranking • Relevance selection • Summary presentation • Large-document presentation • Main purposes: research, leisure, business, education • 80% do not modify the query • 85% look at the first screen only • 64% of queries are unique • 25% of users use single keywords • A problem for polysemous words and synonyms

  5. Web search • All queries answered without accessing the texts – by the indices alone • Local copies of web pages are expensive (Google cache) • Remote page access is unrealistic • Links: link topology, link popularity, link quality, who links to whom • Page structure: words in a heading count for more than words in body text, etc. • Sites: sub-collections of documents, mirror-site detection • Names • Presenting summaries • Community identification • Indexing • Refresh rate • Similarity engine • Ranking scheme • Caching and popularity measures
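
The first bullet, answering queries from the indices alone without touching the page texts, is the inverted-index idea. A minimal sketch in Python, with made-up documents and a made-up query purely for illustration:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def answer(index, query):
    """Answer a conjunctive query from the index alone, never re-reading the texts."""
    postings = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*postings) if postings else set()

docs = {1: "web search engines index portions of the web",
        2: "web directories classify documents by subject",
        3: "search engines answer vague queries"}
index = build_inverted_index(docs)
print(answer(index, "web search"))   # {1}
```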

  6. Spamming • Most search engines have rules against • invisible text, • meta-tag abuse, • heavy repetition • "domain spam" – the overt submission of "mirror" sites in an attempt to dominate the listings for particular terms

  7. Excite Spamming • Excite screens out spamming before adding a page to its web page index. • If it finds a string of words such as: • money money money money money money money • it will replace the excess repetition, so that essentially the string becomes: • money xxxxx xxxxx xxxxx xxxxx xxxxx xxxxx • The more unusual repetition Excite detects, the more heavily it will penalize the page. • Excite does not penalize the use of hidden text, but penalties apply if hidden text is used to disguise spam content. • Excite penalises "domain spam".
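
The exact filtering rules Excite used are not public; the sketch below only illustrates the kind of transformation shown on the slide, masking every repeated word in a run after its first occurrence:

```python
def collapse_repetition(text, placeholder="xxxxx"):
    """Keep the first word of a run of identical words and mask the rest."""
    out, prev = [], None
    for word in text.split():
        out.append(placeholder if word.lower() == prev else word)
        prev = word.lower()
    return " ".join(out)

print(collapse_repetition("money money money money money money money"))
# money xxxxx xxxxx xxxxx xxxxx xxxxx xxxxx
```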

  8. Centralised architecture • Crawler-indexer (most search engines) • Crawler • Also called robot, spider, wanderer, walker, knowbot • A program that traverses the web to send new or updated pages to a main server, where they are indexed • Runs on a local server and sends requests to remote servers • Centralised use of the index to answer queries
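
A minimal sketch of the crawler half of this architecture, using only the Python standard library. The seed URL is a placeholder; a real crawler would also respect robots.txt, rate-limit its requests, and hand each fetched page to the central indexer:

```python
import re
import urllib.request
from collections import deque

def crawl(seed, max_pages=10):
    """Breadth-first traversal: fetch a page, record it, enqueue its out-links."""
    seen, queue, fetched = {seed}, deque([seed]), {}
    while queue and len(fetched) < max_pages:
        url = queue.popleft()
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue                     # unreachable or non-text page: skip it
        fetched[url] = html              # in a real system this is sent to the indexer
        for link in re.findall(r'href="(http[^"]+)"', html):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return fetched

# pages = crawl("https://example.com/")  # placeholder seed URL
```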

  9. Centralised crawler-indexer architecture • [Diagram: interface, query engine, index, indexer, crawler] • Example (AltaVista, 1998): 20 multi-processor machines, 130 GB of RAM, 500 GB of disk space

  10. Distributed architecture • Harvest: harvest.transarc.com • Gatherers: • collect and extract indexing information from one or more web servers at periodic intervals • Brokers: • provide the indexing mechanism and query interface to the gathered data • retrieve information from gatherers or other brokers, incrementally updating their indices
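
A loose sketch of the gatherer/broker split (not the actual Harvest protocol or record format): a gatherer periodically extracts per-page indexing records from a site, and a broker merges them incrementally into the index it uses to answer queries. The site contents below are invented:

```python
class Gatherer:
    """Collects and extracts indexing information (term lists) from one web server."""
    def __init__(self, site_pages):
        self.site_pages = site_pages          # {url: text}; stands in for periodic fetching

    def gather(self):
        return {url: set(text.lower().split()) for url, text in self.site_pages.items()}

class Broker:
    """Provides the query interface; incrementally updates its index from gatherers."""
    def __init__(self):
        self.index = {}                       # term -> set of urls

    def update_from(self, gatherer):
        for url, terms in gatherer.gather().items():
            for term in terms:
                self.index.setdefault(term, set()).add(url)

    def query(self, term):
        return self.index.get(term.lower(), set())

broker = Broker()
broker.update_from(Gatherer({"http://site-a/page1": "distributed web indexing"}))
print(broker.query("indexing"))               # {'http://site-a/page1'}
```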

  11. Harvest architecture • [Diagram: users query brokers; a replication manager replicates brokers; gatherers collect pages from web sites into an object cache]

  12. Ranking algorithms • Variations of the Boolean and vector space models • TF × IDF plus • hyperlinks between pages • pages pointed to by a retrieved page • pages that point to a retrieved page • Popularity: number of hyperlinks pointing to a page • Relatedness: number of hyperlinks in common between pages, or pages referenced by the same pages • WebQuery • PageRank (Google) • HITS (Clever)
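
A sketch of the "TF × IDF plus links" idea: a plain vector-model text score blended with a popularity term derived from in-link counts. The blend weight alpha and its 0.7 default are illustrative assumptions, not taken from any particular engine:

```python
import math
from collections import Counter

def tfidf_scores(query, docs):
    """docs: {doc_id: text}. Sum of tf-idf weights of the query terms per document."""
    n_docs = len(docs)
    tokenised = {d: Counter(text.lower().split()) for d, text in docs.items()}
    df = Counter()                                   # document frequency per term
    for counts in tokenised.values():
        df.update(counts.keys())
    scores = {}
    for d, counts in tokenised.items():
        s = 0.0
        for term in query.lower().split():
            if counts[term]:
                s += (1 + math.log(counts[term])) * math.log(n_docs / df[term])
        scores[d] = s
    return scores

def combined_scores(query, docs, in_links, alpha=0.7):
    """Blend the text score with link popularity (number of hyperlinks to a page)."""
    text = tfidf_scores(query, docs)
    max_links = max(in_links.values(), default=1) or 1
    return {d: alpha * text[d] + (1 - alpha) * in_links.get(d, 0) / max_links
            for d in docs}
```

Popularity here is just the raw in-link count normalised to [0, 1]; PageRank and HITS, covered later, replace that count with more refined link-based scores.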

  13. Let's Use Links!

  14. Metasearch • A web server that sends the query to • several search engines • web directories • databases • Collects the results • Unifies them (data fusion) • Aim: better coverage • Issues • Translation of the query • Uniform results (fusing rankings, e.g. boosting pages retrieved by several engines) • Wrappers
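
One simple way to unify the rankings coming back from several engines is reciprocal-rank fusion; the slide does not prescribe a method, so this is only an illustration, and the engine result lists are made up:

```python
def reciprocal_rank_fusion(result_lists, k=60):
    """result_lists: one ranked list of URLs per engine. Higher fused score = better."""
    scores = {}
    for results in result_lists:
        for rank, url in enumerate(results, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

engine_a = ["http://a.example/", "http://b.example/", "http://c.example/"]
engine_b = ["http://b.example/", "http://a.example/", "http://d.example/"]
print(reciprocal_rank_fusion([engine_a, engine_b]))
# pages retrieved by several engines (a and b) are ranked above c and d
```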

  15. Google • The best web engine: • comprehensive and relevant results • Biggest index • 580 million pages visited and recorded • Uses link data to reach another 500 million pages • Different kinds of index • smaller indexes containing a higher proportion of the web's most popular pages, as determined by Google's link analysis system • Index refresh • Updated monthly/weekly • Daily for popular pages • Serves queries from three data centres • two on the West Coast of the US, one on the East Coast

  16. Google: let this inspire you… • Larry Page, Co-founder & Chief Executive Officer • Sergey Brin, Co-founder & President • PhD students at Stanford

  17. Google Overview • Crawls the web to create its listings. • Combines traditional IR text matching with extremely heavy use of link popularity to rank the pages it has indexed. • Other services also use link popularity, but none to the extent that Google does.

  18. Citation Importance Ranking

  19. Google links • Submission: • Add URL page (no need to do a "deep" submit) • The best way to ensure that your site is indexed is to build links: the more other sites point at you, the more likely you are to be crawled and ranked well. • Crawling and index depth: • Google aims to refresh its index on a monthly basis • Even if Google doesn't actually index a page, it may still return it in a search, because it makes extensive use of the text within hyperlinks. • This text is associated with the pages the links point at, and it makes it possible for Google to find matching pages even when those pages cannot themselves be indexed.
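
A sketch of the anchor-text trick described in the last bullets: the words inside a link are credited to the page the link points at, so that page can match a query even if it was never fetched. The link data here is invented:

```python
from collections import defaultdict

def index_anchor_text(links):
    """links: (source_url, target_url, anchor_text) triples. Returns term -> target pages."""
    index = defaultdict(set)
    for _source, target, anchor in links:
        for term in anchor.lower().split():
            index[term].add(target)      # the text is associated with the page linked to
    return index

links = [("http://blog.example/", "http://maps.example/", "interactive travel maps"),
         ("http://news.example/", "http://maps.example/", "travel planning")]
anchor_index = index_anchor_text(links)
print(anchor_index["travel"])            # {'http://maps.example/'}, even if never crawled
```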

  20. Google Relevancy (1) • Google ranks web pages based on the number, quality and content of links pointing at them (citations). • Number of links • All things being equal, a page with more links pointing at it will do better than a page with few or no links to it. • Link quality • Numbers aren't everything. A single link from an important site might be worth more than many links from relatively unknown sites.

  21. Google Relevancy (2) • Link content • The text in and around links relates to the page they point at. For a page to rank well for "travel", it would need to have many links that use the word travel in them or near them on the page. It also helps if the page itself is textually relevant for travel. • Ranking boosts for text styles • The appearance of terms in bold text, in header text, or in a large font size is all taken into account. None of these are dominant factors, but they do figure into the overall equation.

  22. PageRank • Usage simulation & citation importance ranking: • Based on a model of a Web surfer who follows links and makes occasional haphazard jumps, arriving at certain places more frequently than others. • The user navigates randomly: • jumps to a random page with probability p • follows a random hyperlink from the current page with probability 1 − p • never goes back to a previously visited page by following a previously traversed link backwards • Google finds a single type of universally important page – intuitively, locations that are heavily visited in a random traversal of the Web's link structure.

  23. PageRank • The process is modelled by a Markov chain; the probability of being at each page is computed, and p is set by the system • Wj = PageRank of page j • ni = number of outgoing links on page i • m = number of nodes in G, that is, the number of Web pages in the collection • A page's contribution is normalised by the number of links on the page • Computed using an iterative method: • Wj = p/m + (1 − p) · Σ(i,j)∈G, i≠j Wi/ni
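
A small sketch of iterating the formula above, Wj = p/m + (1 − p) · Σ Wi/ni over the pages i that link to j. The three-page graph is invented, and dangling pages are simply skipped, which is only one of several possible treatments:

```python
def pagerank(graph, p=0.15, iterations=50):
    """graph: {page: [pages it links to]}. Returns {page: weight W}."""
    pages = list(graph)
    m = len(pages)
    w = {page: 1.0 / m for page in pages}            # start from a uniform distribution
    for _ in range(iterations):
        new = {page: p / m for page in pages}        # random-jump term p/m
        for i, out_links in graph.items():
            if not out_links:
                continue                             # dangling page: contributes nothing here
            share = (1 - p) * w[i] / len(out_links)  # (1 - p) * Wi / ni
            for j in out_links:
                new[j] += share
        w = new
    return w

toy = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}      # invented 3-page graph
print(pagerank(toy))                                 # "c" ends up with the largest weight
```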

  24. PageRank • [Figure: worked example – pages Wi1, Wi2, Wi3 link to page Wj, which links on to pages Wk1 and Wk2; Wj's value is p/m plus (1 − p) times the sum of each incoming Wi divided by its out-degree ni]

  25. Google Content • Performs a full-text index of the pages it visits. • It gathers all visible text. • It does not read either the meta keywords or description tags. • Descriptions are formed automatically by extracting the most relevant portions of pages. • If a page has no description, it is probably because Google has never actually visited it.
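
A crude sketch of forming a description automatically from the visible page text, as the slide describes: pick the sentence that covers the most query terms. This is only an illustration, not Google's actual snippet algorithm:

```python
import re

def extract_description(text, query, max_len=160):
    """Return the sentence covering the most query terms, truncated to max_len characters."""
    terms = set(query.lower().split())
    sentences = re.split(r"(?<=[.!?])\s+", text)
    best = max(sentences,
               key=lambda s: len(terms & set(re.findall(r"[a-z0-9]+", s.lower()))))
    return best[:max_len]

page = ("Searching the web is hard. Search engines index portions of the web. "
        "Directories classify documents by subject.")
print(extract_description(page, "search engines web"))
# Search engines index portions of the web.
```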

  26. Google Spamming • The link popularity ranking system leaves it relatively immune to traditional spamming techniques. • Google goes beyond the text on pages to decide how good they are. No links, low rank. • A common spam idea: • create a lot of new pages within a site that link to a single page, in an effort to boost that page's popularity, perhaps spreading these pages out across a network of sites. • Unlikely to work; do real link building instead, with non-competitive sites that are related to yours.

  27. Site identification

  28. AltaVista

  29. HITS: Hypertext Induced Topic Search • The ranking scheme depends on the query • Considers the set of pages that point to, or are pointed at by, pages in the answer set S • Implemented in IBM's Clever prototype • Scientific American article: • http://www.sciam.com/1999/0699issue/0699raghavan.html

  30. HITS (2) • Authorities: • pages in S that have many links pointing to them • Hubs: • pages that have many outgoing links • Positive two-way feedback: • better authority pages receive incoming edges from good hubs • better hub pages have outgoing edges to good authorities

  31. Authorities and Hubs • [Figure: authorities (blue) and hubs (red)]

  32. HITS: two-step iterative process • Assign initial scores to candidate hubs and authorities on a particular topic in the set of pages S • Use the current guesses about the authorities to improve the estimates of hubs – locate all the best authorities • Use the updated hub information to refine the guesses about the authorities – determine where the best hubs point most heavily and call these the good authorities • Repeat until the scores converge to the principal eigenvector of the link matrix of S, which can then be used to determine the best authorities and hubs • H(p) = Σ A(u) over all u ∈ S such that p → u • A(p) = Σ H(v) over all v ∈ S such that v → p
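
A sketch of the two-step iteration using the update rules above, H(p) = Σ A(u) over pages u that p points to and A(p) = Σ H(v) over pages v that point to p, normalising after each round so the scores converge. The small link set S is invented:

```python
import math

def hits(graph, iterations=50):
    """graph: {page: [pages it links to]}, restricted to the base set S.
    Returns (authority, hub) score dictionaries."""
    pages = set(graph) | {v for targets in graph.values() for v in targets}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # authority step: A(p) = sum of H(v) over pages v that link to p
        auth = {p: sum(hub[v] for v in pages if p in graph.get(v, [])) for p in pages}
        # hub step: H(p) = sum of A(u) over pages u that p links to
        hub = {p: sum(auth[u] for u in graph.get(p, [])) for p in pages}
        # normalise so the scores converge instead of growing without bound
        for scores in (auth, hub):
            norm = math.sqrt(sum(s * s for s in scores.values())) or 1.0
            for p in scores:
                scores[p] /= norm
    return auth, hub

toy = {"h1": ["a1", "a2"], "h2": ["a1"], "a1": [], "a2": []}   # invented base set
auth, hub = hits(toy)
print(max(auth, key=auth.get), max(hub, key=hub.get))          # a1 h1
```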

  33. HITS issues (3) • Restrict the set of pages to a maximum number • Doesn't work with non-existent, repeated or automatically generated links • Weight links by their surrounding content • Diffusion of the topic • a more general topic contains the original answer • Analyse the content of each page and score it, combining the link weight with the page score • Sub-grouping links • HITS is used for web community identification

  34. Cybercommunities

  35. Google vs Clever • Google assigns initial rankings and retains them independently of any query, which enables faster response. It looks only in the forward direction, from link to link. • Clever assembles a different root set for each search term and then prioritizes those pages in the context of that particular query. It also looks backward from an authoritative page to see what locations are pointing there. • Humans are innately motivated to create hub-like content expressing their expertise on specific topics.

  36. Autonomy • High-performance pattern matching based on Bayesian inference networks • Identifies patterns in text, based on usage and term frequency, that correspond to concepts • X% probability that a document is about a subject • Encode the signature • Categorize it • Link it to related documents with the same signature • Not just a search engine tool.

  37. Human-powered searching • Ask Jeeves • an answer service that uses human editors to build the knowledge base. Editors proactively suggest questions and watch what people are actually searching for. • Yahoo • uses humans to organize the web. Human editors find sites or review submissions, then place those sites in one or more "categories" that are relevant to them.

  38. Research Issues • Modelling • Querying • Distributed architecture • Ranking • Indexing • Dynamic pages • Browsing • User interface • Duplicated data • Multimedia

  39. Further reading • http://searchenginewatch.com/ • http://www.clpgh.org/clp/Libraries/search.html
