
Big Data course



Presentation Transcript


  1. Big Data course Imam Khomeini International University, 2019 Dr. Ali Khaleghi | Kamran Mahmoudi

  2. Session seven: Batch processing and MapReduce • Session objectives: • Information retrieval concepts • Apache Solr introduction

  3. Chapter 1-Boolean retrieval Definition of Information Retrieval: Information retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers).

  4. Chapter 1-Boolean retrieval An example Information Retrieval Problem

  5. Chapter 1-Boolean retrieval An example Information Retrieval Problem • Which plays of Shakespeare contain the words BRUTUS AND CAESAR but NOT CALPURNIA? • The simplest approach is linear scanning through all the text. • But for many purposes, we do need more: • To process large document collections quickly. • To allow more flexible matching operations (e.g. a NEAR operator). • To allow ranked retrieval.

  6. Chapter 1-Boolean retrieval Indexing: • Term-document incidence matrix: 1 if the play contains the word, 0 otherwise. • Query: Brutus AND Caesar BUT NOT Calpurnia

  7. Chapter 1-Boolean retrieval • So we have a 0/1 vector for each term. • To answer the query BRUTUS AND CAESAR AND NOT CALPURNIA: • Take the vectors for BRUTUS, CAESAR and CALPURNIA. • Complement the vector of CALPURNIA. • Do a (bitwise) AND on the three vectors: • 110100 AND 110111 AND 101111 = 100100
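
A quick sketch of this step in Perl (the 0/1 vectors are taken from the slides; CALPURNIA's uncomplemented vector 010000 is inferred from its complement 101111 above). Perl's string bitwise operators do the whole job:

  use strict;
  use warnings;

  # 0/1 incidence vectors, one bit per play
  my $brutus    = '110100';
  my $caesar    = '110111';
  my $calpurnia = '010000';

  # Complement CALPURNIA's vector, then AND the three vectors bitwise.
  (my $not_calpurnia = $calpurnia) =~ tr/01/10/;    # 010000 -> 101111
  my $answer = $brutus & $caesar & $not_calpurnia;  # '&' on strings is bitwise
  print "$answer\n";                                # prints 100100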

  8. Chapter 1-Boolean retrieval Result: Antony and Cleopatra and Hamlet (the two plays whose bits are 1 in 100100).

  9. Chapter 1-Boolean retrieval • Boolean retrieval model: • Assessing the effectiveness of an IR system: • Precision: what fraction of the returned results are relevant to the information need? • Recall: what fraction of the relevant documents in the collection were returned? • Difference between information need and query.

  10. Chapter 1-Boolean retrieval Bigger Collections • Consider N = 1 million documents, each with about 1000 words. • There are M = 500K distinct terms among these. • A 500K x 1M matrix has half a trillion 0's and 1's. • But it has no more than one billion 1's: each of the roughly 10^9 word occurrences produces at most one 1. • So the matrix is extremely sparse: at most 10^9 of 5 x 10^11 cells are 1, i.e. at least 99.8% are 0.

  11. Chapter 1-Boolean retrieval • Inverted index: for each term t, we store a list of all documents that contain t. • The sorted list of terms is the dictionary; each term's list of documents is its postings list.

  12. Chapter 1-Boolean retrieval • Inverted index construction: • Collect the documents to be indexed. • Tokenise the text, turning each document into a list of tokens. • Normalise the tokens. • Create the inverted index.

  13. Chapter 1-Boolean retrieval • Indexer steps: • Doc 1: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me. • Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.

  14. Chapter 1-Boolean retrieval • Sort by term • and then by docID

  15. Chapter 1-Boolean retrieval • Multiple term entries in a single document are merged. • Split into Dictionary and Postings. • Document frequency information is added. • (A small end-to-end sketch of these steps follows below.)
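
A minimal sketch of the indexer steps in Perl, using the two documents from slide 13. A hash stands in for the explicit sort-and-merge but yields the same dictionary/postings split; tokenisation and normalisation are deliberately crude (lowercase, split on non-letters):

  use strict;
  use warnings;

  my %docs = (
      1 => "I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.",
      2 => "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.",
  );

  my %postings;    # term => { docID => 1 }
  for my $docid (keys %docs) {
      # Tokenise and normalise: lowercase, split on anything that isn't a letter.
      for my $term (grep { length } split /[^a-z]+/, lc $docs{$docid}) {
          $postings{$term}{$docid} = 1;    # duplicate entries within a doc merge here
      }
  }

  # Dictionary (with document frequency) and postings lists, sorted by term.
  for my $term (sort keys %postings) {
      my @list = sort { $a <=> $b } keys %{ $postings{$term} };
      printf "%-10s df=%d -> %s\n", $term, scalar @list, join(' ', @list);
  }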

  16. Chapter 1-Boolean retrieval • Exercise 1-1: Draw the incidence matrix and the inverted index for the documents below: Doc 1: new home sales top forecasts Doc 2: home sales rise in july Doc 3: increase in home sales in july Doc 4: july new home sales rise

  17. Chapter 1-Boolean retrieval • Processing Boolean queries: BRUTUS AND CALPURNIA • Locate BRUTUS in the Dictionary. • Retrieve its postings. • Locate CALPURNIA in the Dictionary. • Retrieve its postings. • Intersect the two postings lists.

  18. Chapter 1-Boolean retrieval • Intersecting two postings lists (a sketch of the merge follows below):
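
A sketch of the standard linear-time merge behind this step; both lists must be sorted by docID, and the docIDs below are illustrative:

  use strict;
  use warnings;

  # Intersect two docID-sorted postings lists in O(m + n) time.
  sub intersect {
      my ($p1, $p2) = @_;    # array refs
      my @answer;
      my ($i, $j) = (0, 0);
      while ($i < @$p1 && $j < @$p2) {
          if    ($p1->[$i] == $p2->[$j]) { push @answer, $p1->[$i]; $i++; $j++ }
          elsif ($p1->[$i] <  $p2->[$j]) { $i++ }
          else                           { $j++ }
      }
      return @answer;
  }

  # e.g. postings for BRUTUS and CALPURNIA
  print join(' ', intersect([1, 2, 4, 11, 31, 45, 173], [2, 31, 54, 101])), "\n";   # 2 31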

  19. What's up now? Image search & semantics

  20. Why image search!? • Why are we interested in searching for images? • How much of the data is in text format? (Probably a lot.) • Can you always describe your information need using keywords?

  21. In other words • In this relatively new method of information retrieval, a query does not consist of text but of an image file. The search results lead to images on the WWW and also to related documents. Other names for this method are: • Search(ing) by example • Reverse image search(ing) • Reverse image lookup = RIL • Backwards image search(ing) • Inside search(ing) • Content-based image retrieval = CBIR

  22. Searching for related images [1]

  23. Ex. 1: image search reveals more info [1]

  24. Ex. 2: the evolution: we need not only the image features but also the semantics! [1]

  25. The search engine understands the context! [1]

  26. The magic business! • Let's foresee the future of information retrieval. • Search engines will become: • Knowledge engines • Answer engines

  27. Answers!

  28. Ch. 8: Web Crawling. By Filippo Menczer, Indiana University School of Informatics. In: Bing Liu, Web Data Mining, Springer, 2007.

  29. Outline Motivation and taxonomy of crawlers Basic crawlers and implementation issues Universal crawlers Preferential (focused and topical) crawlers Evaluation of preferential crawlers Crawler ethics and conflicts New developments: social, collaborative, federated crawlers

  30. Q: How does a search engine know that all these pages contain the query terms? A: Because all of those pages have been crawled

  31. Crawler: basic idea • (Figure: the crawl expands link by link outward from the starting pages, the seeds.)

  32. Many names • Crawler • Spider • Robot (or bot) • Web agent • Wanderer, worm, … • And famous instances: googlebot, scooter, slurp, msnbot, …

  33. Motivation for crawlers • Support universal search engines (Google, Yahoo, MSN/Windows Live, Ask, etc.) • Vertical (specialized) search engines, e.g. news, shopping, papers, recipes, reviews, etc. • Business intelligence: keep track of potential competitors, partners • Monitor Web sites of interest • Evil: harvest emails for spamming, phishing, … • Can you think of some others?

  34. A crawler within a search engine • (Figure: googlebot crawls the Web into a page repository; text & link analysis build the text index and PageRank; the ranker combines these to produce query hits.)

  35. One taxonomy of crawlers (figure) • Many other criteria could be used: incremental, interactive, concurrent, etc.

  36. Outline Motivation and taxonomy of crawlers Basic crawlers and implementation issues Universal crawlers Preferential (focused and topical) crawlers Evaluation of preferential crawlers Crawler ethics and conflicts New developments: social, collaborative, federated crawlers

  37. Basic crawlers This is a sequential crawler Seeds can be any list of starting URLs Order of page visits is determined by frontier data structure Stop criterion can be anything

  38. Graph traversal (BFS or DFS?) • Breadth-first search: implemented with a QUEUE (FIFO); finds pages along shortest paths; if we start with "good" pages, this keeps us close (and maybe near other good stuff). • Depth-first search: implemented with a STACK (LIFO); wanders away ("lost in cyberspace").

  39. A basic crawler in Perl • Queue: a FIFO list (shift and push):

  my @frontier = read_seeds($file);        # seed URLs prime the queue
  my $tot = 0;
  while (@frontier && $tot < $max) {
      my $next_link = shift @frontier;     # dequeue: FIFO gives breadth-first
      my $page = fetch($next_link);        # download the page
      add_to_index($page);                 # index its text
      my @links = extract_links($page, $next_link);
      push @frontier, process(@links);     # filter links, then enqueue them
      $tot++;                              # count pages so the crawl terminates
  }
  # Replacing shift with pop would make the frontier a stack: depth-first.

  40. Implementation issues • Don't want to fetch the same page twice! Keep a lookup table (hash) of visited pages. What if a page is not yet visited but is already in the frontier? • The frontier grows very fast! May need to prioritize for large crawls. • The fetcher must be robust! Don't crash if a download fails; use a timeout mechanism. • Determine the file type to skip unwanted files. Can try using extensions, but they are not reliable. Can issue 'HEAD' HTTP commands to get Content-Type (MIME) headers, at the cost of extra Internet requests. (A sketch follows below.)
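
A sketch of the visited-page check and the HEAD-based type check, using the stock LWP::UserAgent module; the frontier source here (@ARGV) is just for illustration:

  use strict;
  use warnings;
  use LWP::UserAgent;

  my $ua = LWP::UserAgent->new(timeout => 10);   # don't hang on dead servers
  my %visited;                                   # lookup table of seen URLs

  for my $url (@ARGV) {                          # stand-in for the frontier
      next if $visited{$url}++;                  # skip pages already handled
      my $head = $ua->head($url);                # HEAD: headers only, no body
      next unless $head->is_success
          && ($head->header('Content-Type') // '') =~ m{^text/html};
      # ... safe to GET and parse $url as HTML ...
  }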

  41. More implementation issues: Fetching • Get only the first 10-100 KB per page. • Take care to detect and break redirection loops. • Soft fail for timeout, server not responding, file not found, and other errors.
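
LWP::UserAgent supports these limits directly; a short sketch (the limit values are illustrative):

  use strict;
  use warnings;
  use LWP::UserAgent;

  my $ua = LWP::UserAgent->new(
      timeout      => 10,        # soft-fail on servers that don't respond
      max_size     => 100_000,   # keep at most ~100 KB of each response
      max_redirect => 5,         # give up on (looping) redirect chains
  );
  my $res = $ua->get('http://www.cnn.com/TECH/');
  warn 'fetch failed: ' . $res->status_line unless $res->is_success;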

  42. More implementation issues: Parsing • HTML has the structure of a DOM (Document Object Model) tree. • Unfortunately, actual HTML is often incorrect in a strict syntactic sense. • Crawlers, like browsers, must be robust/forgiving. • Fortunately there are tools that can help, e.g. tidy.sourceforge.net. • Must pay attention to HTML entities and Unicode in text. • What to do with a growing number of other formats? Flash, SVG, RSS, AJAX, …
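
For link extraction over messy real-world HTML, the stock HTML::LinkExtor module (built on HTML::Parser) is forgiving by design; a sketch with an illustrative snippet of sloppy markup:

  use strict;
  use warnings;
  use HTML::LinkExtor;

  my $base = 'http://www.cnn.com/linkto/';
  my $html = '<a href="intl.html">World<p><img src=/img/logo.gif>';  # unclosed, unquoted

  # With a base URL given, extracted links come back already absolutized.
  my $extractor = HTML::LinkExtor->new(undef, $base);
  $extractor->parse($html);
  $extractor->eof;
  for my $link ($extractor->links) {
      my ($tag, %attrs) = @$link;    # e.g. ('a', href => 'http://...')
      print "$tag: $_\n" for values %attrs;
  }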

  43. More implementation issues: Stop words • Noise words that do not carry meaning should be eliminated ("stopped") before they are indexed. • E.g. in English: AND, THE, A, AT, OR, ON, FOR, etc. • Typically syntactic markers. • Typically the most common terms. • Typically kept in a negative dictionary of 10–1,000 elements, e.g. http://ir.dcs.gla.ac.uk/resources/linguistic_utils/stop_words • The parser can detect these right away and disregard them.
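
The negative dictionary is naturally a hash, giving a constant-time membership test per token; a sketch with a tiny illustrative stop list:

  use strict;
  use warnings;

  my %stop = map { $_ => 1 } qw(and the a at or on for of to in);   # negative dictionary

  my @tokens  = qw(the noble brutus and the ambitious caesar);
  my @indexed = grep { !$stop{$_} } @tokens;   # drop stopped words before indexing
  print "@indexed\n";                          # noble brutus ambitious caesar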

  44. More implementation issues: Conflation and thesauri • Idea: improve recall by merging words with the same meaning. • We want to ignore superficial morphological features, thus merging semantically similar tokens: {student, study, studying, studious} => studi. • We can also conflate synonyms into a single form using a thesaurus, giving a 30-50% smaller index. • Doing this in both pages and queries allows retrieving pages about 'automobile' when the user asks for 'car'. • A thesaurus can be implemented as a hash table (see the sketch below).
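
A hash-table thesaurus simply maps each synonym to one canonical index term; a sketch (the entries are illustrative):

  use strict;
  use warnings;

  # Thesaurus: conflate synonyms into a single form.
  my %thesaurus = (
      automobile => 'car',
      auto       => 'car',
      motorcar   => 'car',
  );

  # Applied to both pages and queries, so 'automobile' retrieves 'car' pages.
  sub conflate { my $t = lc shift; return $thesaurus{$t} // $t }
  print conflate('Automobile'), "\n";   # car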

  45. More implementation issues: Stemming • Morphological conflation based on rewrite rules. Language dependent! • The Porter stemmer is very popular for English: http://www.tartarus.org/~martin/PorterStemmer/ • Context-sensitive grammar rules, e.g.: "IES" except ("EIES" or "AIES") --> "Y" • Versions exist in Perl, C, Java, Python, C#, Ruby, PHP, etc. • Porter has also developed Snowball, a language for creating stemming algorithms in any language: http://snowball.tartarus.org/ • Ex. Perl modules: Lingua::Stem and Lingua::Stem::Snowball
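
A minimal sketch using the Lingua::Stem::Snowball module named on the slide (the word list is illustrative):

  use strict;
  use warnings;
  use Lingua::Stem::Snowball;

  my $stemmer = Lingua::Stem::Snowball->new(lang => 'en');
  my @words   = qw(student study studying studious);
  my @stems   = $stemmer->stem(\@words);
  print "@stems\n";   # study and studying conflate to the same stem 'studi'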

  46. More implementation issues: Static vs. dynamic pages • Is it worth trying to eliminate dynamic pages and only index static pages? • Examples: http://www.census.gov/cgi-bin/gazetteer http://informatics.indiana.edu/research/colloquia.asp http://www.amazon.com/exec/obidos/subst/home/home.html/002-8332429-6490452 http://www.imdb.com/Name?Menczer,+Erico http://www.imdb.com/name/nm0578801/ • Why or why not? • How can we tell if a page is dynamic? What about 'spider traps'? • What do Google and other search engines do?

  47. More implementation issues: Relative vs. absolute URLs • The crawler must translate relative URLs into absolute URLs. • Need to obtain the base URL from the HTTP header, or an HTML meta tag, or else the current page path by default. • Examples with base http://www.cnn.com/linkto/ • Relative URL: intl.html → Absolute URL: http://www.cnn.com/linkto/intl.html • Relative URL: /US/ → Absolute URL: http://www.cnn.com/US/
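
The stock URI module performs exactly this resolution; a sketch using the slide's own examples:

  use strict;
  use warnings;
  use URI;

  my $base = 'http://www.cnn.com/linkto/';
  print URI->new_abs('intl.html', $base), "\n";   # http://www.cnn.com/linkto/intl.html
  print URI->new_abs('/US/',      $base), "\n";   # http://www.cnn.com/US/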

  48. More implementation issues: URL canonicalization • All of these: http://www.cnn.com/TECH http://WWW.CNN.COM/TECH/ http://www.cnn.com:80/TECH/ http://www.cnn.com/bogus/../TECH/ • are really equivalent to this canonical form: http://www.cnn.com/TECH/ • In order to avoid duplication, the crawler must transform all URLs into canonical form. • The definition of "canonical" is arbitrary, e.g.: could always include the port, or only include the port when it is not the default :80.

  49. More on canonical URLs • Some transformations are trivial, for example: • http://informatics.indiana.edu → http://informatics.indiana.edu/ • http://informatics.indiana.edu/index.html#fragment → http://informatics.indiana.edu/index.html • http://informatics.indiana.edu/dir1/./../dir2/ → http://informatics.indiana.edu/dir2/ • http://informatics.indiana.edu/%7Efil/ → http://informatics.indiana.edu/~fil/ • http://INFORMATICS.INDIANA.EDU/fil/ → http://informatics.indiana.edu/fil/
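
The URI module's canonical method covers scheme/host lowercasing and default-port removal; collapsing '.' and '..' path segments still needs a little extra work. A sketch (the segment-collapsing here is simplified and assumes no '..' tries to escape the root):

  use strict;
  use warnings;
  use URI;

  sub canonicalize {
      my $uri = URI->new(shift)->canonical;   # lowercase scheme/host, drop :80
      my @out;                                # rebuild path without . and .. segments
      for my $seg (split m{/}, $uri->path, -1) {
          next if $seg eq '.';
          if ($seg eq '..') { pop @out if @out > 1; next }
          push @out, $seg;
      }
      $uri->path(join '/', @out);
      return "$uri";
  }

  print canonicalize('http://WWW.CNN.COM:80/bogus/../TECH/'), "\n";
  # http://www.cnn.com/TECH/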
