
BlogForever Crawler: Techniques and Algorithms to Harvest Modern Weblogs



  1. BlogForever Crawler: Techniques and Algorithms to Harvest Modern Weblogs
Olivier Blanvillain¹, Nikos Kasioumis², Vangelis Banos³
¹École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
²European Organization for Nuclear Research (CERN), 1211 Geneva 23, Switzerland
³Department of Informatics, Aristotle University, Thessaloniki 54124, Greece

  2. Contents
• Introduction: the disappearing web, blog archiving.
• Our contributions.
• Algorithms: motivation, blog content extraction, extraction rules, variations for authors, dates and comments.
• System architecture.
• Evaluation: comparison with 3 web article extraction systems.
• Issues and future work.

  3. The disappearing web
Source: http://gigaom.com/2012/09/19/the-disappearing-web-information-decay-is-eating-away-our-history/

  4. Blog archiving
• Why archive the web? Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public.
• Blog archiving is a special case of web archiving.
• The blogosphere is a live record of contemporary society, culture, science and economy.
• Some blogs contain unique data and valuable information; users take action and make important decisions based on it.
• We have a responsibility to preserve the web.

  5. FP7 EC funded project (http://blogforever.eu/)
• Harvesting (blog crawlers): real-time monitoring, HTML data extraction engine, spam filtering; turns unstructured information into original data and XML metadata.
• Preserving and managing (blog digital repository): digital preservation, quality assurance, collections curation.
• Reusing (web services and web interface): public access APIs, personalised services, information retrieval, public web interface to browse, search and export.

  6. Our contributions
• A web crawler capable of extracting blog articles, authors, publication dates and comments.
• A new algorithm to build extraction rules from blog web feeds, with linear time complexity.
• Applications of the algorithm to extract authors, publication dates and comments.
• A new web crawler architecture, including the use of a complete web browser to render JavaScript web pages before processing them.
• Extensive evaluation of the content extraction and execution time of our algorithm against three state-of-the-art web article extraction algorithms.

  7. Motivation
• Extracting metadata and content from HTML documents is a challenging task: web standards usage is low (<0.5% of websites), and more than 95% of websites do not pass HTML validation.
• With blogs as our target websites, two observations play a central role in the extraction process:
• Blogs provide web feeds: structured, standardized XML views of the latest posts of a blog.
• Posts of the same blog share a similar HTML structure.
• Web feeds usually reference only the 10-20 latest posts, whereas blogs contain many more, so we have to access more posts than the ones referenced in the feeds.

  8. Content extraction overview
• Use blog web feeds and the HTML pages they reference as training data to build extraction rules.
• The extraction rules locate in the HTML page all elements referenced in the feed, such as: title, author, description, publication date.
• Apply the resulting extraction rules to process all blog pages, including those not listed in the feed.

  9. Locate in the HTML page all RSS-referenced elements

  10. Generic procedure to build extraction rules
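The figure for this slide is not reproduced in the transcript. As a rough Python sketch of the procedure (a reconstruction under stated assumptions, not the authors' code): for each field of a feed entry, score every node of the corresponding HTML page against the feed value and keep the XPath of the best match. Scoring every node independently like this is quadratic; slide 13 sketches the linear reformulation.

```python
# Minimal sketch of the rule-building idea, assuming lxml is used
# (the slides do not name a parser).
from lxml import html

def best_rule(page_source, feed_value, score):
    """Return the XPath of the node whose text best matches feed_value."""
    tree = html.fromstring(page_source)
    root = tree.getroottree()
    best_path, best_score = None, 0.0
    for node in tree.iter():
        if not isinstance(node.tag, str):   # skip comments and PIs
            continue
        s = score(node.text_content().strip(), feed_value)
        if s > best_score:
            # getpath() yields a positional XPath; the rule on slide 12
            # uses an attribute predicate, which this sketch does not infer
            best_path, best_score = root.getpath(node), s
    return best_path, best_score
```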

  11. Extraction rules and string similarity
• Rules are XPath queries.
• For each candidate rule, we compute a score based on the string similarity between the text it selects and the corresponding web feed value.
• The choice of ScoreFunction greatly influences the running time and precision of the extraction process.
• We chose the Sørensen–Dice coefficient because it has low sensitivity to word ordering and length variations, and runs in linear time (see the sketch below).
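A minimal sketch of the ScoreFunction named on this slide: the Sørensen–Dice coefficient over character bigram multisets, 2·|A∩B| / (|A|+|B|).

```python
# Sørensen–Dice similarity on character bigrams: insensitive to word
# ordering, linear in the length of the input strings.
from collections import Counter

def bigrams(s):
    return Counter(s[i:i+2] for i in range(len(s) - 1))

def dice(a, b):
    A, B = bigrams(a), bigrams(b)
    overlap = sum((A & B).values())            # multiset intersection size
    total = sum(A.values()) + sum(B.values())
    return 2.0 * overlap / total if total else 0.0

dice("volumelaser.eim.gr", "volumelaser.eim.gr")  # -> 1.0
```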

  12. Example: best extraction rule for a blog post title
• RSS feed: http://vbanos.gr/en/feed/
• Find the RSS blog post title "volumelaser.eim.gr" in the HTML page http://vbanos.gr/blog/2014/03/09/volumelaser-eim-gr-2/
• The best extraction rule for the blog post title is: /body/div[@id="page"]/header/h1
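As a hedged usage sketch (assuming lxml and requests, which the slides do not name), the rule above can be applied like this; lxml roots the parsed document at <html>, so the rule is prefixed accordingly.

```python
from lxml import html
import requests

resp = requests.get("http://vbanos.gr/blog/2014/03/09/volumelaser-eim-gr-2/")
tree = html.fromstring(resp.content)
# the learned rule from this slide, prefixed with /html for a full document
titles = tree.xpath('/html/body/div[@id="page"]/header/h1/text()')
# expected: ['volumelaser.eim.gr']
```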

  13. Time complexity and linear reformulation
• Scoring every node independently rescans the same text many times, since a node's text content contains its children's.
• Linear reformulation: post-order traversal of the HTML tree, computing each node's bigrams from its children's bigrams, as sketched below.
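A minimal sketch of how the reformulation can work, under one reading of this slide; for simplicity it ignores boundary bigrams that span adjacent text chunks.

```python
from collections import Counter
from lxml import html

def bigrams(s):
    # character bigram multiset, as in the Sørensen–Dice sketch above
    return Counter(s[i:i+2] for i in range(len(s) - 1))

def subtree_bigrams(tree):
    """Bigram counts of every node's text content, in one bottom-up pass."""
    cache = {}
    # reversed document order visits every child before its parent
    for node in reversed(list(tree.iter())):
        counts = bigrams(node.text or "")
        for child in node:
            counts = counts + cache[child] + bigrams(child.tail or "")
        cache[node] = counts
    return cache

# Each piece of text is scanned once and parents reuse children's counts,
# so the pass is linear in page size (up to Counter-merge costs).
```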

  14. Variations for authors, dates, comments
• Authors, dates and comments are special cases, as they can appear many times throughout a post.
• To resolve this, we add an extra component to the score function:
• For authors: an HTML tree distance between the evaluated node and the post content node.
• For dates: we check the alternative formats of each date, in addition to the HTML tree distance to the post content node. Example: "1970-01-01" == "January 1 1970".
• For comments: we use the dedicated comment RSS feed.
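A hedged sketch of the date-equivalence check: parse both strings and compare the results, so alternative formats of the same date match. Using python-dateutil here is an assumption; the slides do not name a library.

```python
from dateutil import parser

def same_date(a, b):
    """True if a and b denote the same date, whatever their formats."""
    try:
        return parser.parse(a) == parser.parse(b)
    except (ValueError, OverflowError):
        return False

same_date("1970-01-01", "January 1 1970")  # -> True
```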

  15. System architecture
• Our crawler is built on top of Scrapy (http://www.scrapy.org/); a minimal spider is sketched below.
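For orientation, a minimal Scrapy spider sketch; the class name and URL are placeholders, and the real crawler layers rendering, extraction and prioritisation stages (next slide) on top of this skeleton.

```python
import scrapy

class BlogSpider(scrapy.Spider):
    """Illustrative skeleton only, not the BlogForever crawler itself."""
    name = "blog"
    start_urls = ["http://example-blog.com/"]   # placeholder

    def parse(self, response):
        # follow every link on the page; the real crawler assigns
        # download priorities instead of crawling blindly
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse)
```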

  16. System architecture
• Pipeline of operations: render HTML and JavaScript, extract content, extract comments, download multimedia files, propagate resulting records to the back-end.
• Interesting areas: blog post page identification, handling blogs with a large number of pages, JavaScript rendering, scalability.

  17. Blog post identification
• The crawler visits all blog pages; for each URL, it needs to decide whether the page is a blog post or not.
• We construct a regular expression from the post URLs in the blog's RSS feed to identify blog posts (sketched below).
• We assume that all posts from the same blog use the same URL pattern; this assumption has held for all blog platforms we have encountered.
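One simple way to build such a regular expression, as a hedged sketch (the paper's exact rule induction may differ): keep the path segments shared by the sample URLs and generalise the ones that vary.

```python
import re

def url_pattern(sample_urls):
    # assumes the sample URLs split into the same number of "/" segments
    split = [u.rstrip("/").split("/") for u in sample_urls]
    parts = []
    for column in zip(*split):
        if len(set(column)) == 1:
            parts.append(re.escape(column[0]))   # constant segment
        else:
            parts.append(r"[^/]+")               # variable segment
    return re.compile("/".join(parts) + "/?$")

pat = url_pattern([
    "http://vbanos.gr/blog/2014/03/09/volumelaser-eim-gr-2",
    "http://vbanos.gr/blog/2014/02/11/another-post",
])
bool(pat.match("http://vbanos.gr/blog/2014/05/01/new-post"))  # -> True
```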

  18. Handling blogs with a large number of pages
• Avoid a random walk of pages, depth-first search or breadth-first search.
• Use a priority queue with machine-learning-defined priorities: pages containing many blog post URLs get a higher priority.
• A distance-weighted kNN classifier makes the prediction.
• Whenever a new page is downloaded, it is fed to the machine learning system as training data.
• When the crawler encounters a new URL, it asks the model for the expected number of blog posts and uses that value as the URL's download priority (see the sketch below).
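A hedged sketch of the priority predictor: a distance-weighted kNN estimate over simple URL features. The feature choice below is an assumption; the slide only states the classifier type.

```python
import math

def knn_predict(train, x, k=5):
    """train: list of (feature_vector, posts_found); returns a priority."""
    dist = lambda a, b: math.dist(a, b)
    nearest = sorted(train, key=lambda t: dist(t[0], x))[:k]
    num = den = 0.0
    for feats, y in nearest:
        w = 1.0 / (dist(feats, x) + 1e-9)   # closer neighbours weigh more
        num += w * y
        den += w
    return num / den if den else 0.0

# hypothetical features: (number of outgoing links, URL path depth)
train = [((120, 2), 15), ((10, 5), 0), ((90, 2), 11)]
priority = knn_predict(train, (100, 2), k=2)
```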

  19. JavaScript rendering
• JavaScript is a widely used client-side language; traditional HTML-based crawlers do not see the content of pages built with JavaScript.
• We embed PhantomJS, a headless web browser with good performance and scripting capabilities.
• We instruct PhantomJS to click dynamic JavaScript pagination buttons to retrieve more content (e.g. the Disqus "Show More" button for comments).
• This crawler functionality is non-generic and requires human intervention to maintain and extend to other cases.
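A hedged sketch of one way to drive PhantomJS from Python: hand the binary a small rendering script and read the post-JavaScript DOM from stdout. The crawler's actual integration is tighter; the PhantomJS calls used here (webpage, system.args, page.content) are its documented API.

```python
import subprocess, tempfile

RENDER_JS = """
var page = require('webpage').create();
var url = require('system').args[1];
page.open(url, function () {
    console.log(page.content);   // serialised DOM after JavaScript ran
    phantom.exit();
});
"""

def render(url):
    # write the script to a temp file the phantomjs binary can read
    with tempfile.NamedTemporaryFile("w", suffix=".js", delete=False) as f:
        f.write(RENDER_JS)
        path = f.name
    result = subprocess.run(["phantomjs", path, url],
                            capture_output=True, text=True, timeout=30)
    return result.stdout
```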

  20. Scalability
• When aiming to work with a large amount of input, it is crucial to build every system layer with scalability in mind.
• The two core crawler procedures, NewCrawl and UpdateCrawl, are stateless and purely functional.
• All shared mutable state is delegated to the back-end.

  21. Evaluation
• Task: extract articles and titles from web pages.
• Comparison against three open-source projects: Readability (JavaScript), Boilerpipe (Java), Goose (Scala).
• Criteria: extraction success rate, running time.
• Dataset: 2,300 blog posts from 230 blogs taken from the Spinn3r dataset.
• System: Debian Linux 7.2, Intel Core i7-3770 3.4 GHz.
• All data, scripts and instructions to reproduce the results are available at: https://github.com/OlivierBlanvillain/blogforever-crawler-publication

  22. Evaluation: extraction success rates
• The systems we compare against are generic: they do not use RSS feeds or the structural similarities between web pages, which lets them handle any HTML page.

  23. Evaluation: running time
• Our approach spends the majority of its total running time between initialisation and the processing of the first blog post: once the extraction rules are built, subsequent posts are processed quickly.

  24. Issues & Future Work • Our main causes of failure was: • The insufficient quality of web feeds, • The high structural variation of blog pages in the same blog. • Future Work • Investigate hybrid extraction algorithms. Combine with other techniques such as word density or spacial reasoning. • Large scale deployment of the software in a distributed architecture.

  25. Thank you!
BlogForever Crawler: Techniques and Algorithms to Harvest Modern Weblogs
Olivier Blanvillain¹, Nikos Kasioumis², Vangelis Banos³
¹École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
²European Organization for Nuclear Research (CERN), 1211 Geneva 23, Switzerland
³Department of Informatics, Aristotle University, Thessaloniki 54124, Greece
• Contact email: vbanos@gmail.com
• Project code available at: https://github.com/BlogForever/crawler
