Introducing the 2011 Search Ranking Factors

Presentation Transcript

  1. Introducing the 2011 Search Ranking Factors. Starting at 10:30am PDT. Chat with fellow webinar attendees, and ask questions using the Questions functionality to the right. Technical problems or feedback on the webinar: please email

  2. Introducing the 2011 Search Ranking Factors. Available online at: • Presented by: • Matthew Peters, PhD, Scientist • Jamie Steven, VP Marketing

  3. What we’re discussing • The Ranking Factors 2011 • Survey with 132 SEO professionals • Correlation analysis on 10,000+ keywords • Interesting findings • Tour of the web interface • Implications/recommendations • Q+A Chat: Twitter: #mozinar Questions? Use GoToWebinar panel

  4. Understanding, Interpreting & Using Survey Opinion Data Everybody’s wrong sometimes, but there’s a lot we can learn from the aggregation of opinions

  5. #1: Opinions are Not Fact (these are smart people, but they can’t know everything about Google’s rankings). #2: Not Everyone Agrees (standard deviation can help show us the degree of consensus). #3: We Had 132 Contributors (but this group could be biased, as they were editorially selected via a nomination process). Many thanks to all who contributed their time to take the survey!

  6. Understanding, Interpreting & Using Correlation Data This is powerful, useful information, but with that power comes responsibility to present it accurately

  7. Methodology: 10,271 Keywords, pulled from Google AdWords US Suggestions (all SERPs were pulled from Google in March 2011, after the Panda/Farmer update). Top 30 Results Retrieved for Each Keyword (excluding all vertical/non-standard results). Correlations are for Pages/Sites that Appear Higher in the Top 30 (we use the mean of Spearman’s correlation coefficient across all SERPs). Results Where <2 URLs Contain a Given Feature Are Excluded (this also holds true for results where all the URLs contain the same values for a feature). More details, including complete documentation and the raw dataset, are now available at
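The per-SERP averaging described above can be sketched in plain Python. This is a rough, stdlib-only illustration on synthetic data, not the study's actual code: every name and number here is invented, and ties are ignored in the rank computation for brevity.

```python
# Sketch of the methodology: compute Spearman's rho between result
# position and a feature for each SERP, then average across SERPs.
# All data below is synthetic and illustrative.
import random

random.seed(0)

def spearman(x, y):
    """Spearman's rho via Pearson on ranks (ignores ties for brevity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, 1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def mean_spearman(serps):
    """Mean rho of (position, feature) across SERPs, skipping SERPs
    where the feature has fewer than 2 values or is constant, per the
    exclusion rule above. The sign is flipped so a positive rho means
    the feature is larger for higher-ranking (lower-position) results."""
    rhos = []
    for feature_values in serps:
        if len(feature_values) < 2 or len(set(feature_values)) == 1:
            continue
        positions = list(range(1, len(feature_values) + 1))
        rhos.append(-spearman(positions, feature_values))
    return sum(rhos) / len(rhos)

# 200 fake SERPs of 30 results each; the feature loosely tracks position.
serps = [[30 - i + random.gauss(0, 12) for i in range(30)] for _ in range(200)]
print(round(mean_spearman(serps), 2))  # a modest positive mean correlation
```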

  8. Correlation IS NOT Causation. [Diagram: summer weather causes both drowning incidents and ice cream sales (correlation & causation in each case), so drowning incidents and ice cream sales are correlated with each other — but that is a spurious correlation, not causation.] Don’t worry, ice cream is good for you!

  9. Correlation IS NOT Causation. Earning more linking root domains to a URL may indeed increase that page’s ranking. But will adding more characters to the HTML code of a page increase rankings? Probably not. Just because a feature is correlated, even very highly, doesn’t mean that improving that metric on your site will necessarily improve your rankings.

  10. How Confident Can We Be in the Accuracy of these Correlations? Because we have such a large data set, the standard error is extremely low. This means that even for small correlations, our estimates of the mean correlation are close to the actual mean correlation across all searches. Standard error won’t be reported in this presentation, but it’s less than 0.0035 for all of the Spearman correlation results, so we can feel quite confident about our numbers.
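The claim above follows from the standard-error formula: SE of the mean is the standard deviation divided by the square root of the sample size, so ~10,000 per-SERP correlations shrink the error by a factor of ~100. A small simulation (the mean and spread here are invented, not the study's actual figures) shows the magnitude:

```python
# Standard error of a mean correlation: SE = sd / sqrt(n).
# The per-SERP rho values below are simulated with a plausible spread.
import math
import random

random.seed(1)

rhos = [random.gauss(0.22, 0.30) for _ in range(10271)]  # one rho per SERP

n = len(rhos)
mean = sum(rhos) / n
sd = math.sqrt(sum((r - mean) ** 2 for r in rhos) / (n - 1))
se = sd / math.sqrt(n)
print(round(se, 4))  # on the order of 0.003 for n ~ 10,000
```

Even with substantial per-SERP spread (sd ≈ 0.3), dividing by √10,271 ≈ 101 brings the standard error under 0.0035, matching the bound quoted above.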

  11. Do Correlations in this Range Have Value/Meaning? A factor with a 1.0 correlation would explain 100% of Google’s algorithm across 10K+ keywords. [Chart annotation: most of our data is in this range.] A rough rule of thumb with linear-fit numbers is that a correlation explains roughly its square of the system’s variance. Thus, a factor with a correlation of 0.3 would explain ~9% of Google’s algorithm.
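The squaring rule of thumb above is easy to tabulate for the range of correlations in this deck:

```python
# A correlation's square approximates the fraction of variance explained
# (the r-squared rule of thumb from the slide above).
for rho in (0.1, 0.2, 0.3, 0.4):
    print(f"rho = {rho:.1f} explains ~{rho ** 2:.0%} of the variance")
```

So a 0.3 correlation maps to ~9%, as stated on the slide, and even a 0.4 correlation explains only ~16% of the variance.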

  12. Are You Ready for Some Data?!

  13. The Changing Landscape of Google’s Ranking Algorithm These compare opinion/survey data from 2009 vs. 2011

  14. In 2009, link-based factors (page and domain-level) comprised ~65% of voters’ algorithmic assessment

  15. In 2011, link-based factors (page and domain-level) have shrunk in the voters’ minds from ~65% to ~45% of algorithmic components. Note: because the question options changed slightly (and more options were added), direct comparison may not be entirely fair.

  16. What Do SEOs Believe Will Happen w/ Google’s Use of Ranking Features in the Future? While there was some significant contention about issues like paid links and ads vs. content, the voters nearly all agreed that social signals and perceived user value signals have bright futures.

  17. Diversity + Anchor Text:Well Correlated with Higher Rankings These metrics are based on links that point specifically to the ranking page

  18. In the rest of this deck, we’ll use linking C-blocks as a reference point. This data is exactly what an SEO would expect: the more diverse the sources, the greater the correlation with higher rankings. These numbers are relatively similar to our June 2010 correlation data.

  19. Correlations of Page-Level, Anchor Text-Based Link Data. No surprise: total links (including internal) with anchor text are less well correlated than external links with anchor text. Partial anchor text matches have greater correlation than exact matches. This might be correlation only, or could indicate that the common SEO wisdom to vary anchor text is accurate.

  20. ComparingPage + Domain-Level Link Signals These metrics are based on links that point to anywhere on the ranking domain

  21. Correlation of Domain-Level Link Data. Suggests page-level and domain-level link signals have relatively similar weighting, just as voters predicted. Domain-level link data is surprisingly similar to page-level link data in correlation.

  22. Have Exact Match DomainsLost their Lustre? These signals are based on keyword-use in the root domain name.

  23. Spearman’s Correlation with Google Rankings for Exact Match Domain Names, June 2010 vs. March 2011. Exact match domains (.com and all TLDs) have both fallen considerably in the past 10 months. This suggests that Google’s statements last year about devaluing exact match domains may not only have been serious, but may already be showing up in the results.

  24. Is Google Evil? Hint: No. These metrics come from a variety of places in the dataset, but mostly on-page stuff.

  25. Google has said that linking externally is good; slow pages are bad; and using Google services won’t give any special benefit. This data supports those statements! This data suggests that, by-and-large, there’s not much “evil” in Google’s rankings, at least, none that correlation research will reveal. Good job keeping it honest, Googlers!

  26. Social Signals These signals are based on data from users of Twitter, Facebook & Google Buzz via their APIs

  27. Most Important Social Media-Based Factors (as voted on by 132 SEOs). Curious: for Twitter, voters felt authority matters more, while for Facebook, it’s raw quantity (could be because Google doesn’t have as much access to Facebook’s graph data). Although we didn’t ask voters for a cutoff on what they believe matters vs. doesn’t, I suspect many/most would have said that Google Buzz and Digg/Reddit/StumbleUpon aren’t used in the rankings.

  28. Correlation of Social Media-Based Factors (data via the Topsy API & Google Buzz API). Facebook Shares is our single highest correlated metric with higher Google rankings. Although voters thought Twitter data and tweets to URLs were more influential, Facebook’s metrics are substantially better correlated with rankings. But is it causal?

  29. Percent of Results (from our 10,200 Keyword Set) in Which the Feature Was Present. It amazed us that Facebook Share data was present for 61% of pages in the top 30 results. For most link factors, 99%+ of results in Linkscape’s index had the factor; for social data, this was much lower, but still high enough that standard error is below 0.0025 for each of the metrics.

  30. Is the correlation with Facebook shares causal? • We have several reasons to think Google might be using FB shares in its ranking algorithm: • An interview in December 2010 in which they disclosed they were using social signals • Facebook share information began appearing in some search results • A modeling approach to determine causation: • Assume that Google uses link information and social signals from Twitter and Google Buzz to rank • Fit a model for Facebook shares using links/Twitter/Buzz as input • The correlation between predicted shares and search position is almost as large as the correlation between actual shares and position • Conclusion: the FB shares correlation is likely spurious, i.e. correlation, not causation. There is an in-depth discussion of this topic at:
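The modeling argument above can be illustrated with a deliberately simplified, single-predictor sketch on synthetic data. Here only links drive both ranking position and shares (the real analysis used links, Twitter, and Google Buzz as predictors); all variable names and coefficients are invented for illustration:

```python
# Spurious-correlation sketch: generate positions driven by links and
# shares driven only by links (no direct effect of shares on rank),
# then show that shares *predicted from links* correlate with position
# about as well as actual shares do, so shares carry no extra signal.
import random

random.seed(2)

n = 5000
links = [random.gauss(0, 1) for _ in range(n)]
position_score = [l + random.gauss(0, 1) for l in links]   # ranking signal
shares = [2 * l + random.gauss(0, 1) for l in links]       # driven by links only

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Simple OLS fit of shares on links: slope = cov/var.
mx, my = sum(links) / n, sum(shares) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(links, shares))
         / sum((a - mx) ** 2 for a in links))
predicted_shares = [my + slope * (a - mx) for a in links]

actual = pearson(shares, position_score)
modeled = pearson(predicted_shares, position_score)
print(round(actual, 2), round(modeled, 2))
```

Because the modeled shares correlate with position at least as strongly as the actual shares do, knowing the real share counts adds nothing beyond the link data, which is the gist of the spuriousness conclusion on the slide.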

  31. A Tour of the Factors Report Yippie! A break from PowerPoint!

  32. Understanding the implications These results are not a recipe to rank, but common characteristics of top ranking sites

  33. What the results can tell us… 1. Some results show attributes possessed by top-ranking sites, attributes that aren’t directly affecting their rank. 2. Some results point to activities that can directly result in higher rankings. Both are characteristics of successful sites, and both are worth considering for those wishing to succeed online.

  34. A childhood example… Only cool kids are allowed in the tree house club! The kids in the tree house may have cool stuff (a cool bike, cool shoes, a cool jacket), but that doesn’t mean that having cool stuff will get you in the tree house! 1988 Nike Air Jordans do, in fact, correlate with coolness.

  35. The same goes for marketing… Only the best sites are allowed at the top of the SERPs! Even if the items below don’t get you a higher rank, they result in a better site, awesome content, and a valuable brand! Compelling content: rich, unique, and interesting. A great brand: awareness, reputation, service. A fantastic website: accessible, user friendly, relevant. Repeat visits, lower bounce! Get links, shares, and tweets! Mentioned and recommended! 1988 Nike Air Jordans do correlate with coolness.

  36. Q+A. Try SEOmoz PRO and get entered to win one of 14 conference passes! Ends June 30th, 2011! • Matthew Peters, PhD, Scientist • Email: • Jamie Steven, VP Marketing • Email: • Twitter: @jamies The full report is available!