Ultimate Guide to White Hat SEO using Scrapebox
Scrapebox – what is it?
Keyword scraper
Scrape URLs with Scrapebox
Find guest blogging opportunities
Check the value of harvested links
Merge and remove duplicates
Scrapebox Meta Scraper
Check which internal links are not indexed yet
Get more backlinks from Google
More than a yearago, on my G+ profile, I posted about something that I found funny: using Scrapebox for white hat.
A lot has changed since then; we now know we need to focus more and more on the quality of backlinks instead of quantity.
This means we have to rethink which tools we should use and how they can help us maximize our SEO efforts.
I bet everybody knows Scrapebox, more or less. In short – it’s a tool used for mass scraping, harvesting, pinging and posting tasks in order to maximize the amount of links you can gain for your website to help it rank better in Google. A lot of webmasters and blog owners treat Scrapebox like a spam machine, but in fact it is only a tool, and what it’s actually used for depends on the “driver”.
Now, due to all the Penguin updates, a lot of SEOs have changed their minds about linkbuilding and have started to use Scrapebox as support for their link audits or outreach.
One of the most useful features in Scrapebox, which I use all the time, is the integrated Google suggested keywords scraper. It works very simply and very quickly gives you a list of keywords you should definitely use while optimizing your website content or preparing a new blog post. To do this, just click the “Scrape” button in the “Harvester” box and select “Keyword Scraper”. You will see a Keyword Scraper window like this one:
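ScrapeBox does all of this inside its GUI, but the underlying data is Google’s autocomplete feed. If you ever want the same suggestions in a script, here is a minimal Python sketch; the endpoint URL and the `[query, [suggestions]]` response shape are assumptions based on the browser autocomplete client, not anything documented by ScrapeBox:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Assumed public autocomplete endpoint (the one browsers use).
SUGGEST_URL = "https://suggestqueries.google.com/complete/search?client=firefox&q="

def parse_suggestions(raw: str) -> list:
    """The endpoint returns JSON shaped like [query, [suggestion, ...]]."""
    payload = json.loads(raw)
    return payload[1]

def scrape_keywords(seed: str) -> list:
    """Fetch autocomplete suggestions for one seed keyword (network call)."""
    with urlopen(SUGGEST_URL + quote(seed), timeout=10) as resp:
        return parse_suggestions(resp.read().decode("utf-8"))
```

Feeding each suggestion back in as a new seed is roughly how tools like this build deep keyword lists from a single starting phrase.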
So: we have our keyword research done (after checking the total amount of traffic that keywords can bring to your domain) – now let’s see if we can get some interesting links from specified niche websites.
After sending our URL list to ScrapeBox we can now start searching for specified domains we would like to get links from.
Footprints are (in a nutshell) pieces of code or sentences that appear in a website’s code or in text. For example when somebody creates a WordPress blog, he has “Powered by WordPress” in his footer by default.
Firstly, learn more about Google Search Operators. For your basic link building tasks you should know and understand these three search operators:
inurl: – shows URLs containing a specified string in their address
intitle: – shows URLs whose title contains a specified text string
site: – lists indexed URLs from a specified domain, ccTLD etc.
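To make the three operators concrete, here are a few illustrative queries; the keyword “link building” and example.com are placeholders, not recommendations:

```python
# Illustrative search queries only - the keyword and domain are placeholders.
queries = [
    'inurl:guest-post link building',        # "guest-post" must appear in the URL
    'intitle:"write for us" link building',  # the phrase must appear in the page title
    'site:example.com',                      # everything indexed from one domain
]
print("\n".join(queries))
```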
Having learned the basics of footprints, you can use them to find specific platforms which will allow you to post a link to your website (or find new customers if you would like to guest blog sometimes).
By using simple footprints like:
“guest blogger” or “guest post” (to search only for links where somebody posted a guest post already – you can also use the allinurl search operator because a lot of blogs have a “guest posts” category which can be found in its URL structure)
Later, combine it with your target keywords and get ready to mail and post fresh guest posts to share your knowledge and services with others!
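Generating every footprint/keyword combination by hand gets tedious. A short sketch that builds the list you would paste into ScrapeBox’s harvester – the footprints and keywords below are placeholders for your own:

```python
# Placeholder footprints and target keywords - substitute your own.
footprints = ['"guest blogger"', '"guest post"', 'allinurl:guest-posts']
keywords = ["content marketing", "link building"]

# One search query per footprint/keyword pair, ready to paste into the harvester.
queries = [f"{fp} {kw}" for fp in footprints for kw in keywords]
for q in queries:
    print(q)
```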
Now, when your keyword research is done and you have harvested your very first link list, you can start checking some basic information about the links. Aside from ScrapeBox, you will also need the Moz API.
If you are running a link detox campaign, it’s strongly recommended to use more than one backlink source to get all of the data needed to lift a penalty, for example. If you have more than 40 thousand links in each file, you will probably want to merge them into one file and dig into it later.
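If you prefer to do the merge outside ScrapeBox, a few lines of Python handle it. This is a sketch assuming one URL per line in each export file; the lowercase normalization is a simplification (real URL paths can be case-sensitive):

```python
def merge_and_dedupe(paths):
    """Merge several backlink export files and drop duplicate URLs,
    keeping first-seen order - the same job as ScrapeBox's
    merge and remove-duplicates step."""
    seen = set()
    merged = []
    for path in paths:
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                url = line.strip().lower()  # crude normalization for comparison
                if url and url not in seen:
                    seen.add(url)
                    merged.append(url)
    return merged
```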
ScrapeBox allows you to scrape titles and descriptions from your harvested list. To do that, choose the Grab/Check option then, from the drop down menu, “Grab meta info from harvested URLs”:
Here, you can take a look at some example results:
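ScrapeBox handles the fetching and parsing for you. If you ever need the same metadata in your own script, a regex-based sketch like this works for quick checks (it is not a substitute for a real HTML parser on messy markup):

```python
import re

def extract_meta(html):
    """Pull the <title> and meta description out of raw HTML -
    roughly the fields ScrapeBox's 'Grab meta info' returns per URL."""
    title = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    desc = re.search(
        r'<meta[^>]+name=["\']description["\'][^>]+content=["\'](.*?)["\']',
        html, re.I | re.S)
    return {
        "title": title.group(1).strip() if title else "",
        "description": desc.group(1).strip() if desc else "",
    }
```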
If you were previously harvesting URLs – simply load them from Harvester. If not, you can load them from the text file.
Let’s begin with Options:
If a link returns an HTTP status code other than 301 or 200, ScrapeBox marks it as “Dead”.
If you want to be sure that every single internal/external link is alive, you can use the “ScrapeBox Alive Checker” addon. First – if you haven’t done this yet – install the Alive Checker addon.
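Replicating that alive/dead rule in a script is straightforward. The 200/301 rule comes from ScrapeBox’s behavior described above; the rest of this standard-library sketch (HEAD requests, timeout value) is my own assumption:

```python
from urllib.request import Request, urlopen
from urllib.error import URLError

def is_alive(status_code):
    """ScrapeBox treats any status other than 200 or 301 as 'Dead'."""
    return status_code in (200, 301)

def check_url(url, timeout=10.0):
    """Fetch a URL with a HEAD request and apply the same rule (network sketch).
    HTTPError is a subclass of URLError, so 4xx/5xx responses land here too."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return is_alive(resp.status)
    except URLError:
        return False
```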
So if you are working on big onsite changes that affect the total number of internal pages, you will probably want to be sure that Google re-indexes everything. To check that everything is as it should be, you can use Screaming Frog SEO Spider and ScrapeBox.
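Once you have both lists – every internal URL from the Screaming Frog crawl and every URL harvested with a site: query – the non-indexed pages are simply the difference between the two sets. A sketch with deliberately naive normalization (trailing slashes only; real URL canonicalization is messier):

```python
def not_indexed(crawled_urls, harvested_urls):
    """Return URLs the crawler found that did not show up in the
    site: harvest - candidates for re-indexing work."""
    indexed = {u.rstrip("/") for u in harvested_urls}
    return [u for u in crawled_urls if u.rstrip("/") not in indexed]
```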
Sometimes it’s not enough to download backlink data from Google Webmaster Tools or other software made for that purpose.
In this case – especially when you are fighting a manual penalty for your site and Google has refused to lift it – go deep into these links and find a pattern that is the same for every single one.
As you can see – ScrapeBox in the Penguin era is still a powerful tool which will speed up your daily SEO tasks if used properly.
If you want to fully understand how to use this innovative tool, I invite you to look a little deeper and read my whole case study.
CHECK IT OUT HERE:
THE ULTIMATE GUIDE TO WHITE HAT SEO USING SCRAPEBOX
Łukasz is especially experienced in on-site and off-site analysis. He is intrigued by Conversion Rate Optimization and Web Analytics, and he knows how to use both black and white hat tactics to gain organic traffic. Outside work, Łukasz can be found playing poker or enjoying nature during long walks in the park.