John Q. Public owns the website example.com and would like a list of all pages on his site that are indexed in Google Search. He would also like to monitor how his web pages rank in Google for particular search keywords relative to rival websites.
There are powerful command-line tools, like curl and wget, that can download Google search result pages automatically. The HTML pages can then be parsed with Python's Beautiful Soup library or PHP's Simple HTML DOM parser, but these methods are technical and involve coding. The other issue is that Google is very likely to temporarily block your IP address if you send it even a few automated requests in quick succession.
Web Scraping Google using Google Docs
If you ever need to extract data from Google search results, Google itself offers a free tool that might just do the job. It's called Google Docs, and since it fetches the search pages from within Google's own network, the scraping requests are less likely to get blocked.
The idea is simple. The Google Sheet fetches and imports Google search results using the built-in IMPORTXML function. It then extracts the page titles and URLs with XPath expressions and grabs the favicon of each web domain using Google's own favicon converter.
You can further customize the Google search results by changing the sort order – sorting by relevance or by date published – and by restricting them to pages published in the last hour, week, month or year. The number of results returned can be modified as well.
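These options correspond to standard parameters in Google's public search URL format (a detail not spelled out in the original sheet): num controls the number of results, tbs=sbd:1 requests sorting by date, and tbs=qdr:h (or d, w, m, y) restricts results to the past hour, day, week, month or year. A sample URL combining them might look like this:

https://www.google.com/search?q=SEARCH_QUERY&num=30&tbs=qdr:w,sbd:1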
To get started, open this Google Sheet and choose File -> Make a copy to clone it into your Google Drive. You can now play with the various parameters in the cells highlighted in light blue.
Spreadsheet Functions for Scraping Web Pages
Building a scraping tool with Google Sheets is simple and involves just a few formulas and built-in functions. Here's how it was done:
1. Construct the Google Search URL with the search query and sorting parameters. You can also use advanced Google search operators like site:, inurl:, AROUND and others.
https://www.google.com/search?q=SEARCH_QUERY&num=20
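In the sheet itself, this URL is assembled by a formula rather than typed by hand. A minimal sketch, assuming the search query sits in cell B1 and the desired result count in B2 (both cell references are hypothetical, not necessarily those used in the original sheet):

=CONCATENATE("https://www.google.com/search?q=", ENCODEURL(B1), "&num=", B2)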
2. Get the titles of the pages in the search results using the XPath //h3 (in Google search results, all titles are served inside H3 tags).
=IMPORTXML(STEP1, "//h3[@class='r']")
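STEP1 is a placeholder for the cell holding the search URL from step 1. If, say, that URL lives in cell B3 (a hypothetical reference), the working formula would read:

=IMPORTXML(B3, "//h3[@class='r']")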
3. Get the URLs of the pages in the search results using another XPath expression.
=IMPORTXML(STEP1, "//h3/a/@href")
4. All external URLs in Google search results have click tracking enabled, so we'll use a regular expression to extract the clean URLs.
=REGEXEXTRACT(STEP3, "\/url\?q=(.+)&sa")
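REGEXEXTRACT operates on a single cell, so to clean a whole column of links at once you can wrap it in ARRAYFORMULA. A sketch, assuming the raw links from step 3 occupy the hypothetical range C5:C24:

=ARRAYFORMULA(REGEXEXTRACT(C5:C24, "\/url\?q=(.+)&sa"))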
5. Now that we have the page URL, we can again use a regular expression to extract the website domain from it.
=REGEXEXTRACT(STEP4, "https?:\/\/(.[^\/]+)")
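As a worked example, feeding this formula the URL http://example.com/about/team would return example.com, since the pattern captures everything between the protocol and the first slash that follows.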
6. And finally, we can feed this domain to Google's S2 favicon converter to show the favicon image of the website in the sheet.
=IMAGE(CONCAT("https://www.google.com/s2/favicons?domain=", STEP5))
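For reference, requesting https://www.google.com/s2/favicons?domain=example.com in a browser returns that site's favicon as a small image (16x16 pixels by default), which the IMAGE function then renders inside the cell.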
And now that you have the Google search results inside the sheet, you can export the data as a CSV file, publish the sheet as an HTML page (it will refresh automatically), or go a step further and write a Google Script that emails you the sheet as a PDF every day.
This story, Scrape Google Using Another Google Product, was originally published at Digital Inspiration on 13/03/2014.