Googlebot is the search bot software used by Google to collect documents from the web and build a searchable index for the Google Search engine. If a webmaster wishes to restrict the information on their site available to Googlebot, or to another well-behaved spider, they can do so with the appropriate directives in a robots.txt file, or by adding a robots meta tag to the web page. Googlebot requests to web servers are identifiable by a user-agent string containing "Googlebot" and a host address containing "". Currently, Googlebot follows HREF links and SRC links, and there is increasing evidence that it can also execute JavaScript and parse content generated by Ajax calls.
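For illustration, the two mechanisms mentioned above look like this (the "/private/" path is hypothetical):

```
# robots.txt at the site root: tell Googlebot not to crawl /private/
User-agent: Googlebot
Disallow: /private/
```

The per-page alternative is a robots meta tag placed in the page's `<head>`:

```
<meta name="robots" content="noindex, nofollow">
```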
Posts about Googlebot
  • What You Need to Make Your Blog Mobile-Friendly

    … it yet, then you need to make your website mobile-friendly today. You don’t want to lose potential readers (or customers)! In this post I’ll give you specific options and tips to create an awesome experience for mobile users visiting your business website or blog. Is Mobile Really Rising? The trend is clear, but I still wanted to check…

    Erik Emanuelli - 28 readers
  • How To Get Google To Show Your Facebook, Twitter & Other Social Accounts In Its Knowledge Graph

    … Beck is Third Door Media's Social Media Reporter, covering the latest news for Marketing Land and Search Engine Land. He spent 24 years with the Los Angeles Times, serving as social media and reader engagement editor from 2010-2014. A graduate of UC Irvine and the University of Missouri journalism school, Beck started his career at the Times as a sportswriter and copy editor. Follow Martin on Twitter (@MartinBeck), Facebook and/or Google+. (Some images used under license from…

    Martin Beck / Marketing Land in Social, Google How To's - 24 readers
  • Chilling Effects Blocks Search Engines from Indexing Entire Site

    … of people whose information appears in the database.” It is surprising that Chilling Effects has blocked the entire domain, rather than simply blocking the notices themselves from being indexed. Some pages are technically indexed, however, showing just how Google will index pages despite a site blocking Googlebot: Google has now included pages in the index…

    Jennifer Slegg / The SEM Post in Google - 21 readers
  • Advanced SEO for JavaScript Sites: Snapshot Pages

    … – still a fairly widespread browser. Potential Issues With Prerendering There are a couple of things to look out for if you decide to go with prerendering: Bot detection. Make sure you are serving to all the bots, not just Googlebot (e.g. Bingbot, et al). Snapshot timing. Consider the fact that your JavaScript elements may take a while to process via…

    Andrew Delamarter / Search Engine Watch in SEO - 4 readers
  • 2015 Planning: What Does Your SEO Strategy Look Like Next Year?

    … XML sitemap and note it in both your mobile and desktop robots.txt file Mirror your mobile URLs after your desktop URLs Triple-Checking Your Redirects If you work on a large site where URLs have a tendency to change frequently, it’s probably not a bad idea to start the New Year by reviewing your redirect file. Have any redirects changed? Are any…

    Erin Everhart / Search Engine Watch in SEO - 7 readers
  • Responsive Design Improves SEO

    … additional website versions were necessary for the content to translate well on mobile devices and tablets. Now, no matter what device a website is displayed on, responsive design automatically adjusts a page in a way that webmasters can retain their content on the same URL. This is less work for Googlebot, as there is no requirement for it to crawl…

    SEO Nick in SEO - 36 readers
  • Site Audit: Indexing Tips & Tricks with Screaming Frog [VIDEO]

    … and type in /robots.txt. Not sure what a robots.txt file is? “A robots.txt file is a text file that stops web crawler software, such as Googlebot, from crawling certain pages of your site. The file is essentially a list of commands, such as Allow and Disallow, that tell web crawlers which URLs they can or cannot retrieve. So, if a URL is disallowed…

    Tori Cushing / AuthorityLabs - 47 readers
  • Optimising Demandware for SEO

    … exists on Clarins whereby a “?start=” parameter is appended to product page URLs. This parameter relates to the grid position of each product on the page. Given the constantly changing page position of products week to week, you can see why this parameter has the potential to create a large amount of duplicate pages. Although the “daily energizer…

    Sophie Webb / SEOgadget in SEO, Email - 49 readers
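The Allow/Disallow commands described in the Screaming Frog post above can be checked programmatically. As a minimal sketch, Python's standard-library `urllib.robotparser` evaluates a robots.txt rules file for a given user agent; note that it applies rules first-match in file order, whereas Google's crawler uses the most specific (longest) matching path, so the order of the hypothetical rules below matters:

```python
from urllib import robotparser

# Hypothetical robots.txt rules: block /private/ but allow one page in it.
# The Allow line comes first because robotparser matches rules in order.
rules = """\
User-agent: Googlebot
Allow: /private/public-page.html
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "/private/secret.html"))       # False
print(rp.can_fetch("Googlebot", "/private/public-page.html"))  # True
print(rp.can_fetch("Googlebot", "/index.html"))                # True (no rule matches)
```

In a real audit you would point `RobotFileParser.set_url()` at the live robots.txt and call `read()` instead of parsing an inline string.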