- Our Blog
We regularly consult for sites that monetize, in part, with affiliate links, and we usually advise redirecting those links. When we looked, there wasn't a proper script available online that could handle this for us, so we created one to tackle the problem. In this post, I explain how to get your hands on it and how to get it running on your website.
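The core of such a redirect is small: map a short outgoing URL to the affiliate destination and answer with a 302. A minimal sketch in Python follows; the `/out/<slug>` URL scheme and the `AFFILIATE_LINKS` map are hypothetical examples, not the actual script the post describes.

```python
# Hypothetical slug -> affiliate destination map; edit to taste.
AFFILIATE_LINKS = {
    "hosting": "https://example.com/hosting?ref=123",
    "theme": "https://example.com/theme?ref=123",
}

def redirect_for(path):
    """Return a (status, headers) pair for a request path like '/out/hosting'.

    Unknown slugs get a 404 so broken links don't silently forward."""
    slug = path.rstrip("/").rsplit("/", 1)[-1]
    target = AFFILIATE_LINKS.get(slug)
    if target is None:
        return 404, {}
    # 302 keeps the redirect temporary; X-Robots-Tag asks crawlers
    # not to index the intermediate /out/ URL itself.
    return 302, {"Location": target, "X-Robots-Tag": "noindex"}
```

Wiring this into a real server (WSGI, .htaccess rewrite, etc.) is left out here; the point is that the redirect itself is just a lookup plus a `Location` header.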
Traditionally, you use a robots.txt file on your server to manage which pages, folders, subdomains or other content search engines are allowed to crawl. But did you know that there's also such a thing as the X-Robots-Tag HTTP header? In this post we'll discuss the possibilities and why this might be a better option for your blog. Quick recap: robots.
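Because X-Robots-Tag is an HTTP response header, it can cover non-HTML files that a meta robots tag can't reach. A sketch for Apache with mod_headers enabled (the file pattern is just an example):

```apache
# Ask crawlers not to index or follow links in PDF and Word files.
<FilesMatch "\.(pdf|docx?)$">
    Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```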
… Google doesn’t always spider every page on a site instantly; in fact, it can sometimes take weeks. That can get in the way of your SEO efforts: your newly optimized landing page might not get indexed. At that point, it becomes time to optimize your crawl budget. Crawl budget is the time Google has in a given period to crawl your site. It might…
…: not spreading link value

robots.txt syntax:
- User-agent directive (and the most common user agents for search engine spiders)
- Disallow directive
- How to use wildcards / regular expressions

Non-standard robots.txt crawl directives:
- Allow directive
- noindex directive
- host directive
- crawl-delay directive
- sitemap directive for XML…
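A small robots.txt tying several of these directives together might look like this; the paths and sitemap URL are hypothetical examples, not recommendations for any particular site:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
# Wildcard: block any URL containing a query string.
Disallow: /*?
Sitemap: https://www.example.com/sitemap_index.xml
```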
… In February 2009, six years to the day from when this is published, Google, Bing and Yahoo! introduced the rel=canonical link element (Matt’s post is probably the easiest reading). While the idea is simple, the specifics of how to use it turn out to be complex. The basic premise is: if you have several similar versions of the same content, you…
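In its simplest form, the element is a single line in the `<head>` of each duplicate or variant page, pointing at the preferred URL (the URL below is a placeholder):

```html
<link rel="canonical" href="https://www.example.com/original-article/" />
```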