Crawl directives archives

Recent Crawl directives articles

Pagination & SEO: best practices

Paginated archives have long been a topic of discussion in the SEO community. Over time, best practices for optimizing them have evolved, and by now they are pretty clearly defined. This post explains what these best practices are. It’s good to know that Yoast SEO applies all these rules to every archive with pagination. Indicate that an …

Read: "Pagination & SEO: best practices"
Pagination & SEO: best practices

Closing a spider trap: fix crawl inefficiencies

Quite some time ago, we made a few changes to how yoast.com is run as a shop and how it’s hosted. In that process, we accidentally removed our robots.txt file and caused a so-called spider trap to open. In this post, I’ll show you what a spider trap is, why it’s problematic and how you …
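
A quick way to check that a fix like this holds, sketched below with Python’s built-in urllib.robotparser and purely hypothetical rules and URLs (not the actual yoast.com configuration), is to feed it your robots.txt directives and confirm that the endlessly filterable URLs a spider trap produces are blocked while normal pages remain crawlable.

from urllib import robotparser

# Hypothetical robots.txt rules that keep crawlers out of the
# endlessly filterable shop URLs that can form a spider trap.
RULES = """\
User-agent: *
Disallow: /shop/filter/
"""

parser = robotparser.RobotFileParser()
parser.parse(RULES.splitlines())

# A normal page should be crawlable; the faceted filter URLs should not be.
for url in (
    "https://example.com/shop/plugins/",
    "https://example.com/shop/filter/price/color/price/color/",
):
    verdict = "crawlable" if parser.can_fetch("*", url) else "blocked"
    print(f"{verdict}: {url}")

If the filter URLs come back as crawlable, the trap is open again.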

Read: "Closing a spider trap: fix crawl inefficiencies"
Closing a spider trap: fix crawl inefficiencies


What’s the X-Robots-Tag HTTP header? And how to use it?

3 January 2017 by Maria Gomez Benitez

Traditionally, you’d use a robots.txt file on your server to manage which pages, folders, subdomains, or other content search engines are allowed to crawl. But did you know there’s also such a thing as the X-Robots-Tag HTTP header? Here, we’ll discuss the possibilities and how this might be a better option for your …
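
As a minimal sketch of how the header can be set (assuming a Python/Flask app; the route and filename are hypothetical, and any server or framework that lets you add response headers works the same way), the example below attaches an X-Robots-Tag to a PDF, a response type that has no HTML in which a robots meta tag could live.

from flask import Flask, send_file

app = Flask(__name__)

@app.route("/downloads/price-list.pdf")
def price_list():
    # PDFs can't carry a robots meta tag, so the directive travels
    # in the HTTP response header instead.
    response = send_file("price-list.pdf")
    response.headers["X-Robots-Tag"] = "noindex, nofollow"
    return response

A plain HEAD request (or your browser’s network tab) is enough to verify that the directive actually shows up in the response headers.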

Read: "What’s the X-Robots-Tag HTTP header? And how to use it?"
What’s the X-Robots-Tag HTTP header? And how to use it?

Google Panda 4, and blocking your CSS & JS

A month ago, Google introduced its Panda 4.0 update. Over the last few weeks, we’ve been able to “fix” a couple of sites that got hit by it. These sites both lost more than 50% of their search traffic in that update. When they recovered, their previous positions in the search results came back. Sounds too good to be …
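
The same standard-library robots.txt parser used above offers a rough way to spot this problem; the rules and asset URL below are hypothetical, but they mirror the classic mistake of disallowing theme directories and thereby hiding CSS and JavaScript from Googlebot.

from urllib import robotparser

# Hypothetical rules that block a theme directory, and with it the CSS
# and JavaScript Googlebot needs to render the page properly.
RULES = """\
User-agent: *
Disallow: /wp-content/themes/
"""

parser = robotparser.RobotFileParser()
parser.parse(RULES.splitlines())

asset = "https://example.com/wp-content/themes/example-theme/style.css"
if not parser.can_fetch("Googlebot", asset):
    print(f"Googlebot cannot fetch {asset}; remove the Disallow rule.")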

Read: "Google Panda 4, and blocking your CSS & JS"
Google Panda 4, and blocking your CSS & JS