Crawlability Issues - How to Make Sure That Search Engines Can Spider Your Site
Just one simple mistake can seriously jeopardize your website’s crawlability. Follow these tips to make sure search engines can spider and index your site.
Search engine spiders aren’t THAT smart. Spiders (also called bots or crawlers) are programmed to constantly crawl the web, following links until they find content they can index. They aren’t going to stop and wait around until you fix that broken link. That’s not their job. It’s YOUR job to make sure search engines can spider your site.
For your site’s pages to be crawled, they need to link to other locations: visitors should be able to navigate easily within your site or follow links to external sites. Bad or broken links seriously reduce your site’s crawlability, and your pages could end up filtered out of SERPs (search engine result pages).
Crawlability errors are what kill your SEO – don’t make the following mistakes or say goodbye to being found in search engines.
• Pay attention to robots.txt files – This is the easiest way to mess up your crawlability. Robots.txt files tell search engine spiders which parts of your site they may crawl. One mistake can block your entire site from search engines, so be extra careful when working with these files.
• Don’t use too many variables in your URLs – Google can crawl long links, but it’s still not a fan of long URLs. Shorter URLs also get clicked more often in SERPs.
• Don’t put session IDs in URLs – Spiders won’t crawl URLs with session IDs, and every new session creates another URL for the same content, clogging up the SERPs with duplicates. Store session IDs in cookies instead.
• Watch out for code bloat – This can be a HUGE problem. Spiders are good at separating code from content, but don’t make it harder for them than it has to be. Don’t bury your content under so much code that it’s difficult to find.
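Because a single stray robots.txt rule can block whole sections of a site, it’s worth testing your rules before you deploy them. Here is a minimal sketch using Python’s standard-library robots.txt parser; the rules and URLs below are made-up examples, not recommendations for any particular site.

```python
# Sketch: parse a sample robots.txt and check which URLs a crawler may fetch.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block /admin/ and /cgi-bin/ for all user agents.
SAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Disallow: /cgi-bin/
"""

parser = RobotFileParser()
parser.parse(SAMPLE_ROBOTS_TXT.splitlines())

# Public pages are crawlable...
print(parser.can_fetch("*", "http://www.example.com/products/widget"))  # True
# ...but anything under /admin/ is blocked for every user agent.
print(parser.can_fetch("*", "http://www.example.com/admin/login"))      # False
```

Running a check like this against your real robots.txt before pushing it live is a cheap way to catch a typo that would otherwise shut spiders out.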
How to Fix Crawlability Mistakes
• Do NOT change your URLs unless they are broken. When you do create new ones, keep in mind the tips above: keep them short, don’t use too many variables, leave out session IDs, etc.
• When creating URLs, use keywords separated by hyphens. Spiders recognize hyphens as spaces; underscores don’t get treated as word separators. And don’t stuff URLs with keywords – spammy-looking URLs hurt more than they help.
• Fix your domain canonicalization. Are you going to use www.business.com or business.com? Don’t use both! Whichever version gets the most inbound links should be your primary domain. If the secondary version is in use, set up a 301 redirect from it back to your primary domain.
• Use SEO Browser to check how search engines view your site.
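On an Apache server, the canonicalization fix above is commonly handled with a mod_rewrite rule in an .htaccess file. This is a sketch that 301-redirects the non-www domain to the www version; business.com stands in for your own domain, and you would flip the rule if you prefer the non-www version as primary.

```apache
# Sketch: 301-redirect business.com to www.business.com (Apache mod_rewrite).
RewriteEngine On
RewriteCond %{HTTP_HOST} ^business\.com$ [NC]
RewriteRule ^(.*)$ http://www.business.com/$1 [R=301,L]
```

The R=301 flag tells browsers and spiders the move is permanent, so inbound link value consolidates on the primary domain.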
The Benefits of Sitemaps
Sitemaps are XML files that list a site’s URLs along with each URL’s metadata (such as when it was last updated and how often it changes). This allows search engines to crawl sites more efficiently. Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling.
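A minimal sitemap following the sitemaps.org protocol looks like the sketch below; the URLs and dates are placeholders, and every tag except &lt;loc&gt; is optional.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.business.com/</loc>
    <lastmod>2010-01-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.business.com/products/</loc>
    <lastmod>2010-01-01</lastmod>
  </url>
</urlset>
```

Save the file as sitemap.xml at your site’s root and submit it to the search engines’ webmaster tools so spiders know where to find it.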
If you want to be found easily by search engines, build an efficient navigation system. A logical and easy navigation structure will make it easier for spiders to crawl around your site.
Learn more about the author, Zeke Camusio.