How to Get Google to Crawl the Right Content
Last Edited September 11, 2023 by Garenne Bigby in Search Engine Optimization
How Does Google Find Your Website Content?
Google uses software programs and algorithms to gather web content so that people searching on Google can readily find the information they are looking for. Because of this, very little effort is required from webmasters to make sure that Google retrieves their content.
Crawling is the process by which Google collects all of the web content that is available to the public so that it can be displayed in search results for the appropriate search queries. During crawling, Google's specialized software, appropriately known as web crawlers, automatically discovers and acquires websites. A web crawler works by following links from site to site, downloading the pages and storing them to be used later. Intricate algorithms then sort and analyze the pages, and the results are updated within Google's search engine results. Google's main web crawler is known affectionately as Googlebot.
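As a rough illustration of that crawl loop, the sketch below (Python, standard library only) follows links from a start page, downloads each page, and stores it for later use. The start URL, the page limit, and the same-host restriction are assumptions made for the example; they are not a description of how Googlebot itself is configured.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import Request, urlopen

class LinkParser(HTMLParser):
    """Collects the href values of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    """Follow links from start_url, downloading and storing each page."""
    seen, queue, store = set(), deque([start_url]), {}
    host = urlparse(start_url).netloc
    while queue and len(store) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            req = Request(url, headers={"User-Agent": "example-crawler"})
            html = urlopen(req, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue  # unreachable pages are simply skipped
        store[url] = html  # "download the page and store it for later use"
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == host:  # stay on one site for this sketch
                queue.append(absolute)
    return store

# Example (placeholder domain): pages = crawl("https://example.com/")
```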
Another process that Google uses to understand web pages is rendering. Rendering helps Google interpret how web pages look and how they behave for visitors using different browsers and devices. Much like a web browser displaying a page, Google retrieves the URL and then executes the code for that page, generally HTML and JavaScript. Google then crawls every resource referenced by that main file in order to assemble the visual aspects of the page and gain a better understanding of the website.
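To make that rendering step more concrete, this small sketch lists the sub-resources (scripts, stylesheets, and images) referenced by one HTML document; these are the extra files Google also has to fetch before it can assemble the visual page. The markup and URLs are placeholders, and the parsing rules are deliberately simplified.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class ResourceParser(HTMLParser):
    """Collects the URLs of scripts, stylesheets, and images referenced by a page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.resources = []
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and attrs.get("src"):
            self.resources.append(urljoin(self.base_url, attrs["src"]))
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.resources.append(urljoin(self.base_url, attrs["href"]))
        elif tag == "img" and attrs.get("src"):
            self.resources.append(urljoin(self.base_url, attrs["src"]))

# Placeholder markup standing in for a real page.
html = '<link rel="stylesheet" href="/style.css"><script src="/app.js"></script><img src="/logo.png">'
parser = ResourceParser("https://example.com/page")
parser.feed(html)
print(parser.resources)  # each of these must be crawlable for rendering to succeed
```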
When Google is not able to render or crawl a web page, the site's visibility in Google's search results can suffer. First, when Google cannot crawl a website, it cannot gather any information about it. The site, or parts of the site, cannot be discovered in a natural way, so it cannot be surfaced to Google users whose search queries are relevant to those sites or pages.
Next, if Google is not able to render the pages on a site, it struggles to understand the content because important information about each page's visual layout is missing. If this happens, the site's content visibility can be greatly reduced within Google's search pages. Google renders web pages in order to estimate how valuable the website is to different audiences and to establish where specific links will be shown within Google's search results pages. Luckily, there is a tool known as Fetch as Google that can help diagnose a web page's crawling and rendering, improve the site's position within Google's search engine results pages, and help the site reach its target audience.
Crawling & Rendering Are Important
It is vital to the success of a website that it can be crawled and rendered correctly, ensuring it gets the best possible treatment from Google search. Though crawling and rendering are very important, it is also important to recognize when blocking content from being crawled and rendered will improve the overall success of the website.
You should take the time to confirm that Googlebot and Google's other web crawlers have access to your website at the network level. It is vital that the URLs you would like Google to display in search results are actually reachable by Google. Oftentimes, URLs are blocked on purpose by website owners. Before blocking any URLs, make sure that doing so will not hide content that you would like Google to discover and display in its search results pages.
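One quick way to confirm that the URLs you care about are not accidentally blocked is to test them against your robots.txt with Python's standard urllib.robotparser. The domain and paths below are placeholders; substitute your own.

```python
from urllib.robotparser import RobotFileParser

# Placeholder site; point this at your own robots.txt.
rp = RobotFileParser("https://example.com/robots.txt")
rp.read()

# Placeholder URLs you would like Google to display in search results.
urls_to_check = [
    "https://example.com/",
    "https://example.com/products/widget",
    "https://example.com/assets/app.js",
]

for url in urls_to_check:
    allowed = rp.can_fetch("Googlebot", url)
    print(("OK     " if allowed else "BLOCKED"), url)
```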
It is up to the website owner or designer to allow Googlebot access to all of the resources that are referenced on the website. Google considers all of the non-text content as well as the overall visual layout when deciding where the website appears within the search results pages. The visual elements of the website help Google fully understand the web pages, and the better Google understands a website, the better it can match that site to the people looking for the particular content it offers. After Google has retrieved the pages, Googlebot runs the code and deciphers the content to better understand the overall structure of the website. The information collected during rendering is used to rank the value and quality of the content against other websites and against what people are searching for with Google's search engine.
If there are pages on your site that use code to arrange or display the content, Google has to properly render that content in order for it to be displayed in Google search. In many cases, the bulk of a dynamic website's textual content can only be retrieved by rendering the pages, so that Google sees the website the way any other visitor would. If rendering fails, Google might not be able to retrieve any of the content. To bring this full circle, when Google cannot retrieve any of the content from a web page, it cannot know whether the information on that page is relevant to any specific search query, and it will not show the page within search results.
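A rough way to spot this situation is to check how much readable text the raw, unrendered HTML actually contains. The heuristic below is only a sketch built on assumptions (the threshold and the sample markup are arbitrary), not a substitute for a real render, but a page whose raw HTML is almost nothing except script references is a strong hint that its content only exists after JavaScript runs.

```python
from html.parser import HTMLParser

class TextAndScripts(HTMLParser):
    """Counts visible text characters and external scripts in raw HTML."""
    def __init__(self):
        super().__init__()
        self.in_ignored = 0   # depth inside <script>/<style>
        self.text_chars = 0
        self.scripts = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.in_ignored += 1
        if tag == "script" and dict(attrs).get("src"):
            self.scripts += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.in_ignored:
            self.in_ignored -= 1
    def handle_data(self, data):
        if not self.in_ignored:
            self.text_chars += len(data.strip())

# Placeholder markup standing in for a JavaScript-driven page.
raw_html = '<html><body><div id="app"></div><script src="/bundle.js"></script></body></html>'
p = TextAndScripts()
p.feed(raw_html)
if p.text_chars < 200 and p.scripts:  # arbitrary threshold for the sketch
    print("Little visible text before rendering; content likely depends on JavaScript.")
```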
Blocked Resources Report
Googlebot must have access to various resources on a web page so that it can render and index the page as needed. This includes things like image files, CSS, and JavaScript, which let the bot view the page the way a normal user would. If a robots.txt file does not allow crawling of those resources, it will hurt how well Google can render and index the page, which in turn affects how the page is ranked in Google's search engine results.
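As a hypothetical example of how this happens, a robots.txt like the one below keeps crawlers away from the very files needed to render the page. The directory names are placeholders, with /assets/ assumed to hold the site's CSS and JavaScript.

```
# Hypothetical robots.txt that accidentally blocks the files needed for rendering
User-agent: *
Disallow: /assets/
Disallow: /images/
```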
The Blocked Resources Report displays the resources that are used by the site yet are blocked to Googlebot. Not every resource is shown, only the ones that Google assumes are under the webmaster's control.
- The main report page shows a list of hosts that provide resources used by your site but blocked to Googlebot. Some of these resources will be hosted on your own site, while others will be hosted on other sites.
- Select any host in the table to view a list of resources that are blocked from that host. There will be a tally of pages that are on your site affected by each of the blocked resources.
- Select any of the blocked resources within the table to see a list of pages that will load the resource.
- Select any page within the table that hosts a blocked resource to get instructions on how to unblock the resource, or follow the instructions below.
In order to unblock your resources, you will need to do the following:
- Open the Blocked Resources Report to view a list of hosts with blocked resources used on your website. Begin with the hosts that you own, because you can update their robots.txt files directly (an example fix is shown after this list). You may not have control of every host, but you will need to edit the ones that you can.
- Select a host on the report to view a list of blocked resources that come from that host. From the list, begin with the ones that may be affecting your layout and content in a significant way.
- For each resource that is affecting the layout, expand to view the pages that are using it. Click on any of the pages and follow the instructions. After this, fetch and render the page to ensure that the resource will appear.
- Continue this process until Googlebot can access all of the previously blocked resources.
- For the hosts that you do not own but that have a strong visual impact on your site, contact their webmasters and ask them to unblock the resources for Googlebot. The alternative is to remove your dependency on those resources.
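Continuing the hypothetical robots.txt from earlier, the fix described in the first step usually amounts to removing or overriding the Disallow rules that cover layout-critical files, for example:

```
# Hypothetical robots.txt after the fix: the image block is removed entirely,
# and more specific Allow rules let crawlers reach the CSS and JavaScript
# (Google honors the most specific matching rule, so these win over Disallow: /assets/)
User-agent: *
Disallow: /assets/
Allow: /assets/css/
Allow: /assets/js/
```

After saving the change, run a fetch and render (or the robots.txt check sketched earlier) to confirm that Googlebot can now reach each resource.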
Use Fetch as Google for Websites
Fetch as Google is a tool that lets any user test how Google will crawl or render a URL within a website. The tool can be used to determine whether Googlebot is able to access a page on the website, how it will render the page, and whether any page resources are blocked from Googlebot. Essentially, it simulates the crawl and render process just as Google normally performs it, and it is quite useful for ironing out any crawling issues a website may be having.
Using Fetch as Google is pretty simple, and takes only a few steps to complete.
- In the text box, you will need to enter the path component of a URL on the site to be fetched, relative to the site root. When you leave the text box blank, the site root page will be fetched.
- Choose which type of Googlebot you wish to perform the fetch as.
- There are both desktop and mobile options; the one you choose determines which crawler makes the fetch.
- Choose to simply Fetch or both Fetch and Render.
- When you choose just Fetch, the tool requests the specified URL on the website and shows the HTTP response. It does not request any additional resources for the page. It is a fast process that can be used to diagnose any connectivity or security issues the site may be having, and the request will either succeed or fail. (A rough approximation of this kind of check is sketched after this list.)
- When you Fetch and Render, Google does the same as described above and then requests and runs all of the resources on the page. This reveals the visual differences between how a user sees the page and how Googlebot sees it.
- After this, the request is placed in the fetch history table with a status of pending. Once the request is done, you will be notified of the success or failure of the process along with other information. You can view additional details for successful requests by clicking on them.
- Google allows 500 fetches weekly. You will be notified when you are reaching the limit.
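The tool itself lives in Search Console, but you can loosely approximate the plain Fetch step with a simple HTTP request that reports the status code for a URL. This is only a sketch: the user-agent string is a stand-in, and sending it does not make the request come from Google or reproduce Googlebot's behavior.

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def fetch_like_googlebot(url):
    """Rough approximation of a plain Fetch: request the URL and report the HTTP response."""
    headers = {"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"}
    try:
        with urlopen(Request(url, headers=headers), timeout=10) as resp:
            print(resp.status, resp.reason, "->", resp.geturl())
    except HTTPError as err:
        print("HTTP error:", err.code, err.reason)
    except URLError as err:
        print("Could not connect:", err.reason)

# fetch_like_googlebot("https://example.com/")  # placeholder URL
```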
The fetch history shows your last 100 fetch requests. You can view the details of any completed request, which will show a status of completed, partial, redirected, or a specific error type.
If the request has been completed, that means Google contacted, crawled, and was able to get the resources referenced by the page.
If the fetch status is partial, Google was able to get a response from the site and has fetched the URL, but it could not get all of the resources referenced by the page. This can happen when those resources are blocked, for example by robots.txt files. If the process was fetch only, try a fetch and render instead. Look at the rendered page to see whether any important resources were blocked. If so, unblock them in any robots.txt files that you own; for the ones you don't own (if any), ask the owners to unblock them.
When you are shown a redirected status, you will have to follow the redirect yourself. If the redirect points to a URL within the same property, the tool offers a “Follow” option that populates the fetch box with the redirect URL for you. If the redirect points to another property, copy the redirect URL and paste it into the fetch box of that property.
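When you do have to chase a redirect by hand, a small script that refuses to follow redirects silently makes the chain explicit. The sketch below uses only Python's standard library; the starting URL is a placeholder.

```python
import http.client
from urllib.parse import urljoin, urlparse

def trace_redirects(url, max_hops=10):
    """Print each hop in a redirect chain instead of following it silently."""
    for _ in range(max_hops):
        parts = urlparse(url)
        conn_cls = http.client.HTTPSConnection if parts.scheme == "https" else http.client.HTTPConnection
        conn = conn_cls(parts.netloc, timeout=10)
        path = parts.path or "/"
        if parts.query:
            path += "?" + parts.query
        conn.request("GET", path, headers={"User-Agent": "redirect-tracer"})
        resp = conn.getresponse()
        print(resp.status, url)
        location = resp.getheader("Location")
        conn.close()
        if resp.status in (301, 302, 303, 307, 308) and location:
            url = urljoin(url, location)  # next hop, possibly on another property
        else:
            return url  # final destination

# trace_redirects("https://example.com/old-page")  # placeholder URL
```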