JavaScript (JS) and Search Engine Optimization (SEO)

Last Edited September 11, 2023 by Garenne Bigby in Search Engine Optimization

In today’s online environment, there seems to be no end to the kinds of websites that web developers can create. Once, developers were limited to Hypertext Markup Language and Cascading Style Sheets (HTML and CSS) in constructing sites. Now, however, these two elements are only the foundation or the “bones” of a website. The bulk of a site’s programming work, its “muscles,” lies in the integration of JavaScript. Developers fluent in JavaScript can create amazing online content: interactive graphics, embedded videos, and reactive elements, to name a few examples. With the many design tools built on JavaScript, the sky is truly the limit when it comes to website design.

Unfortunately, while site design has advanced considerably, the ability of search engines to record and assess it still lags behind. Despite the impressive strides Google has made in making SEO a much fairer process, it has not fully adapted its protocols to index JavaScript-based content properly. For any number of reasons, when Google first encounters a new webpage whose content is rooted in JavaScript, it often fails to fully assess that content. As a result, the page can be indexed improperly and receive a search engine results page (SERP) ranking much lower than it truly deserves.

Naturally, this poses a number of questions for JavaScript programmers who aspire to work in website design, as well as web developers who are keen to use scripts in their sites. For starters, why does Google have so many issues ranking JavaScript-based content? What kinds of problems cause Google to rank some JavaScript sites higher than others? And, perhaps most important of all, what can developers and programmers do to mitigate these problems?

Over the course of this article, we intend to shed light on all of these questions and their solutions. By no means should this article be considered a comprehensive assessment of JavaScript-based SEO; that is a subject that could fill multiple university textbooks and formal courses. While we encourage readers to conduct their own research into JavaScript and SEO, we hope the following sections show readers how to direct their studies.




Where the Problems Lie

Any problems that Google’s protocols have with ranking a JavaScript-based website usually break down into four categories:

  • What Google sees: errors can arise from discrepancies between the page Google renders and the page users see.

  • What the code does: what the programmer intended the JavaScript to do may not be what it actually does when Google executes it.

  • Crawlability: the structure of the site affects how easily it can be crawled.

  • Technology: whether Google’s current technology can handle the site affects how it gets ranked.

The following sections explore each of these in greater detail.


Rendering

Perhaps the first hurdle that JavaScript-minded web developers, if not all developers, need to clear is the notion that search engines like Google “see” websites differently. More precisely, the page that Googlebot sees when it first loads is not what users see. Users see the finished product; Googlebot first sees the source code for that page.

This is what is known as “rendering” the webpage. The process is fairly straightforward from a procedural point of view. The computer doing the rendering receives the source code, interprets it, and executes it; this involves a combination of HTML, CSS, and JavaScript. The HTML forms the foundation, telling the computer that the file is a webpage and defining its structure. The JavaScript is then executed, creating the dynamic content the developer intended. Finally, the CSS is applied, placing the needed cosmetic touches on the page.
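
As a minimal illustration of how the three layers fit together (the filenames here are hypothetical), a page might be assembled like this:

    <!-- index.html: the structural foundation -->
    <html>
      <head>
        <link rel="stylesheet" href="styles.css"> <!-- cosmetic layer -->
      </head>
      <body>
        <div id="app">Static content lives here.</div>
        <script src="app.js"></script>           <!-- dynamic layer -->
      </body>
    </html>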

Overall, there are two types of rendering—server-side and client-side—which differ in exactly where and how the rendering occurs.

  • Server-side rendering has always been the “traditional” means of rendering a web page; it is what most think of when they imagine rendering. In server-side rendering, the computer requesting the web content (the web browser) receives the page with most of its rendering already completed by the computer storing it (the server). All the requesting computer must do is download the CSS file to complete the cosmetic adjustments; the bulk of the rendering work has been done.

  • Client-side rendering is a newer school of thought on the subject. Here, it is the requesting computer (the client) that handles the rendering. Instead of receiving a pre-rendered site, the browser receives an HTML document (the page template) that refers to the JavaScript and CSS files. Initially, the browser produces a blank page from the HTML file; after downloading the JavaScript file, it executes the code and builds the content, replacing the blank page. After this, it applies the CSS, as sketched below.
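
A minimal sketch of the client-side pattern (the element ID and filenames are invented for the example): the HTML arrives essentially empty, and the JavaScript fills it in only after it has been downloaded and executed.

    <!-- shell.html: what the browser (and Googlebot) receives first -->
    <body>
      <div id="root"></div>
      <script src="render.js"></script>
    </body>

    // render.js: builds the visible content on the client
    document.getElementById('root').innerHTML =
      '<h1>Rendered in the browser</h1>';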

It is client-side rendering that, sadly, is more problematic. As one can imagine, since the first thing Google sees is a blank webpage, if there is any problem with the JavaScript code, that blank page is all Google has to rank. Regardless of whether the fault lies with the developer or with Google, the result is the same: with nothing but a blank page to go on, Google will rank the site far more harshly than it truly deserves.

What web developers should take from this is that it is crucial to test-run their websites before putting them online, especially if they are using client-side rendering. If there is any kind of error that could affect the rendering, it must be detected and corrected before Google sees it. It would be better to rewrite a thousand lines of code than put a single website online that gets a poor ranking.


JavaScript Errors

The next obstacle that JavaScript web developers often face ties in with the previous one; in fact, it often causes or contributes to the first. Though HTML and JavaScript work well together, they are fundamentally different in how they handle coding errors. HTML is relatively merciful when it encounters an error; browsers will usually still render the page, and it is often easy to figure out what went wrong.

JavaScript, on the other hand, is utterly merciless when it comes to handling errors. As a programming language with strict syntax rules, it cannot tolerate any deviation from what the author intended; if the code does not follow the rules exactly, it will not execute at all. As such, if the developer makes even the tiniest error in the code, even a single missing character, it will yield a SyntaxError, making it impossible for Googlebot to render the site.
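
To make this concrete, here is a small illustration (the element ID is invented for the example): one missing parenthesis is enough to stop the entire file from executing.

    // Intended:
    document.getElementById('root').innerHTML = 'Hello';

    // Typo -- the ')' after 'root' is missing:
    document.getElementById('root'.innerHTML = 'Hello';

    // The parser rejects the whole file with a SyntaxError, so even
    // the correct lines elsewhere in the same file never run.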

When one factors in inevitable human error, it becomes easy to see why running SEO for JavaScript-based sites can be so vexing. A search for JavaScript errors turns up a plethora of stories in which complex, and otherwise perfect, JavaScript programs were stymied by an error on a single line of code. These exercises in frustration are not exclusive to newcomers to programming; even seasoned masters of JavaScript are far from immune to the SyntaxError. Again, this speaks to the importance of thoroughly testing one’s code before uploading it.


Crawling and Indexing

To understand the issues that can arise during site crawling and indexing, one needs a grasp of the crawling process. For websites that use no JavaScript, the process is relatively simple and follows almost directly from rendering. The difference is that after Googlebot downloads the site’s HTML, it extracts any external links from that code so they, too, can be crawled. From there, it downloads the CSS code and sends the entire downloaded site to the indexing program.

When JavaScript is involved, however, things get a bit hairier. After downloading the initial HTML, CSS, and JS files, Googlebot must hand the page off to a rendering service. From that point, Googlebot must wait until the JavaScript code is compiled and executed before it can proceed. Depending on the JS file, the rendering service may need to call additional libraries to execute the code.

Once the code is executed, the indexer can finally do its job, but even then Googlebot still has work to do. While the indexer works, Googlebot gleans any external links from the rendered website and adds them to its crawling queue. As one can see, JavaScript adds a great deal of complexity and wait time to site crawling, both of which can lead to errors at any step.
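
As a rough conceptual sketch of the flow just described (this is illustrative pseudocode with invented stand-in helpers, not Google’s actual crawler):

    // Conceptual sketch only; every helper here is an invented stand-in.
    const crawlQueue = [];
    const download = async (url) => ({ html: '<div id="app"></div>', js: '/* app code */' });
    const render = async (html, js) => html;         // stands in for the rendering service
    const extractLinks = (html) => [];               // stands in for link extraction
    const index = (page) => console.log('indexed');  // stands in for the indexer

    async function crawl(url) {
      const { html, js } = await download(url);      // first wave: fetch resources
      const rendered = await render(html, js);       // wait for the JS to execute
      index(rendered);                               // only now can indexing proceed
      extractLinks(rendered).forEach((link) => crawlQueue.push(link)); // second wave
    }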

Since it follows the rendering process, crawling JavaScript sites runs into many of the same issues. If the content does not render properly, Google has only a blank HTML file to rank. Furthermore, even if the JavaScript code executes without error, the entire process takes far longer than it does for non-JavaScript websites. This yields two problems for a site’s SEO. First, one must consider the notion of crawl budget.

Essentially, Google can only allot so much time to indexing any given website it encounters. If indexing a site takes too long, such as waiting for a large JavaScript file to execute, Google will drop that site and move on to the next one. Second, even if the time it takes to index a site falls within budget, there is still the risk that other sites get indexed faster. If the site is a personal blog, this is not a terrible loss. If the site is for an online business, however, a lower ranking in the SERPs can mean thousands of dollars in lost revenue.


Technical Limitations

The final challenge to JavaScript-based SEO comes with the limitations of Google’s indexing technology. While Google Chrome is kept up to date on the user’s side, this is regrettably not the case for Google’s crawler and indexer. Googlebot’s technology is based on Chrome 41, the 2015 iteration of the browser. As such, it does not have access to the full array of features and libraries that its user-facing counterpart does. For web developers, this means the program that renders, crawls, and indexes their site does not have the same capabilities as their target browsers.

Another technical factor that developers must consider is the fact that Googlebot is not designed to be a browser. Certainly, it shares some characteristics with Chrome, but its ultimate purpose is far removed from a typical browser. In order to efficiently complete its tasks, Googlebot only retrieves the most relevant files from the web—what it considers essential for rendering.

As a result, if Googlebot erroneously determines that a JavaScript file is not necessary to render a site, it will not download it. In cases like these, we once again have Google viewing a blank webpage. This is an important factor to keep in mind when incorporating JavaScript into a website.
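
One defensive pattern, sketched here under the assumption that the page’s core content is known when the HTML is generated, is to ship critical content in the initial HTML and treat JavaScript as an enhancement:

    <!-- Critical content lives in the HTML itself... -->
    <div id="product">
      <h1>Acme Widget</h1>
      <p>In stock. $19.99.</p>
    </div>

    <!-- ...so even if enhance.js is skipped or fails to download,
         there is still real content for Googlebot to index. -->
    <script src="enhance.js"></script>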


See Through Google’s Eyes

As one might imagine, the only way to know what problems, if any, a site’s JavaScript will have is to see what Google sees. In other words, by viewing a site the same way Googlebot will, developers can make sure their site is what they want Google to see. One way to do this is with the Fetch and Render tool in Google Search Console. A better solution, however, is to download a copy of Chrome 41. This way, users have the very same technology that Googlebot uses to view a site; the error logs will be exactly the same.

Furthermore, Chrome 41 is compatible with numerous other tools a developer can use, such as the Rich Results Test and the Mobile-Friendly Test. Both of these cover gaps in Chrome 41’s testing functionality, and they allow users to view their sites from both mobile and desktop perspectives.


Best Practices

In addition to the above points, there are a number of general methods that web developers can make use of. With these as guidelines, web developers have a better chance of avoiding errors that could negatively impact their site’s ranking.

For those using Google’s Fetch and Render tool, keep in mind that it can only detect technical problems. In other words, it determines only whether Google is technically capable of rendering a site; it makes no assessment of time. As mentioned before, perfect technical execution does not mean timely execution. If users need a sense of how quickly their sites render, they may want to use other tools in tandem with Fetch and Render. Another helpful guideline is to stay aware of JavaScript file sizes; larger files need more time.
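
For a quick, rough timing check, one option (a minimal sketch using the browser’s standard Navigation Timing API) is to log how long the page takes to become usable:

    // Log rough page-load timing to the browser console.
    window.addEventListener('load', () => {
      // Wait one tick so loadEventEnd has been recorded.
      setTimeout(() => {
        const [nav] = performance.getEntriesByType('navigation');
        console.log('DOM ready after', Math.round(nav.domContentLoadedEventEnd), 'ms');
        console.log('Fully loaded after', Math.round(nav.loadEventEnd), 'ms');
      }, 0);
    });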

A helpful way to diagnose almost any SEO problem with a site is to use Google Search Console. The Console acts as a kind of thermometer, detecting general problems with a site’s SEO, and it gives quick access to Fetch and Render, which can spot direct rendering issues. That said, not every Google tool is completely effective. For instance, while checking Google Cache is helpful for spotting rendering issues in HTML, the same is not true for JavaScript.

One potential problem can arise if users plan to use canonical tags on their website. While these tags alone are not problematic, they can cause issues if they are injected by JavaScript. Although experts have shown that Google can properly detect JavaScript-inserted tags, users should bear in mind that those JS files were devised by masters of the craft. We will not say that using JavaScript to inject tags is a guaranteed source of error, but placing tags directly in the HTML is the better practice; this way, users avoid the source of error altogether.
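
To make the safer practice concrete (the URL here is a placeholder), compare the two approaches:

    <!-- Preferred: the canonical tag declared directly in the HTML head -->
    <link rel="canonical" href="https://example.com/page">

    // Riskier: the same tag injected with JavaScript after the page loads
    const link = document.createElement('link');
    link.rel = 'canonical';
    link.href = 'https://example.com/page';
    document.head.appendChild(link);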

A final note: users should be wary of using JavaScript to generate link URLs, because those links can contain hashes (# symbols). This is problematic because Googlebot views hashes in the opposite light that social media does. Whenever Googlebot sees a hash in a URL, it assumes the text that follows is irrelevant and ignores it.

Therefore, any link that includes a hash will be viewed and followed incorrectly by Googlebot. When Googlebot arrives at the wrong place, it assumes the link is bad and holds it against the site. To keep this from happening, users should ensure that any JavaScript-generated links do not contain hashes, or simply use non-JS links, as in the comparison below.
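
For illustration (the paths are hypothetical), compare a fragment-based link with a plainly crawlable one:

    <!-- Googlebot ignores everything after the #, so this page is never crawled -->
    <a href="/#/products/42">Acme Widget</a>

    <!-- A plain URL remains fully crawlable -->
    <a href="/products/42">Acme Widget</a>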


Conclusion

In closing, we wish to reiterate that this article should not be taken as the definitive source on JS-based SEO. Another point worth remembering is that JavaScript-based SEO follows the same rules and principles as conventional SEO. When a problem arises, it is easy to assume it lies in the JavaScript.

Users who perform their due diligence, however, may find that trying conventional SEO fixes first can eliminate much of the frustration that comes with JS troubleshooting. In short, anyone who is good at conventional SEO should face no insurmountable problems with JavaScript-based SEO.
