ReactJS Technical SEO for a Real Estate Company: From 0 to 45,000 Monthly Visitors

Client: Non-Disclosure Agreement
Niche: Real Estate
Country: Canada
CMS: ReactJS
Skills: Technical SEO
Services: SEO Audit
Costs: $2,500

Introduction

We received a request from a company that works with real estate service providers in Canada and earns affiliate and promotion fees. The problem: even though the site had been up and running for more than a year and the company was constantly creating new blog content, it was not receiving any visitors from Google.

Story

The site was built with ReactJS, so based on our previous experience we already had a few ideas about where the issue might be hiding. Since the company is a start-up and uses a domain name that had never been used before (according to WHOIS history and Web Archive data), there was no need to investigate off-page SEO factors. We therefore proceeded with a technical analysis, as technical issues are the second most common reason a site gets no traffic from Google for an extended period.

Challenge

  • ReactJS required crawling with a JavaScript-enabled browser and, in some cases, a spoofed user-agent (see the sketch after this list).
  • Google had indexed only a small share (approximately 5%) of the pages, so the site did not look non-indexable, and there was no sign of Google penalties or manual actions.
  • A crawl at very high speed (agreed with the client in advance) showed no issues with the web server or hosting.
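
As an illustration of the first point, here is a minimal sketch (in TypeScript, using Puppeteer) of the kind of JavaScript-enabled, user-agent-spoofed check we mean. The URL is a placeholder, not the client's site, and the user-agent string is simply Googlebot's publicly documented one.

```typescript
// Render a page with JavaScript enabled and a spoofed Googlebot user-agent,
// then report how many anchor tags carry a real (non-"#") href.
// The URL below is a placeholder, not the client's site.
import puppeteer from 'puppeteer';

const GOOGLEBOT_UA =
  'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)';

async function renderAsGooglebot(url: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.setUserAgent(GOOGLEBOT_UA);

  // Wait until the network is quiet so client-side rendering has finished.
  await page.goto(url, { waitUntil: 'networkidle0', timeout: 60_000 });

  const hrefs = await page.$$eval('a', (anchors) =>
    anchors.map((a) => a.getAttribute('href') ?? '')
  );
  const realLinks = hrefs.filter((h) => h !== '' && h !== '#');

  console.log(`${url}: ${hrefs.length} <a> tags, ${realLinks.length} with a real href`);
  await browser.close();
}

renderAsGooglebot('https://example.com/some-page').catch(console.error);
```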

Solution

We started by checking Google Search Console and noticed that only about 5% of the pages had been discovered by the Google crawler; the remaining pages appeared neither in Google Search Console nor on the search engine results pages (SERPs).

We then crawled the site with Screaming Frog with JavaScript rendering enabled and compared the crawl with the pages indexed in Google. The crawl depth of the indexed pages was the second level; in other words, the indexed pages were linked directly from the homepage. However, Google had not indexed pages linked only from other pages, and at the same time not even all of the second-level pages were indexed.

Although Googlebot executes JavaScript, it does not behave exactly the same way as a browser, so the site still needs to work without JavaScript. We compared the HTML code with JavaScript turned on and with JavaScript turned off and found the first issue: without JavaScript, most of the internal links had "#" as the "href" attribute of the "a" HTML tag. Interestingly, this issue affected only blog-related pages, which is why it was initially overlooked when we checked the HTML code of the main page.
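
The pattern looked roughly like the following simplified reconstruction (not the client's actual code; the component and prop names are invented for illustration): the real destination lived only in a JavaScript click handler, so with JavaScript disabled the crawler only ever saw href="#".

```tsx
// Simplified reconstruction of the anti-pattern; component and prop names are
// invented for illustration, this is not the client's actual code.
import React from 'react';

// Before: the destination exists only inside the click handler, so with
// JavaScript disabled (or not executed) the anchor is just href="#" and
// carries no crawlable URL.
function BlogLinkBroken({ slug, title }: { slug: string; title: string }) {
  const navigate = (path: string) => {
    // Stand-in for client-side routing logic.
    window.history.pushState({}, '', path);
  };
  return (
    <a
      href="#"
      onClick={(e) => {
        e.preventDefault();
        navigate(`/blog/${slug}`);
      }}
    >
      {title}
    </a>
  );
}

// After: the destination is rendered into the href itself, so the link is
// crawlable even without JavaScript. A router's <Link> component (e.g. React
// Router) achieves the same thing by rendering a real href.
function BlogLinkFixed({ slug, title }: { slug: string; title: string }) {
  return <a href={`/blog/${slug}`}>{title}</a>;
}

export { BlogLinkBroken, BlogLinkFixed };
```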

Next, we switched to the directories, the main landing pages of the website, which were not being indexed either. Once the issue with internal links added dynamically via JavaScript was fixed, we could crawl all the pages both with and without JavaScript. Manual testing from the browser revealed nothing; everything worked well. From this point, we decided to look at those pages the same way Googlebot sees them. Using the "URL Inspection" feature in Google Search Console, we noticed that the content was not being loaded. As a result, about 80% of the website's pages returned exactly the same page (duplicated content) for each URL when Googlebot made the request, while everything worked well in the browser and for other user agents.
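
A quick way to reproduce this finding outside Search Console is to render a sample of URLs with a Googlebot user-agent and fingerprint the visible text: if many URLs collapse into a single fingerprint, Googlebot is effectively seeing the same page everywhere. This is only a sketch under the same assumptions as the earlier Puppeteer example, with placeholder URLs.

```typescript
// Render several URLs the way Googlebot would and hash the visible text,
// so that pages which collapse into the same (empty or boilerplate) content
// show up as a single group. The URLs below are placeholders.
import puppeteer from 'puppeteer';
import { createHash } from 'node:crypto';

const GOOGLEBOT_UA =
  'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)';

async function fingerprintPages(urls: string[]): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.setUserAgent(GOOGLEBOT_UA);

  const groups = new Map<string, string[]>();
  for (const url of urls) {
    await page.goto(url, { waitUntil: 'networkidle0' });
    const text = await page.evaluate(() => document.body.innerText);
    const hash = createHash('sha256').update(text.trim()).digest('hex').slice(0, 12);
    groups.set(hash, [...(groups.get(hash) ?? []), url]);
  }
  await browser.close();

  // Many distinct URLs sharing one hash means Googlebot effectively sees the
  // same page everywhere, i.e. the duplicate-content symptom described above.
  for (const [hash, members] of groups) {
    console.log(`${hash}: ${members.length} page(s) -> ${members.join(', ')}`);
  }
}

fingerprintPages([
  'https://example.com/directory/toronto',
  'https://example.com/directory/vancouver',
  'https://example.com/blog/some-post',
]).catch(console.error);
```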

Further analysis of the HTTP requests made by the browser during page load identified, among hundreds of other assets, a URL containing "/API/" that was requested via AJAX (XMLHttpRequest). It turned out to be the call that fetches the content displayed to the user. Even though it worked well in the user's browser, the API response carried an X-Robots-Tag HTTP header (the equivalent of a robots.txt rule or a robots meta tag) set to "noindex, nofollow". Moreover, if we spoofed the browser user-agent to anything containing "bot" (such as Googlebot), we received a 429 HTTP status code ("Too Many Requests"). This acted the same way as a "Disallow" rule in robots.txt and prevented Googlebot from seeing the content: when Googlebot tried to render a page and called the API for its content, the server blocked the request, Googlebot received no response from the content delivery API, and therefore saw no content on the page. As a result, all of these pages looked identical to Googlebot (about 80% of the pages on the site).
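
The behaviour is easy to confirm with two plain HTTP requests that differ only in their user-agent. The sketch below assumes Node 18+ (for the global fetch) and uses a placeholder API endpoint and user-agent strings; on the broken setup, the browser user-agent would get a 200 response carrying the "noindex, nofollow" X-Robots-Tag, while anything containing "bot" would get a 429.

```typescript
// Request the content API with a regular browser user-agent and with a
// Googlebot user-agent, then compare the status code and the X-Robots-Tag
// header. The endpoint path and user-agent strings are placeholders.
const API_URL = 'https://example.com/api/content?page=/directory/toronto';

const BROWSER_UA =
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36';
const GOOGLEBOT_UA =
  'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)';

async function checkApi(label: string, userAgent: string): Promise<void> {
  const res = await fetch(API_URL, { headers: { 'User-Agent': userAgent } });
  const robots = res.headers.get('x-robots-tag') ?? '(none)';
  console.log(`${label}: status=${res.status}, x-robots-tag=${robots}`);
}

async function main(): Promise<void> {
  await checkApi('browser  ', BROWSER_UA);   // expected on the broken setup: 200 + "noindex, nofollow"
  await checkApi('googlebot', GOOGLEBOT_UA); // expected on the broken setup: 429 Too Many Requests
}

main().catch(console.error);
```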

Result

After all the fixes were released, traffic grew from 0 to 1,500 visitors per day (about 45,000 visitors per month from organic search).

Want to increase your sales with SEO?
Contact us to learn more about how we can help you.