This is why you have an SEO company, like That! Company, to catch and correct what goes wrong with the re-design of an existing website. As anyone who works in the internet marketing industry and has gone through a re-build of a functioning website knows, a myriad of things can go wrong. Recently, a PPC (pay per click) and SEO (search engine optimization) client finished the re-build of their website and launched without allowing us to review it. The launch went horribly wrong, so I will touch on some of the expected and unexpected things that can, and did, go awry.
This client purchased an existing business that had an established ecommerce and informational website and was a leader in its industry. The web developer did not come with the purchase. For reasons never revealed to me, the client was unable to update any pages outside of the shopping cart. The shopping cart was not mobile friendly, and we could see the difference in their conversions from desktop versus mobile: mobile conversions were virtually non-existent. The overwhelming majority of their online sales were being generated by desktop organic search, PPC, direct, and referral visits.
Many of the elements needed to improve their SEO were also simply not there: no ability to change meta data, h1 tags, alt/title tags, etc. Most of this data was being generated programmatically, as were the menus and navigational structure. The new site would also need to be secure. In short, what they needed was a new mobile friendly, secure website and shopping cart with a connector to their existing enterprise resource planning (ERP) software suite.
This client had 3,766 keywords ranking in the top one hundred Google SERPs: 538 keywords with page 1 results, carrying an average monthly search volume of 65,790, and 43 keywords in position one, with an average monthly search volume of 6,110. My job was to guide them through the requirements a new administration system would need to provide for implementing SEO moving forward, while protecting their existing ranking results as well as possible.
The client ultimately selected an offshore vendor who could provide the integration between their existing ERP software and a new, mobile friendly, secure shopping cart solution and website. Not being familiar with this offshore vendor, the client provided me with a recorded product demo to confirm that the administration system would provide what we needed to implement SEO. After reviewing the demo, I concluded that we had the elements necessary to assist the client with SEO improvements. We could create our own page titles, meta descriptions, and h1 tags. We could update the Google Analytics (GA) code (they were running old, outdated GA code and had no Search Console account). We had access to the images in order to add properly formatted alt/title text. As we found out later, and much too late, there was even an ability to apply a 301 redirect right on each page. But that is only part of the story.
As it turns out, the developer provided a template, shopping cart, and ERP integration. The client was tasked with moving the content (copying old site content and pasting it into a new page in the administration system), including the existing meta data. They were also tasked with entering the old site's URLs into the new page's data, page by page. This was complicated by the fact that the client could not copy and paste the old site's URLs intact. The client took this as standard operating procedure and did not notify anyone. We had originally recommended that the developer create a proper 301 redirect file, but the rollout timeline did not allow the client to let us review it for proper operation. They just launched, and that is where it all went wrong.
Upon notice that the re-built site had launched, we began manually testing the original keyword ranking results against the current ranking results, just to see what the new site looked like and confirm that all data had transferred over. Every ranking result still in the Google SERPs returned a 404 response code except for the home page. As it turns out, the old site's URLs were created with .html extensions and the new URLs were not. The administration system simply did not allow the old URL to be pasted into the 301 redirect field supplied, so the client had pasted the old URL in without the .html extension, assuming this was standard operating procedure.
After much internal discussion, we discovered that if you removed the .html extension, the pages would redirect correctly to the secure version of the new URL, in most cases. In some cases, however, the old URL without the .html extension would redirect to a new, very non search engine friendly URL containing a query string that we had never seen before. On further examination, we found that this unknown URL was being generated by the navigation in the main menu. So, in most cases, we had a one to one redirect from the old URL (with the .html extension removed) to the new secure, search engine friendly URL, and the main navigation led to the same content under the new non-friendly, query string URL.
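The workaround we stumbled on by hand can be sketched as a small script. This is not the client's actual tooling, and the domain shown is a hypothetical placeholder; it simply reproduces the transformation we applied to each ranked URL: strip the legacy .html extension and force https before testing the redirect.

```python
# Sketch (assumed workflow, hypothetical URLs): normalize old ranked URLs the
# way we did by hand -- drop the trailing .html extension and upgrade to https.
from urllib.parse import urlsplit, urlunsplit


def normalize_old_url(url: str) -> str:
    """Drop a trailing .html extension and force the https scheme."""
    scheme, netloc, path, query, frag = urlsplit(url)
    if path.endswith(".html"):
        path = path[: -len(".html")]  # the 301 tool would not accept .html
    return urlunsplit(("https", netloc, path, query, frag))


# Example with a placeholder domain (the client's real domain is withheld):
print(normalize_old_url("http://example.com/widgets/blue-widget.html"))
```

With a list of ranked URLs, each normalized URL could then be requested to confirm it resolves with a single 301 hop rather than a 404.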
Duplicate content? Did the rel=canonical tag get placed, you may ask? Correctly? No. The rel=canonical tag on the search engine friendly, redirected URL pointed to the new non-search engine friendly URL containing the query string. On examining the non-friendly page's rel=canonical tag, we discovered that it referenced an entirely different URL: one containing the category and not the query string. So one piece of content was being served at three different URLs, all with improperly set rel=canonical tags.
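For reference, the correct setup is for every variant of a page to carry an identical canonical tag pointing at the single preferred, secure URL (the domain and paths below are hypothetical placeholders, not the client's):

```html
<!-- Whether the visitor lands on the friendly URL, the query string URL, or the
     category URL, the canonical tag should name ONE preferred URL on all three: -->
<link rel="canonical" href="https://example.com/widgets/blue-widget" />
```

When the three versions instead point at each other, or at different URLs, search engines receive conflicting signals about which page to index.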
Next, we found that all bots had been disallowed in the robots.txt file. Then we checked the activity in GA. The client was still receiving visits from all sources but was recording zero conversions. In addition, the client wanted us to push the crawl and indexing, which requires Google's Search Console. The problem here was that the existing GA code was old and a Search Console verification code had never been placed on the site. This was one of those items the client could not change, for reasons never disclosed.
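The robots.txt difference is one character, which is part of why this mistake survives launches. A sketch of what was live versus what should have been live:

```
# What launched (typically left over from the staging site) -- blocks everything:
User-agent: *
Disallow: /

# What a live site normally wants -- an empty Disallow permits all crawling:
User-agent: *
Disallow:
```

A disallow-all file left over from development is one of the most common, and most damaging, launch-day oversights.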
Fortunately, the client had taken our recommendation to update their GA code to the latest version. On their own, they had also added Google Tag Manager. Oops! Possibly double-firing GA code? With Google Tag Manager and the updated asynchronous GA code in place, we were able to create a new, secure (https vs. http) Search Console account for the client, and then discovered there was no .xml sitemap to submit for a requested crawl.
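The double-firing risk looks like this: a hard-coded asynchronous analytics.js snippet on the page plus a GA pageview tag in Tag Manager both reporting to the same property. The snippet below is the standard analytics.js loader of that era; the property ID is a placeholder, not the client's:

```html
<!-- Hard-coded asynchronous analytics.js snippet (UA-XXXXXXX-1 is a placeholder) -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-XXXXXXX-1', 'auto');
ga('send', 'pageview');  /* pageview #1 */
</script>
<!-- If Google Tag Manager is ALSO installed with a GA pageview tag firing on
     All Pages, the same property records pageview #2, inflating sessions and
     pages per session. Keep one implementation, not both. -->
```

The fix is to remove the hard-coded snippet and let Tag Manager be the single source of GA hits (or vice versa).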
Upon notification, the client communicated with the developer and was given two .xml sitemap URLs. One worked; one did not. The working one had a single entry, which pointed to the non-working .xml sitemap. The non-working .xml sitemap did not display proper formatting when viewed in a browser, so we did not submit the provided .xml sitemap at the time.
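For context, the working file was behaving as a sitemap index: a parent file whose only job is to point at one or more child sitemaps. A minimal sketch, with hypothetical URLs, looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://example.com/sitemap-products.xml</loc>
  </sitemap>
</sitemapindex>
```

The index is only as good as its children: each child URL must return a 200 response with valid XML, which is exactly where this site was failing.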
The Final Result
We notified the client, through staged emails, of what we were finding. First came the issue of the failing redirects and our finding that removing the .html extension made them redirect properly. The client notified the developer, and the developer responded that you could not put the .html extension in the 301 redirect tool provided. Further discussion revealed that the client had already discovered this and thought it was standard operating procedure.
For some reason, the original website was deleted (big oopsy here; always have a working version ready to fall back on), so we could not pull the old URLs from it to create a new permanent 301 redirect via the .htaccess file. The resolution was to build a one to one spreadsheet matching old URLs to new URLs, pulling the landing page data from GA for the past year, so the developer could create a properly working redirect overriding the 301 redirect in the administration system that had been tasked to the client.
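On an Apache server, a redirect file built from such a spreadsheet could look roughly like the sketch below. This assumes mod_rewrite is available; the domain and paths are hypothetical, not the client's:

```apache
RewriteEngine On

# Explicit one-to-one mappings from the old-vs-new spreadsheet come first:
RewriteRule ^old-category/old-page\.html$ https://example.com/new-category/new-page [R=301,L]

# Fallback for everything else: strip the legacy .html extension and force https.
RewriteRule ^(.+)\.html$ https://example.com/$1 [R=301,L]
```

One server-level rule set like this replaces hundreds of page-by-page entries in an administration panel, and it keeps working even for old URLs nobody remembered to map.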
Problem solved, at additional cost to the client from the developer. Any existing old ranking results with an .html extension began redirecting properly, and within 14 days the ranking results were replaced with the new secure URLs, for the most part very close to the pre-existing positions. The rel=canonical tag issue was resolved in an online meeting with the web developer's sales agent and came down to user input error: there were several fields where data could be entered or selected from existing choices, and the solution required resetting these fields and clearing the cache.
The two additional versions of the friendly, secure URL promptly went away. Regarding the bot disallow in the robots.txt, upon notification, the developer quickly resolved the issue.
The issue with the GA conversion data turned out to be isolated to the client's merchant services provider, which was new and different from the old provider. No one thought to tell the merchant services provider that we needed GA code on their checkout page in order to capture the ecommerce data the client needs to make informed business decisions about their marketing efforts. We were not made aware that a new merchant services provider even existed.
Finally, we had manually created an .xml sitemap file that we wanted uploaded to the server, and we requested that the developer disable whatever was creating their non-working .xml sitemap. In further discussion with the developer's sales agent, we were told that we could not upload another .xml sitemap to the server.
After showing the developer's sales agent the results, he stated that he would look into it; however, he suggested that we view the source code. In the source code, the .xml document was properly formatted. Upon seeing this, we notified Google, via Search Console, that we did in fact have a working .xml sitemap. Over the course of several days, Google recorded that we had a working .xml sitemap and started showing URLs being indexed. However, as previously stated, the correctly formatted .xml sitemap had only one entry, pointing to the additional .xml sitemap that would not resolve in a browser but displayed properly in the source code.
This has since turned into a bigger issue, as the additional .xml sitemap began generating a 500 response code, so there is a problem with the Google agent accessing this area of the site. As of today, both .xml sitemaps are generating 500 response codes. The week prior, we had prompted a crawl using the fetch, render, and submit tool available in Google Search Console, which we believe caused the crawl and indexing of the new site.
So, in closing: when re-building your website, if it can go wrong, it will, and hopefully you will be able to avoid some of these mistakes. Blocking the bots in the robots.txt file and redirecting improperly can put you out of business online, or at least in jeopardy. If the bots cannot crawl your site, you will ultimately drop from the index, and when you drop from the index, most of your organic search visits will become non-existent, leaving only referral, direct, and other non-organic sources.
If the old results do not redirect properly, organic visitors may view your site as untrustworthy. Existing customers who have your site bookmarked may become frustrated when their bookmark does not redirect correctly. Not to mention that we had to shut down the client's PPC campaign in the meantime: clicking a paid ad and getting a 404 page not found response is not only frustrating to your visitors, it is expensive! The click costs you money and you get no return on your investment. And this is why you have us.
– Mark Gray, Senior SEO Manager