To tackle the performance and infrastructure issues caused by the increasing number of Single Page Applications, Google engineers announced a small revolution in SEO at Google I/O last year.
Webmasters are encouraged to use dynamic rendering: serving different content based on user-agent detection (search engines get a fully server-rendered HTML version, while clients get a hybrid HTML/JS or a full JS version).
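To make the mechanism concrete, here is a minimal sketch of user-agent based dynamic rendering as a small Python/Flask handler. The bot list and the render_full_html / render_hybrid helpers are hypothetical placeholders standing in for a real SSR pipeline, not our production setup.

```python
# Minimal dynamic-rendering sketch (illustrative only).
from flask import Flask, request

app = Flask(__name__)

# Crude list of crawler signatures; a real setup would use a maintained list.
BOT_SIGNATURES = ("googlebot", "bingbot", "duckduckbot", "yandexbot")

def is_search_engine(user_agent: str) -> bool:
    """Naive user-agent detection for known crawlers."""
    ua = (user_agent or "").lower()
    return any(bot in ua for bot in BOT_SIGNATURES)

def render_full_html(path: str) -> str:
    """Placeholder for a fully server-rendered page with no JS bundle."""
    return f"<html><body><!-- full SSR of /{path} --></body></html>"

def render_hybrid(path: str) -> str:
    """Placeholder for server-rendered HTML plus the client JS bundle."""
    return (f"<html><body><!-- SSR of /{path} -->"
            "<script src='/bundle.js'></script></body></html>")

@app.route("/<path:path>")
def serve(path: str):
    ua = request.headers.get("User-Agent", "")
    if is_search_engine(ua):
        return render_full_html(path)  # bots: 100% HTML, no JS
    return render_hybrid(path)         # users: HTML + JS
```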
At PriceRunner, where we are running React Js, we have been really excited by the performance improvements this policy change opens up. The logical improvements enabled by dynamic rendering would be:
a. For search engines (SE): serving a 100% server-rendered version, with no JS.
b. For users: serving an above-the-fold server render + JS.
We are in the habit of testing Google recommendations before scaling them, so this time, we tested alternative b.:
Can improving performance for clients (by server rendering solely above the fold) negatively affect SEO?
Test Variant (Y): Clients only get a server render of the above-the-fold content + JS
Control (X1): Clients get our standard hybrid rendering*
SE bots: Bots get the same version for both control and variant: our standard hybrid rendering*
*Our standard hybrid rendering means: complete server rendering of the page + JS
Setup of the experiment:
Test variant for URLs representing ~ 50% of visits
Control group for the rest of the URLs, representing ~ 50% of visits
Search engines get the standard hybrid rendering
Scope: DK, category TV(2). We split-tested product pages based on the last digit of their product ID (as sketched below)
and created two groups of equivalent traffic with a similar distribution of traffic between pages within each group.
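A minimal sketch of how such a split can be assigned, assuming the bucketing rule is the last digit of the product ID (the exact digit split below is an assumption for illustration):

```python
# Illustrative split-test bucketing: assign product pages to variant
# or control from the last digit of their product ID.
def bucket(product_id: int) -> str:
    """Return 'variant' or 'control' for a given product ID."""
    last_digit = product_id % 10
    # Assumption: digits 0-4 -> variant (above-the-fold SSR + JS),
    #             digits 5-9 -> control (standard hybrid rendering).
    return "variant" if last_digit < 5 else "control"

# Example: split a handful of made-up product IDs into the two groups.
groups = {"variant": [], "control": []}
for pid in [100234, 100237, 551002, 551009]:
    groups[bucket(pid)].append(pid)
print(groups)
```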
Output of the experiment:
Data available as attached
Analysis of the experiment:
We used Google's CausalImpact to estimate the causal effect of our intervention on our test time series (test variant y).
The model assumes that the outcome of the test time series can be explained in terms of a set of control time series not affected by the intervention (our control group, covariate x1).
One month into the test, the predicted time series (Predicted) outperforms our test series (y) by 6.1%, with a standard deviation of 2.5% and a posterior probability of a causal effect of 99.1%.
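For reference, here is a minimal sketch of how such an analysis can be run with the Python port of CausalImpact (pycausalimpact); the file name, column names and period boundaries are placeholders, not the actual experiment data.

```python
# Sketch of the CausalImpact analysis using the Python port
# (pycausalimpact) of Google's R package. The file name, column
# names and period boundaries are placeholders, not our real data.
import pandas as pd
from causalimpact import CausalImpact

# Daily organic sessions for both groups; the first column must be the
# response series (test variant y), the remaining columns the covariates
# (here the control group x1).
data = pd.read_csv("organic_sessions.csv")
data = data[["variant_sessions", "control_sessions"]]

# Rows 0-44: pre-intervention period; rows 45-74: post-intervention.
pre_period = [0, 44]
post_period = [45, 74]

ci = CausalImpact(data, pre_period, post_period)
print(ci.summary())  # average effect, confidence interval, posterior probability
ci.plot()            # observed series vs. counterfactual prediction
```

The summary reports the average and cumulative effect of the intervention together with the posterior probability of a causal effect, which is where figures like the 6.1% loss and the 99.1% probability above come from.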
So in short, we can assume we would lose about 6% of organic traffic by scaling the test variant up to our entire site, at least during the first month. We will keep the test running to see whether the cost dampens in the second month. We will also run a control experiment to check that we get the same negative trend.
Server-rendering only the above-the-fold content for clients may still not be a bad idea for mobile users. We might be able to compensate for the organic traffic loss by providing a major performance improvement to users. Maybe the subject of a next blog post.
Finally, we are going to test:
– the other alternative (a.), serving only 100% HTML for bots (by removing the initial state)
– the combination of a. and b., which might surprise us.