One of the reasons we created RUMvision is that lab tests and many other monitoring tools don't take certain things into account. The issue in this case is a mistake that is easy to avoid, and if it does slip through, just as easy to rectify. All you need is a real user monitoring (RUM) tool that gives you insights right after deployment.
Here's what we came across
One of our users advertises and runs campaigns across different channels. Every visitor coming in through one of those links lands on a URL with a query string.
Here's the catch though: while the contents of the page may be identical, the same URL can be requested with additional and, more importantly, different query strings. For example:
- webshop.html?gclid=abc
- webshop.html?fbclid=xyz
And this is frequently where problems start. A specific advertisement or campaign won't change the contents of the requested page, but the caching layer doesn't know that: it treats each combination of URL and query string as a unique URL.
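To make that concrete, here is a minimal sketch of the default behavior, assuming a cache keyed on the full URL, as most CDNs and proxies do out of the box (the names and setup are illustrative):

```typescript
// Illustrative in-memory cache keyed on the full URL, query string included.
const cache = new Map<string, string>();

function cacheKey(rawUrl: string): string {
  // The query string is part of the key, so every distinct set of
  // parameters becomes a separate cache entry.
  return new URL(rawUrl).toString();
}

function fetchWithCache(rawUrl: string, origin: (url: string) => string): string {
  const key = cacheKey(rawUrl);
  const hit = cache.get(key);
  if (hit !== undefined) return hit; // cache hit: fast TTFB
  const body = origin(rawUrl);       // cache miss: slow origin round trip
  cache.set(key, body);
  return body;
}

// Same page, same HTML, yet two cache misses and two origin hits:
console.log(
  cacheKey("https://example.com/webshop.html?gclid=abc") ===
  cacheKey("https://example.com/webshop.html?fbclid=xyz")
); // false
```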
An incorrectly configured cache drives up your Time to First Byte (TTFB), which hurts user engagement and conversion. It also drags down your Core Web Vitals.
How did we notice this?
As indicated earlier, lab tests and similar tools don't take many of these things into account. That is why we offer 30+ filters and dimensions to spot bottlenecks like this one.
One of these filters is "TTFB by GET parameters", which shows which query string parameters cause a high TTFB. In the screenshot below you can see that Facebook Ad clicks result in a worse experience.
Let's break it down
The average TTFB is 285ms, which is quite good: about 85% of visitors have a good experience (green). For visitors entering without a query string, it is 223ms, even better. What you see next is interesting: both a gclid and an fbclid appear, so both are ads that ended up on the same page. However, the difference in TTFB is huge. The fbclid visitors have an almost 6 times worse experience (2116ms) than those coming in through a gclid (366ms), and an almost 10 times worse one than visitors entering the page without a query string.
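For illustration, this kind of grouping boils down to bucketing TTFB samples per GET parameter and averaging each bucket. The sketch below is hypothetical and not RUMvision's actual implementation:

```typescript
interface Sample {
  url: string;  // page URL as collected from the visitor
  ttfb: number; // Time to First Byte in milliseconds
}

function averageTtfbByParam(samples: Sample[]): Map<string, number> {
  const buckets = new Map<string, number[]>();
  for (const { url, ttfb } of samples) {
    const keys = [...new URL(url).searchParams.keys()];
    // Visits without any query string form their own bucket.
    for (const key of keys.length > 0 ? keys : ["(none)"]) {
      const bucket = buckets.get(key) ?? [];
      bucket.push(ttfb);
      buckets.set(key, bucket);
    }
  }
  const averages = new Map<string, number>();
  for (const [key, values] of buckets) {
    averages.set(key, values.reduce((a, b) => a + b, 0) / values.length);
  }
  return averages;
}

// With data like the above, this could yield something like:
// Map { "(none)" => 223, "gclid" => 366, "fbclid" => 2116 }
```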
How do I avoid this?
Quite simple, actually: configure your cache to ignore query string parameters that don't affect the contents. Tracking parameters never change the HTML, whereas pagination, search, or filtering parameters do produce different HTML and therefore must remain part of the cache key.
When setting up your caching strategy, rule out the following parameters (see the sketch after the list):
- UTM: utm_source, utm_medium, utm_campaign, utm_id, utm_term
- Facebook: fbclid
- Google Analytics: _gl
- Google Ads: gclid, gad_source, dclid, wbraid, gbraid, srsltid
- Search Ads: gclsrc
- Microsoft: msclkid
- Pinterest: epik, pp
- Other: channable, awc (awin.com), dm_i (Dotdigital email)
These are the most standard ones you will encounter, but there are a few more.
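Here is a minimal sketch of how you could normalize the cache key yourself, for example in an edge worker or caching proxy where you control the key; the function name and setup are illustrative, and many CDNs offer equivalent query-string settings natively:

```typescript
// Tracking parameters that never affect the HTML (the list from above).
const TRACKING_PARAMS = new Set([
  "utm_source", "utm_medium", "utm_campaign", "utm_id", "utm_term",
  "fbclid", "_gl",
  "gclid", "gad_source", "dclid", "wbraid", "gbraid", "srsltid",
  "gclsrc", "msclkid", "epik", "pp",
  "channable", "awc", "dm_i",
]);

function normalizedCacheKey(rawUrl: string): string {
  const url = new URL(rawUrl);
  // Snapshot the keys first, since we delete while iterating.
  for (const name of [...url.searchParams.keys()]) {
    if (TRACKING_PARAMS.has(name)) url.searchParams.delete(name);
  }
  // Functional parameters (pagination, search, filters) are kept.
  return url.toString();
}

// Both ad clicks now resolve to the same cached page:
normalizedCacheKey("https://example.com/webshop.html?gclid=abc");
normalizedCacheKey("https://example.com/webshop.html?fbclid=xyz");
// → "https://example.com/webshop.html" in both cases
```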
How real user monitoring comes into play
The problem itself can easily be prevented without real user monitoring, but people make mistakes. That's no problem when you can correct them quickly. But if you depend entirely on Google's data, which is aggregated over a rolling 28-day window, the mistake may already have cost you a lot of money. Put simply, you are spending money on visitors who are going to have a bad experience, which makes them a lot less likely to convert.
With real user monitoring, you get insight into your data immediately, without having to wait for it. Of course, there have to be visitors first, but you get the point. This allows you to verify that everything still works as it should after a deployment; in this case, you could immediately adjust your caching strategy instead of "throwing money away" for 28 days.