9 motives that led to creating RUMvision

Summary: We're here to chase pagespeed. We're not the only ones, though. We'll explain why we're taking the leap of introducing yet another pagespeed and RUM solution.

  • by Erwin Hofman
  • Published
  • Reading time ± 5 minutes
  • Core Web Vitals RUM

We should first emphasize that there is a difference between lab data and field data. Lab data won't represent all of your visitors, nor their conditions. Unless you have 1000 different lab setups, and even then you might be missing nuances. That's why our focus started with field data. And because you're then monitoring real user experiences, it's called Real User Monitoring, hence RUM.

There are other tools out there already. Here are 9 motives that led us to create our own:

Because RUM is field data

We like green Lighthouse scores and we can not lie. But there is no direct relation between your Lighthouse score and SEO. That's because Lighthouse is lab data.

Conversion comes from real users, though. So real user experience is what you want to test and improve, ideally boosting conversion along the way. Which explains our focus on RUM.


Not everyone knows the difference between lab and field data, though. Some might not even be aware there is such a thing as field data or RUM. They might only be using Lighthouse, which became very well known thanks to PageSpeed Insights. So while most clients were already doing synthetic testing to get lab data, they typically weren't tracking real user experiences yet.

Other clients actually had an account with tools that could monitor both lab data and field data. But the latter was often too expensive, so they weren't using it. And that's a shame, as field data actually comes with the best insights into real users and varying conditions.

28 day delay

Without RUM, there is no way to track improvements after recommendations have been implemented. Although Google's Core Web Vitals data could be considered RUM as well. So you might actually have field data or RUM already without knowing it, as Google is using it for ranking.

But Google's data comes with a massive delay: it covers a rolling 28-day window. So you would only know for sure whether a deployment went OK or went south after 28 days of new data. And even then it's hard to tell, as a lot of other things, such as campaigns or other deploys, could have happened within those 28 days as well.

So, was the win really the result of that specific deployment? And if things regressed, are you sure the development work, or a new hosting and caching configuration, is the one to blame?

Not ready yet

Some merchants weren't tracking real user experiences yet because they thought they weren't ready for that step. Sure, the data can be confronting. But you're always ready, as your conversion is coming from real users as well.

We even heard this from an international merchant in the nutrition business. Within a few hours of implementing our code, we could already show them a major bottleneck that nobody had spotted before: their Lighthouse setup wasn't testing that specific condition, and they had no idea they should be testing for it.

Too expensive

Most RUM solutions out there are quite expensive. That can make RUM feel like an elite solution, giving merchants and agencies the idea that you should look at it regularly to make it worthwhile, or that it's only for the happy few because of the pricing.

The same applies to product prices, by the way: raise your prices and chances are you will pass Core Web Vitals. Your shop then only remains affordable for visitors who were also willing to spend big bucks on the newest devices, and those devices deliver a better user experience. You're just scaring everyone else away.

Missing information

Although not everyone was using RUM yet, some actually had Cloudflare Insights or New Relic. But those weren't adding many valuable dimensions to the mix, such as the impact of ads and campaigns, or unique versus successive pageviews. As a result, you did have overall data, but you didn't know which conditions to look into more closely.

And some merchants that were running New Relic weren't actually using it themselves.

Impact on pagespeed

Talking about New Relic: we've had cases where it actually hurt performance. New Relic involves inlining a large chunk of JavaScript, and then the location of the snippet becomes very important (New Relic's docs do say where the snippet should go, although they don't explain why). Besides resource priority, it can even lead to chunked HTML with all its consequences. And New Relic doesn't want you to async or defer it.

But that approach is stuck in 2017. It's perfectly fine to collect data after pageload, without hurting performance yourself. We saw the same within Shopify shops and their default performance tracking: it also involves quite a bit of inlined JavaScript, in the wrong location too.

We didn't want to do this to our clients, as our goal is to improve performance rather than make it worse.

Too much work

There are fairly easy ways to start monitoring yourself using existing analytics tooling, for example with Google's own web-vitals library. And free of cost too, if you exclude the time it takes to implement.
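As a rough illustration, this is what wiring up the web-vitals library can look like, following the batching pattern from its documentation. The `/analytics` endpoint is a placeholder, and this sketch assumes a bundler that resolves the `web-vitals` package:

```javascript
// Minimal sketch of collecting Core Web Vitals field data yourself.
import { onCLS, onINP, onLCP } from 'web-vitals';

const queue = new Set();

// Each callback fires off the critical path, so collecting
// the metrics doesn't slow down the page itself.
function addToQueue(metric) {
  queue.add(metric);
}

onCLS(addToQueue);
onINP(addToQueue);
onLCP(addToQueue);

// Flush when the page is hidden; sendBeacon survives page unload.
function flushQueue() {
  if (queue.size === 0) return;
  const body = JSON.stringify([...queue]);
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', { body, method: 'POST', keepalive: true });
  }
  queue.clear();
}

addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') flushQueue();
});
```

This gets you the raw metrics, but none of the dimensions (campaigns, cache state, connection type) discussed further down; that part is still up to you.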

Clients expect reports within a few weeks, though. And when helping them out with recommendations, we first had to wait for a product owner or marketer to set things up themselves. That took way too much time.

Especially in larger organizations, consultants won't get access to any tooling to implement something themselves. Another issue is that not all website or webshop owners are eager to give external people access to analytics reports. Also note that when you collect a lot of data, it won't stay free. So you might want to apply sampling or restrict URLs yourself.
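The sampling mentioned above can be as simple as deciding once per pageview whether to report anything at all. A minimal sketch, where `SAMPLE_RATE` and `shouldSample` are illustrative names rather than any real API:

```javascript
// Hypothetical per-pageview sampling to keep data volume
// (and analytics cost) under control.
const SAMPLE_RATE = 0.1; // keep roughly 10% of pageviews

function shouldSample(rate = SAMPLE_RATE) {
  // Math.random() returns a value in [0, 1), so a rate of 1
  // always samples and a rate of 0 never does.
  return Math.random() < rate;
}

// Decide once per pageview and reuse the outcome for every metric,
// so a pageview is either fully tracked or fully dropped.
const isSampled = shouldSample();
```

Deciding once per pageview (rather than per metric) matters: otherwise you'd end up with pageviews that report LCP but not CLS, which skews any per-pageview analysis.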

Not the right data

And even when merchants were using the Google Analytics approach above or other paid solutions, they were sometimes missing data and nuances. We saw this in a case where the LCP metric regressed after implementing best practices. They asked us why. We could only guess, as several other things could have happened as well:

  • an increased share of unique pageviews without cached resources;
  • new newsletter or search campaigns and ads that were started;
  • or maybe more users on slower connections who now became patient enough to wait for the page to load, dragging down the average.

Other tools are missing these nuances. But to us, they are the most important ones. It's what differentiates us from the competition, making our RUM solution one of a kind.
