And let's be honest: in most parts of the world, internet speed isn't the biggest challenge nowadays. It's devices and their CPU limitations. Combine that with the amount of JavaScript that websites already ship, and your Lighthouse score could suffer even more.
So, your next thought might be: let's lazyload what we can, maybe even delay loading until user interaction. Would this be wise?
Missing valuable information
From a technical perspective, it's possible to lazyload anything, including Real User Monitoring providers. But not only would you be missing information; the information you'd miss might even be your most valuable, especially the UX data of the people with the slowest or worst experiences. And this is why:
Slow experiences are more likely to result in a bounce
Think about the way users behave. Pagespeed doesn't only impact SEO, but also UX and bounce rate, and it's the people with the worst experiences who are the most likely to never interact with your webpage at all.
So, loading an analytics third party on user interaction is likely to miss valuable information. Your UX or bounce rate might then look better than it is in reality. You'll never know about those users' real experiences, and your KPIs could be quite off target.
A browser won't know it needs the file in advance
Even if people do interact and then leave right away, loading a third party with a delay could mean it hasn't been downloaded and executed yet. After all, a browser doesn't know in advance that it needs the file; the download only starts upon user interaction. If you know the exact path of the resource, prefetching the file could help in this specific scenario, though.
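For example, if the RUM script always lives at the same URL (the path below is a hypothetical placeholder), a prefetch hint tells the browser to fetch it at low priority before any interaction happens:

```html
<!-- Hypothetical path; prefetch downloads the file at low priority
     so it's likely already cached by the time an interaction needs it -->
<link rel="prefetch" href="/assets/rum-snippet.js" as="script">
```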
You need a sufficient amount of pageviews
Especially if your website has a below-average session count, delaying monitoring solutions will reduce the amount of data even further. And when it comes to pagespeed monitoring, you need a sufficient number of pageviews. Otherwise, you might not be able to draw solid conclusions.
RUMvision is already optimized
Sure, less JavaScript is better. And although we're too new to appear in this performance list of analytics providers, Dynatrace does prove that there are performance offenders among performance monitoring solutions. And although the JavaScript impact of New Relic can be low, we've seen chunked HTML issues on websites using New Relic.
Nevertheless, we already took great care to do things differently.
We are using modern browser APIs
Most browsers do an awesome job when it comes to monitoring performance. Safari is a bit behind, though. Most other browsers allow tools to fetch performance information through browser APIs, even when events such as TTFB happened a while ago. This means that analytics scripts using these APIs don't need to be inlined, nor do they need the highest priority.
Moreover, our monitoring solution uses Google's web-vitals library as its foundation. That means some metric data is already sent with a delay. LCP, for example, is only finalized upon user interaction or when the page is hidden.
Dynamic scripts are asynchronous by default
First of all, our script doesn't need to be inlined. Well, besides the small snippet responsible for injecting the actual script. And browsers treat dynamically injected scripts as async by default. So, no harm there.
But maybe you want to delay our snippet a bit more, just to be sure that your own files are prioritized. You could then move the snippet to the bottom of your source code, so that browsers can't detect and parse it right away. We don't advise using additional techniques such as setTimeout, though.
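The injection pattern looks like the classic embed snippet below (the URL is a placeholder, not our actual endpoint). Because the script element is created dynamically, the browser downloads it without blocking the HTML parser:

```html
<script>
  (function () {
    // Placeholder URL; a dynamically injected script behaves as
    // async by default, so it won't block parsing or rendering
    var script = document.createElement('script');
    script.src = '/assets/rum.js';
    document.head.appendChild(script);
  })();
</script>
```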
Trigger types in Google Tag Manager
In Google Tag Manager, one can use different trigger types, Pageview being the one with the highest priority. There are also DOM Ready and Window Loaded. In general, it is advised to use Window Loaded for non-critical and non-analytics third parties, such as reviews, your chat widget, and the Facebook pixel.
However, when it comes to analytics providers, you generally want trustworthy data. You should then use Pageview. A/B testing best practices are a different topic altogether.
Configure your snippet
The number one cause of performance issues? Loading a whole library while only parts of it are needed. Anyone who has looked into the Coverage tab of Chrome DevTools will confirm this (don't do it though; it's no fun at all to see those amounts of unused bytes).
Unfortunately, most third-party marketing tools, such as cookie notices and chat widgets, but even monitoring tools, do exactly this. We thought we should be more considerate about it. So, the JavaScript that you embed will only contain what you configured in your domain's snippet configuration. As a matter of fact, you'll see exactly how many bytes are involved when configuring your snippet.
We already do conditional loading ourselves
We really like this smart little trick: because website owners can insert their URLs or URL rules (such as regular expressions), we know exactly when our script is needed. Our script does need to be loaded a first time to retrieve this information.
But after that, when your users navigate to a page that doesn't match your URL rules, the snippet won't embed the bigger JavaScript file. Smart, right?
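A minimal sketch of that idea (assumed logic, not our actual implementation): the tiny snippet checks the current path against the configured rules and only injects the bigger file on a match:

```javascript
// Sketch of conditional loading: only fetch the full RUM script
// when the current path matches one of the configured URL rules.
// The rules below are hypothetical examples, mixing a plain prefix
// rule with a regular-expression rule.
function shouldLoadRum(path, rules) {
  return rules.some((rule) =>
    rule instanceof RegExp ? rule.test(path) : path.startsWith(rule)
  );
}

const rules = ['/checkout', /^\/product\/\d+/];

console.log(shouldLoadRum('/product/42', rules)); // true
console.log(shouldLoadRum('/blog/some-post', rules)); // false
```

Keeping the matching logic in the small snippet means pages outside the rules never pay the download and parse cost of the full script.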
Conclusion
From a technical and Lighthouse perspective, it won't hurt at all to lazyload RUM scripts. In the end, performance is the sum of multiple optimizations.
However, from an analytics perspective, it wouldn't be wise to do so. Truth be told, though, there can be quite some differences between analytics providers, so pick them carefully.