# The Fight For The Main Thread

Geoff Graham, 2023-10-24 (updated 2024-06-12)

This article is sponsored by SpeedCurve.
Performance work is one of those things, as they say, that ought to happen in development. You know, have a plan for it and write code that's mindful about adding extra weight to the page.

But not everything about performance happens directly at the code level, right? I'd say many — if not most — sites and apps rely on some number of third-party scripts where we might not have any influence over the code. Analytics is a good example. Writing a hand-spun analytics tracking dashboard isn't what my clients really want to pay me for, so I'll drop in the ol' Google Analytics script and maybe never think of it again.

That's one example and a common one at that. But what's also common is managing multiple third-party scripts on a single page. One of my clients is big into user tracking, so in addition to a script for analytics, they're also running third-party scripts for heatmaps, cart abandonments, and personalized recommendations — typical e-commerce stuff. All of that is dumped on any given page in one fell swoop courtesy of Google Tag Manager (GTM), which allows us to deploy and run scripts without having to go through the pain of re-deploying the entire site.

As a result, adding and executing scripts is a fairly trivial task. It is so effortless, in fact, that even non-developers on the team have contributed their own fair share of scripts, many of which I have no clue what they do. The boss wants something, and it's going to happen one way or another, and GTM facilitates that work without friction between teams.
All of this adds up to what I often hear described as a "fight for the main thread." That's when I started hearing more performance-related jargon, like web workers, Core Web Vitals, deferring scripts, and using pre-connect, among others. But what I've started learning is that these technical terms make up an arsenal of tools to combat performance bottlenecks.

The real fight, it seems, is evaluating our needs as developers and stakeholders against a user's needs, namely, the need for a fast and frictionless page load.
## Fighting For The Main Thread

We're talking about performance in the context of JavaScript, but there are lots of things that happen during a page load. The HTML is parsed. Same deal with CSS. Elements are rendered. JavaScript is loaded, and scripts are executed.

All of this happens on the main thread. I've heard the main thread described as a highway that gets cars from Point A to Point B; the more cars that are added to the road, the more crowded it gets and the more time it takes for cars to complete their trip. That's accurate, I think, but we can take it a little further because this particular highway has just one lane, and it only goes in one direction. My mind thinks of San Francisco's Lombard Street, a twisty one-way path of a tourist trap on a steep decline.
[Image credit: Brandon Nelson on Unsplash.]

The main thread may not be that curvy, but you get the point: there's only one way to go, and everything that enters it must go through it.
JavaScript operates in much the same way. It's "single-threaded," which is how we get the one-way street comparison. I like how Brian Barbour explains it:

> "This means it has one call stack and one memory heap. As expected, it executes code in order and must finish executing a piece of code before moving on to the next. It's synchronous, but at times that can be harmful. For example, if a function takes a while to execute or has to wait on something, it freezes everything up in the meantime."
>
> — Brian Barbour

So, there we have it: a fight for the main thread. Each resource on a page is a contender vying for a spot on the thread and wants to run first. If one contender takes its sweet time doing its job, then the contenders behind it in line just have to wait.
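To make that concrete, here is a minimal sketch of a single long, synchronous piece of work freezing everything else. The `#buy-button` selector, the `blockFor` helper, and the one-second figure are all made up for illustration.

```js
// A contrived helper that hogs the main thread for roughly `ms` milliseconds.
// While this loop spins, clicks, scrolling, and rendering all have to wait.
function blockFor(ms) {
  const start = performance.now();
  while (performance.now() - start < ms) {
    // Busy-wait: nothing else can run on the main thread until this returns.
  }
}

document.querySelector("#buy-button")?.addEventListener("click", () => {
  blockFor(1000); // one second of blocking = one very unresponsive page
  console.log("Finally free to respond again");
});
```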
## Monitoring The Main Thread

If you're like me, I immediately reach for DevTools and open the Lighthouse tab when I need to look into a site's performance. It covers a lot of ground, like reporting stats about a page's load time that include Time to First Byte (TTFB), First Contentful Paint (FCP), Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and so on.

[Figure: "Hey, look at that — great job, team!"]

I love this stuff! But I also am scared to death of it. I mean, this is stuff for back-end engineers, right? A measly front-end designer like me can be blissfully ignorant of all this mumbo-jumbo.

Meh, untrue. Like accessibility, performance is everyone's job because everyone's work contributes to it. Even the choice to use a particular CSS framework influences performance.
### Total Blocking Time

One thing I know would be more helpful than a set of Core Web Vitals scores from Lighthouse is knowing how much the main thread is blocked by long tasks between the First Contentful Paint (FCP) and the Time to Interactive (TTI), a metric known as the Total Blocking Time (TBT). You can see that Lighthouse does indeed provide that metric. Let's look at it for a site that's much "heavier" than Smashing Magazine.
There we go. The problem with the Lighthouse report, though, is that I have no idea what is causing that TBT. We can get a better view if we run the same test in another service, like SpeedCurve, which digs deeper into the metric. We can expand the metric to glean insights into what exactly is causing traffic on the main thread.

That's a nice big view and is a good illustration of TBT's impact on page speed. The user is forced to wait a whopping 4.1 seconds between the time the first significant piece of content loads and the time the page becomes interactive. That's a lifetime in web seconds, particularly considering that this test is based on a desktop experience on a high-speed connection.
One of my favorite charts in SpeedCurve is this one showing the distribution of Core Web Vitals metrics during render. You can see the delta between contentful paints and interaction!

### Spotting Long Tasks
What I really want to see is JavaScript that takes more than 50ms to run. These are called long tasks, and they put the most strain on the main thread. If I scroll down further into the report, all of the long tasks are highlighted in red.
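Outside of a monitoring service, you can also surface long tasks yourself with the browser's Long Tasks API. Here is a minimal sketch using `PerformanceObserver`; the console logging is just illustrative, and support is currently limited to Chromium-based browsers.

```js
// Log every task that blocks the main thread for more than 50ms.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.warn(
      `Long task: ${Math.round(entry.duration)}ms`,
      entry.attribution?.[0]?.name ?? "unknown source"
    );
  }
});

// `buffered: true` also reports long tasks that happened before this ran.
observer.observe({ type: "longtask", buffered: true });
```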
Another way I can evaluate scripts is by opening up the Waterfall View. The default view is helpful to see where a particular event happens in the timeline.

But wait! This report can be expanded to see not only what is loaded at the various points in time but whether they are blocking the thread and by how much. Most important are the assets that come before the FCP.

### First & Third Party Scripts
I can see right off the bat that Optimizely is serving a render-blocking script. SpeedCurve can go even deeper by distinguishing between first- and third-party scripts.

That way, I can see more detail about what's happening on the Optimizely side of things.

### Monitoring Blocking Scripts
With that in place, SpeedCurve actually lets me track all the resources from a specific third-party source in a custom graph that offers me many more data points to evaluate. For example, I can dive into scripts that come from Optimizely with a set of custom filters to compare them with overall requests and sizes.

This provides a nice way to compare the impact of different third-party scripts that are responsible for blocking and long tasks, such as how much time those long tasks take up.
Or perhaps which of these sources are actually render-blocking:

These are the kinds of tools that allow us to identify bottlenecks and make a case for optimizing them or removing them altogether. SpeedCurve allows me to monitor this over time, giving me better insight into the performance of those assets.

### Monitoring Interaction to Next Paint
There's going to be a new way to gain insights into main thread traffic when Interaction to Next Paint (INP) is released as a new Core Web Vitals metric in March 2024. It replaces the First Input Delay (FID) metric.

What's so important about that? Well, FID has been used to measure load responsiveness, which is a fancy way of saying it looks at how fast the browser responds to the first user interaction on the page. And by interaction, we mean some action the user takes that triggers an event, such as a `click`, `mousedown`, `keydown`, or `pointerdown` event. FID looks at the time the user sparks an interaction and how long the browser takes to process — or respond to — that input.

FID might easily be overlooked when trying to diagnose long tasks on the main thread because it looks at the amount of time a user spends waiting after interacting with the page rather than the time it takes to render the page itself. It can't be replicated with lab data because it's based on a real user interaction. That said, FID is correlated to TBT in that the higher the FID, the higher the TBT, and vice versa. So, TBT is often the go-to metric for identifying long tasks because it can be measured with lab data as well as real-user monitoring (RUM).
But FID is fraught with limitations, the most significant perhaps being that it's only a measure of the first interaction. That's where INP comes into play. Instead of measuring the first interaction and only the first interaction, it measures all interactions on a page. Jeremy Wagner has a more articulate explanation:

> "The goal of INP is to ensure the time from when a user initiates an interaction until the next frame is painted is as short as possible for all or most interactions the user makes."
>
> — Jeremy Wagner

Some interactions are naturally going to take longer to respond than others. So, we might think of FID as merely a first impression of responsiveness, whereas INP is a more complete picture. And like FID, the INP score is closely correlated with TBT, but even more so, as Annie Sullivan reports:
Thankfully, performance tools are already beginning to bake INP into their reports. SpeedCurve is indeed one of them, and its report shows how its RUM capabilities can be used to illustrate the correlation between INP and long tasks on the main thread. This correlation chart illustrates how INP gets worse as the total long tasks' time increases.

What's cool about this report is that it is always collecting data, providing a way to monitor INP and its relationship to long tasks over time.
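If you want to feed your own RUM data with the same signal, Google's open-source web-vitals library already exposes INP. A minimal sketch might look like the following, where the `/rum` endpoint is a placeholder for wherever your monitoring data actually goes.

```js
import { onINP } from "web-vitals";

// Report the page's INP value to a RUM endpoint. The callback fires when the
// value is ready to be reported, typically as the page is hidden or unloaded.
onINP((metric) => {
  const body = JSON.stringify({
    name: metric.name,     // "INP"
    value: metric.value,   // interaction latency in milliseconds
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
  });

  // sendBeacon survives page unloads more reliably than fetch.
  navigator.sendBeacon("/rum", body);
});
```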
## Not All Scripts Are Created Equal

There is such a thing as a "good" script. It's not like I'm some anti-JavaScript bloke intent on getting scripts off the web. But what constitutes a "good" one is nuanced.

### Who's It Serving?

Some scripts benefit the organization, and others benefit the user (or both). The challenge is balancing business needs with user needs.
I think web fonts are a good example that serves both needs. A font is a branding consideration as well as a design asset that can enhance the legibility of a site's content. Something like that might make loading a font script or file worth its cost to page performance. That's a tough one. So, rather than fully eliminating a font, maybe it can be optimized instead, perhaps by self-hosting the files rather than connecting to a third-party domain or only loading a subset of characters.
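As a rough sketch of what that optimization could look like, here is a self-hosted, preloaded font that only covers a Latin subset. The family name and file path are made up for the example.

```html
<!-- Self-hosted, subset web font: no third-party connection required. -->
<link rel="preload" href="/fonts/example-sans-latin.woff2" as="font" type="font/woff2" crossorigin>
<style>
  @font-face {
    font-family: "Example Sans";
    src: url("/fonts/example-sans-latin.woff2") format("woff2");
    /* Only fetch the file when a character in this Latin range is used. */
    unicode-range: U+0000-00FF, U+2018-2019, U+201C-201D;
    /* Show fallback text immediately, then swap in the web font. */
    font-display: swap;
  }
</style>
```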
Analytics is another difficult choice. I removed analytics from my personal site long ago because I rarely, if ever, looked at them. And even if I did, the stats were more of an ego booster than insightful details that helped me improve the user experience. It's an easy decision for me, but not so easy for a site that lives and dies by reports that are used to identify and scope improvements.

If the script is really being used to benefit the user at the end of the day, then yeah, it's worth keeping around.

### When Is It Served?

A script may very well serve a valid purpose and benefit both the organization and the end user. But does it need to load first before anything else? That's the sort of question to ask when a script might be useful, but can certainly jump out of line to let others run first.
I think of chat widgets for customer support. Yes, having a persistent and convenient way for customers to get in touch with support is going to be important, particularly for e-commerce and SaaS-based services. But does it need to be available immediately? Probably not. You'll probably have a greater case for getting the site to a state that the user can interact with compared to getting a third-party widget up front and center. There's little point in rendering the widget if the rest of the site is inaccessible anyway. It is better to get things moving first by prioritizing some scripts ahead of others.
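One hedged way to act on that is to hold the widget back until the browser is idle or the user shows intent. The sketch below assumes a made-up vendor URL and a `#chat-launcher` button; swap in whatever your support vendor actually provides.

```js
// Inject the chat widget lazily instead of letting it compete with the initial render.
function loadChatWidget() {
  if (document.querySelector("#chat-widget-script")) return; // only load once
  const script = document.createElement("script");
  script.id = "chat-widget-script";
  script.src = "https://chat.example.com/widget.js"; // placeholder vendor URL
  script.async = true;
  document.head.append(script);
}

// Option 1: wait until the main thread has settled down after load.
if ("requestIdleCallback" in window) {
  requestIdleCallback(loadChatWidget, { timeout: 10000 });
} else {
  window.addEventListener("load", () => setTimeout(loadChatWidget, 3000));
}

// Option 2: or only load it the moment the user actually asks for help.
document
  .querySelector("#chat-launcher")
  ?.addEventListener("click", loadChatWidget, { once: true });
```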
### Where Is It Served From?

Just because a script comes from a third party doesn't mean it has to be hosted by a third party. The web fonts example from earlier applies. Can the font files be self-hosted rather than needing to establish another outside connection? It's worth asking. There are self-hosted alternatives to Google Analytics, after all. And even GTM can be self-hosted! That's why grouping first- and third-party scripts in SpeedCurve's reporting is so useful: spot what is being served and where it is coming from and identify possible opportunities.
### What Is It Serving?

Loading one script can bring unexpected visitors along for the ride. I think the classic case is a third-party script that loads its own assets, like a stylesheet. Even if you think you're only loading one stylesheet — your own — it's very possible that a script loads additional external stylesheets, all of which need to be downloaded and rendered.
## Getting JavaScript Off The Main Thread

That's the goal! We want fewer cars on the road to alleviate traffic on the main thread. There are a bunch of technical ways to go about it. I'm not here to write up a definitive guide of technical approaches for optimizing the main thread, but there is a wealth of material on the topic.

I'll break down several different approaches and fill them in with resources that do a great job explaining them in full.

### Use Web Workers

A web worker, at its most basic, allows us to establish separate threads that handle tasks off the main thread. Web workers run parallel to the main thread. There are limitations to them, of course, most notably not having direct access to the DOM and being unable to share variables with other threads. But using them can be an effective way to re-route traffic from the main thread to other streets, so to speak.
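For a feel of the mechanics, here is a bare-bones sketch of handing a heavy computation to a worker. The file name `heavy-work.js`, the `#result` element, and the message shape are all just for illustration.

```js
// main.js — keep the expensive work off the main thread.
const worker = new Worker("/heavy-work.js");

worker.addEventListener("message", (event) => {
  // Workers can't touch the DOM, so the main thread handles the rendering.
  document.querySelector("#result").textContent = `Sum: ${event.data.sum}`;
});

worker.postMessage({ numbers: Array.from({ length: 1_000_000 }, (_, i) => i) });

// heavy-work.js — runs in parallel and never blocks clicks or rendering.
self.addEventListener("message", (event) => {
  const sum = event.data.numbers.reduce((total, n) => total + n, 0);
  self.postMessage({ sum });
});
```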
- Web Workers (HTML Living Standard)
- "The Difference Between Web Sockets, Web Workers, and Service Workers," Aisha Bukar
- Using Web Workers (MDN)
- "Use Web Workers to Run JavaScript Off the Browser's Main Thread," Dave Surma
- "Managing Long-Running Tasks In A React App With Web Workers," Chidi Orji
- "Exploring The Potential Of Web Workers For Multithreading On The Web," Sarah Oke Okolo
- "The Basics of Web Workers," Malte Ubl and Eiji Kitamura
### Split JavaScript Bundles Into Individual Pieces

The basic idea is to avoid bundling JavaScript as a monolithic concatenated file in favor of "code splitting," or splitting the bundle up into separate, smaller payloads to send only the code that's needed. This reduces the amount of JavaScript that needs to be parsed, which improves traffic along the main thread.
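Dynamic `import()` is the primitive most bundlers build this on. As a rough sketch, the chart module below (a made-up `./chart.js`) is only fetched, parsed, and executed when someone actually opens the dashboard tab.

```js
// Nothing chart-related is downloaded, parsed, or executed up front.
document.querySelector("#dashboard-tab")?.addEventListener("click", async () => {
  // A bundler splits ./chart.js (and its dependencies) into its own file,
  // which is fetched on demand the first time this handler runs.
  const { renderChart } = await import("./chart.js");
  renderChart(document.querySelector("#chart-container"));
});
```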
- "Reduce JavaScript Payloads With Code Splitting," Houssein Djirdeh and Jeremy Wagner
- "What Is Code Splitting?," Next.js
- "Improving JavaScript Bundle Performance With Code-Splitting," Adrian Bece
- "Code Splitting With Vanilla JS," Chris Ferdinandi
- "Supercharged Live Stream Blog — Code Splitting," Dave Surma
### Async or Defer Scripts

Both are ways to load JavaScript without blocking the DOM. But they are different! Adding the `async` attribute to a `<script>` tag will load the script asynchronously, executing it as soon as it's downloaded. That's different from the `defer` attribute, which is also asynchronous but waits until the HTML document has been fully parsed before it executes. (There's a short markup sketch after the list below.)

- "How And When To Use Async And Defer Attributes," Zell Liew
- "Eliminate Render-Blocking JavaScript With Async And Defer" (DigitalOcean)
- "Optimize Long Tasks," Jeremy Wagner
- "Efficiently Load Third-party JavaScript," Milica Mihajlija
- Scripts: async, defer (JavaScript.info)
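As promised, a minimal sketch of the two attributes side by side; the script paths are placeholders.

```html
<!-- Downloads in parallel and executes the moment it arrives,
     possibly before the HTML below it has been parsed. -->
<script async src="/js/analytics.js"></script>

<!-- Downloads in parallel, but waits to execute until the document has been
     fully parsed; deferred scripts also execute in document order. -->
<script defer src="/js/app.js"></script>
```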
### Preconnect Network Connections

I guess I could have filed this with `async` and `defer`. That's because `preconnect` is a value of the `rel` attribute used on a `<link>` tag. It gives the browser a hint that you plan to connect to another domain and establishes that connection as soon as possible, prior to actually downloading the resource. The connection is done in advance, allowing the full script to download later. (There's a short markup sketch after the list below.)

While it sounds excellent — and it is — pre-connecting comes with an unfortunate downside in that it exposes a user's IP address to third-party resources used on the page, which is a breach of GDPR compliance. There was a little uproar over that when it was found out that using a Google Fonts script is prone to that as well.

- "Establish Network Connections Early to Improve Perceived Page Speed," Milica Mihajlija and Jeremy Wagner
- "Prioritize Resources," Sérgio Gomes
- "Improving Perceived Performance With the Link Rel=preconnect HTTP Header," Andy Davies
- "Experimenting With Link Rel=preconnect Using Custom Script Injection in WebPageTest," Andy Davies
- "Faster Page Loads Using Server Think-time With Early Hints," Kenji Baheux
- rel=preconnect (MDN)
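And the promised sketch: a `preconnect` hint for a hypothetical third-party origin, with `dns-prefetch` as a common lower-cost fallback for browsers that don't support it.

```html
<!-- Open the connection (DNS, TCP, TLS) to the third-party origin early,
     so the later script request doesn't pay that cost. -->
<link rel="preconnect" href="https://cdn.example-vendor.com" crossorigin>

<!-- Fallback hint: at least resolve the domain name ahead of time. -->
<link rel="dns-prefetch" href="https://cdn.example-vendor.com">
```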
## Non-Technical Approaches

I often think of a Yiddish proverb I first saw in Malcolm Gladwell's Outliers, however many years ago it came out:

> To a worm in horseradish, the whole world is horseradish.
It's a more pleasing and articulate version of the saying that goes, "To a carpenter, every problem looks like a nail." So, too, it is for developers working on performance. To us, every problem is code that needs a technical solution. But there are indeed ways to reduce the amount of work happening on the main thread without having to touch code directly.

We discussed earlier that performance is not only a developer's job; it's everyone's responsibility. So, think of these as strategies that encourage a "culture" of good performance in an organization.

### Nuke Scripts That Lack Purpose

As I said at the start of this article, there are some scripts on the projects I work on that I have no idea what they do. It's not because I don't care. It's because GTM makes it ridiculously easy to inject scripts on a page, and more than one person can access it across multiple teams.
So, maybe compile a list of all the third-party and render-blocking scripts and figure out who owns them. Is it Dave in DevOps? Marcia in Marketing? Is it someone else entirely? You gotta make friends with them. That way, there can be an honest evaluation of which scripts are actually helping and which ones are worth the cost.
### Bend Google Tag Manager To Your Will

Or any tag manager, for that matter. Tag managers have a pretty bad reputation for adding bloat to a page. It's true; they can definitely make the page size balloon as more and more scripts are injected.
But that reputation is not totally warranted because, like most tools, you have to use them responsibly. Sure, the beauty of something like GTM is how easy it makes adding scripts to a page. That's the "Tag" in Google Tag Manager. But the real beauty is that convenience plus the features it provides to manage the scripts. You know, the "Manage" in Google Tag Manager. It's spelled out right on the tin! (There's a small sketch of one such technique after the list below.)

- "Best Practices For Tags And Tag Managers," Katie Hempenius and Barry Pollard
- "Techniques on How to Improve Your GTM," Ryan Rosati
- "Keeping Websites Fast when Loading Google Tag Manager," Håkon Gullord Krogh
- "Optimizing Page Speed with Google Tag Manager," Charlie Weller
- Custom event trigger (Tag Manager Help)
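One concrete management lever is GTM's custom event trigger: rather than firing a heavy tag on every page load, the page pushes an event into the `dataLayer` only when the timing makes sense, and the tag is configured in GTM to fire on that event. A rough sketch, with `low_priority_tags_ready` as a made-up event name:

```js
// Somewhere in your own code: only signal GTM once the page has loaded
// and the browser has some idle time to spare.
window.dataLayer = window.dataLayer || [];

// requestIdleCallback isn't supported everywhere, so fall back to a short delay.
const whenIdle = (callback) =>
  "requestIdleCallback" in window
    ? window.requestIdleCallback(callback)
    : setTimeout(callback, 200);

window.addEventListener("load", () => {
  whenIdle(() => {
    // In GTM, the heavy tags use a Custom Event trigger matching this name,
    // so they never compete with the initial render on the main thread.
    window.dataLayer.push({ event: "low_priority_tags_ready" });
  });
});
```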
## Wrapping Up

Phew! Performance is not exactly a straightforward science. There are objective ways to measure performance, of course, but if I've learned anything about it, it's that subjectivity is a big part of the process. Different scripts are of different sizes and consist of different resources serving different needs that have different priorities for different organizations and their users.

Having access to a free reporting tool like Lighthouse in DevTools is a great start for diagnosing performance issues by identifying bottlenecks on the main thread. Even better are paid tools like SpeedCurve to dig deeper into the data for more targeted insights and to produce visual reports to help make a case for performance improvements for your team and other stakeholders.

While I wish there were some sort of silver bullet to guarantee good performance, I'll gladly take these and similar tools as a starting point. Most important, though, is having a performance game plan that is served by the tools. And Vitaly's front-end performance checklist is an excellent place to start.
(yk, il)