Reporting Core Web Vitals With The Performance API

Geoff Graham

2024-02-27
This article is sponsored by DebugBear.
There’s quite a buzz in the performance community with the Interaction to Next Paint (INP) metric becoming an official Core Web Vitals (CWV) metric in a few short weeks. If you haven’t heard, INP is replacing the First Input Delay (FID) metric, something you can read all about here on Smashing Magazine as a guide to prepare for the change.

But that’s not what I really want to talk about. With performance at the forefront of my mind, I decided to head over to MDN for a fresh look at the Performance API. We can use it to report the load time of elements on the page, even going so far as to report on Core Web Vitals metrics in real time. Let’s look at a few ways we can use the API to report some CWV metrics.

Browser Support Warning
Before we get started, a quick word about browser support. The Performance API is huge in that it contains a lot of different interfaces, properties, and methods. While the majority of it is supported by all major browsers, Chromium-based browsers are the only ones that support all of the CWV properties. The only other browser with partial coverage is Firefox, which supports the First Contentful Paint (FCP) and Largest Contentful Paint (LCP) API properties.
So, we’re looking at a feature of features, as it were, where some are well-established, and others are still in the experimental phase. But as far as Core Web Vitals go, we’re going to want to work in Chrome for the most part as we go along.
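Given that patchy support, it can help to feature-detect before observing anything. Here’s a minimal sketch (the set of types to check is just an example) that leans on the static PerformanceObserver.supportedEntryTypes property to see what the current browser can actually report:

// Entry types we'd like to observe for Core Web Vitals reporting.
const wanted = ["largest-contentful-paint", "paint", "layout-shift", "event"];

// supportedEntryTypes lists every entry type this browser can report.
wanted.forEach((type) => {
  const ok = PerformanceObserver.supportedEntryTypes.includes(type);
  console.log(`${type}: ${ok ? "supported" : "not supported"}`);
});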
First, We Need Data Access
There are two main ways to retrieve the performance metrics we care about:

1. Using the performance.getEntries() method, or
2. Using a PerformanceObserver instance.

Using a PerformanceObserver instance offers a few important advantages:

- PerformanceObserver observes performance metrics and dispatches them over time. By contrast, performance.getEntries() always returns the entire list of entries recorded since the performance metrics started being collected (see the sketch just after this list).
- PerformanceObserver dispatches the metrics asynchronously, which means they don’t have to block what the browser is doing.
- The element performance metric type doesn’t work with the performance.getEntries() method anyway.
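To make that contrast concrete, here’s a quick, minimal sketch of the one-shot approach. It uses performance.getEntriesByType(), which only works for entry types exposed on the performance timeline (paint entries are; largest-contentful-paint, which we lean on below, is observer-only):

// One-shot: grab whatever "paint" entries have been recorded so far.
// There is no callback here; we just read the current list once.
performance.getEntriesByType("paint").forEach((entry) => {
  console.log(`${entry.name} happened at ${entry.startTime} milliseconds.`);
});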
That all said, let’s create a PerformanceObserver:

const lcpObserver = new PerformanceObserver(list => {});

For now, we’re passing an empty callback function to the PerformanceObserver constructor. Later on, we’ll change it so that it actually does something with the observed performance metrics. For now, let’s start observing:

lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });

The first very important thing in that snippet is the buffered: true property. Setting this to true means that we not only observe performance metrics dispatched after we start observing but also get the performance metrics that were queued by the browser before we started observing.

The second very important thing to note is that we’re working with the largest-contentful-paint entry type. That’s what’s cool about the Performance API: it can be used to measure very specific things but also supports entry types that map directly to CWV metrics. We’ll start with the LCP metric before looking at other CWV metrics.

Reporting The Largest Contentful Paint
The largest-contentful-paint entry type looks at everything on the page, identifying the biggest piece of content on the initial view and how long it takes to load. In other words, we’re observing the full page load and getting stats on the largest piece of content rendered in view.

We already have our Performance Observer and callback:
const lcpObserver = new PerformanceObserver(list => {});
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
// The Performance Observer
const lcpObserver = new PerformanceObserver(list => {
  // Returns the entire list of entries
  const entries = list.getEntries();
});

// Call the Observer
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });

Next, we want to know which element is pegged as the LCP. It’s worth noting that the element representing the LCP is always the last element in the ordered list of entries. So, we can look at the list of returned entries and return the last one:

// The Performance Observer
const lcpObserver = new PerformanceObserver(list => {
  // Returns the entire list of entries
  const entries = list.getEntries();
  // The element representing the LCP
  const el = entries[entries.length - 1];
});

// Call the Observer
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });

The last thing is to display the results! We could create some sort of dashboard UI that consumes all the data and renders it in an aesthetically pleasing way. Let’s simply log the results to the console rather than switch gears.
// The Performance Observer
const lcpObserver = new PerformanceObserver(list => {
  // Returns the entire list of entries
  const entries = list.getEntries();
  // The element representing the LCP
  const el = entries[entries.length - 1];

  // Log the results in the console
  console.log(el.element);
});

// Call the Observer
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });

There we go!

[Screenshot: the LCP element logged in the browser console. LCP support is limited to Chrome and Firefox at the time of writing.]

It’s certainly nice knowing which element is the largest. But I’d like to know more about it, say, how long it took for the LCP to render:
// The Performance Observer
const lcpObserver = new PerformanceObserver(list => {

  const entries = list.getEntries();
  const lcp = entries[entries.length - 1];

  // Log the results in the console
  console.log(
    `The LCP is:`,
    lcp.element,
    `The time to render was ${lcp.startTime} milliseconds.`,
  );
});

// Call the Observer
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });

// The LCP is:
// <h2 class="author-post__title mt-5 text-5xl">…</h2>
// The time to render was 832.6999999880791 milliseconds.
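While we’re here, the LCP entry exposes a bit more than the element and its start time. This is just a sketch of a few other LargestContentfulPaint properties; treat the comments as a rough guide rather than a spec:

const lcpDetailObserver = new PerformanceObserver(list => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1];

  console.log({
    element: lcp.element,       // The DOM node itself
    url: lcp.url,               // Image URL, or an empty string for text content
    size: lcp.size,             // Area the element covers in the viewport
    renderTime: lcp.renderTime, // May be 0 for some cross-origin images
    loadTime: lcp.loadTime,     // When the resource finished loading
  });
});

lcpDetailObserver.observe({ type: "largest-contentful-paint", buffered: true });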
Reporting First Contentful Paint

This is all about the time it takes for the very first piece of DOM to get painted on the screen. Faster is better, of course, but the way Lighthouse reports it, a “passing” score comes in between 0 and 1.8 seconds.

[Image: FCP scoring thresholds. Image source: DebugBear.]
Just like we set the type property to largest-contentful-paint to fetch performance data in the last section, we’re going to set a different type this time around: paint.

When we call paint, we tap into the PerformancePaintTiming interface that opens up reporting on first paint and first contentful paint.

// The Performance Observer
const paintObserver = new PerformanceObserver(list => {
  const entries = list.getEntries();
  entries.forEach(entry => {
    // Log the results in the console.
    console.log(
      `The time to ${entry.name} took ${entry.startTime} milliseconds.`,
    );
  });
});

// Call the Observer.
paintObserver.observe({ type: "paint", buffered: true });

// The time to first-paint took 509.29999999981374 milliseconds.
// The time to first-contentful-paint took 509.29999999981374 milliseconds.
Notice how paint spits out two results: one for the first-paint and the other for the first-contentful-paint. I know that a lot happens between the time a user navigates to a page and stuff starts painting, but I didn’t know there was a difference between these two metrics.

Here’s how the spec explains it:

“The primary difference between the two metrics is that [First Paint] marks the first time the browser renders anything for a given document. By contrast, [First Contentful Paint] marks the time when the browser renders the first bit of image or text content from the DOM.”
As it turns out, the first paint and FCP data I got back in that last example are identical. Since first paint can be anything that prevents a blank screen, e.g., a background color, I think that the identical results mean that whatever content is first painted to the screen just so happens to also be the first contentful paint.

But there’s apparently a lot more nuance to it, as Chrome measures FCP differently based on what version of the browser is in use. Google keeps a full record of the changelog for reference, so that’s something to keep in mind when evaluating results, especially if you find yourself with different results from others on your team.

Reporting Cumulative Layout Shift
How much does the page shift around as elements are painted to it? Of course, we can get that from the Performance API! Instead of largest-contentful-paint or paint, now we’re turning to the layout-shift type.

This is where browser support is dicier than for other performance metrics. The LayoutShift interface is still in “experimental” status at this time, with Chromium browsers being the sole group of supporters.

As it currently stands, LayoutShift opens up several pieces of information, including a value representing the amount of shifting, as well as the sources causing it to happen. More than that, we can tell if any user interactions took place that would affect the CLS value, such as zooming, changing browser size, or actions like keydown, pointerdown, and mousedown. This is the lastInputTime property, and there’s an accompanying hadRecentInput boolean that returns true if that last input happened less than 500ms ago.

Got all that? We can use this to both see how much shifting takes place during page load and identify the culprits while excluding any shifts that are the result of user interactions.
// Track the running CLS total across callbacks.
let cumulativeLayoutShift = 0;

const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    // Don't count if the layout shift is a result of user interaction.
    if (!entry.hadRecentInput) {
      cumulativeLayoutShift += entry.value;
    }
    console.log({ entry, cumulativeLayoutShift });
  });
});

// Call the Observer.
observer.observe({ type: "layout-shift", buffered: true });

Given the experimental nature of this one, here’s what an entry object looks like when we query it:

[Screenshot: a layout-shift entry logged in the console, showing its value and sources.]
Pretty handy, right? Not only are we able to see how much shifting takes place (0.128) and which element is moving around (article.a.main), but we have the exact coordinates of the element’s box from where it starts to where it ends.
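If you’d rather pull that attribution data out programmatically instead of reading it off a console screenshot, here’s a small sketch that walks each entry’s sources array; the node, previousRect, and currentRect fields come from the LayoutShiftAttribution interface:

const shiftSourceObserver = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    if (entry.hadRecentInput) return; // Skip shifts caused by user input.

    entry.sources.forEach((source) => {
      console.log(
        "Shifted element:", source.node,
        "From:", source.previousRect,
        "To:", source.currentRect,
      );
    });
  });
});

shiftSourceObserver.observe({ type: "layout-shift", buffered: true });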
Reporting Interaction To Next Paint

This is the new kid on the block that got my mind wondering about the Performance API in the first place. It’s been possible for some time now to measure INP as it transitions to replace First Input Delay as a Core Web Vitals metric in March 2024. When we’re talking about INP, we’re talking about measuring the time between a user interacting with the page and the page responding to that interaction.
We need to hook into the PerformanceEventTiming class for this one. And there’s so much we can dig into when it comes to user interactions. Think about it! There’s what type of event happened (entryType and name), when it happened (startTime), which user interaction the event belongs to (interactionId, experimental), and when processing the interaction starts (processingStart) and ends (processingEnd). There’s also a way to exclude interactions that can be canceled by the user (cancelable).

const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    // Alias for the total duration.
    const duration = entry.duration;
    // Calculate the time before processing starts.
    const delay = entry.processingStart - entry.startTime;
    // Calculate the time to process the interaction.
    const lag = entry.processingEnd - entry.processingStart;

    // Don't count interactions that the user can cancel.
    if (!entry.cancelable) {
      console.log(`INP Duration: ${duration}`);
      console.log(`INP Delay: ${delay}`);
      console.log(`Event handler duration: ${lag}`);
    }
  });
});

// Call the Observer.
observer.observe({ type: "event", buffered: true });
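Since INP itself boils down to (roughly) the slowest qualifying interaction on the page, a crude approximation is to keep a running maximum of these durations. This is only a sketch, not the real INP definition: the actual metric samples at a high percentile and only counts events that belong to a user interaction, which is why we check interactionId and lower the durationThreshold here:

// A rough INP approximation: track the slowest interaction seen so far.
let worstInteraction = 0;

const inpObserver = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    // Only events that are part of a user interaction carry an interactionId.
    if (!entry.interactionId) return;

    if (entry.duration > worstInteraction) {
      worstInteraction = entry.duration;
      console.log(`Slowest interaction so far: ${worstInteraction} ms`, entry.name, entry.target);
    }
  });
});

// durationThreshold lets us observe shorter events than the default cutoff.
inpObserver.observe({ type: "event", buffered: true, durationThreshold: 40 });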
Reporting Long Animation Frames (LoAFs)

Let’s build off that last one. We can now track INP scores on our website and break them down into specific components. But what code is actually running and causing those delays?
The Long Animation Frames API was developed to help answer that question. It won’t land in Chrome stable until mid-March 2024, but you can already use it in Chrome Canary.

A long-animation-frame entry is reported every time the browser couldn’t render page content immediately as it was busy with other processing tasks. We get an overall duration for the long frame but also a duration for different scripts involved in the processing.

const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    if (entry.duration > 50) {
      // Log the overall duration of the long frame.
      console.log(`Frame took ${entry.duration} ms`)
      console.log(`Contributing scripts:`)
      // Log information on each script in a table.
      entry.scripts.forEach(script => {
        console.table({
          // URL of the script where the processing starts
          sourceURL: script.sourceURL,
          // Total time spent on this sub-task
          duration: script.duration,
          // Name of the handler function
          functionName: script.sourceFunctionName,
          // Why was the handler function called? For example,
          // a user interaction or a fetch response arriving.
          invoker: script.invoker
        })
      })
    }
  });
});

// Call the Observer.
observer.observe({ type: "long-animation-frame", buffered: true });

When an INP interaction takes place, we can find the closest long animation frame and investigate what processing delayed the page response.

There’s A Package For This
The Performance API is so big and so powerful. We could easily spend an entire bootcamp learning all of the interfaces and what they provide. There’s network timing, navigation timing, resource timing, and plenty of custom reporting features available on top of the Core Web Vitals we’ve looked at.
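As a small taste of that resource timing side, here’s a sketch (not tied to any Core Web Vital) that lists every resource the page has requested so far, along with how long it took and how many bytes came over the wire:

// Resource timing: one entry per fetched resource (scripts, images, CSS, etc.).
performance.getEntriesByType("resource").forEach((resource) => {
  console.log(
    resource.name,                                  // The resource URL
    `${Math.round(resource.duration)} ms`,          // Time from start to responseEnd
    `${resource.transferSize} bytes over the wire`, // 0 if served from cache
  );
});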
If CWVs are what you’re really after, then you might consider looking into the web-vitals library to wrap around the browser Performance APIs.

Need a CWV metric? All it takes is a single function.
webVitals.onINP(function(info) {
  console.log(info)
}, { reportAllChanges: true });

Boom! That reportAllChanges property? That’s a way of saying we want to report data every time the metric changes instead of only when the metric reaches its final value. For example, as long as the page is open, there’s always a chance that the user will encounter an even slower interaction than the current INP interaction. So, without reportAllChanges, we’d only see the INP reported when the page is closed (or when it’s hidden, e.g., if the user switches to a different browser tab).

We can also report purely on the difference between the preliminary results and the resulting changes. From the web-vitals docs:

function logDelta({ name, id, delta }) {
  console.log(`${name} matching ID ${id} changed by ${delta}`);
}

onCLS(logDelta);
onINP(logDelta);
onLCP(logDelta);

Measuring Is Fun, But Monitoring Is Better
All we’ve done here is scratch the surface of the Performance API as far as programmatically reporting Core Web Vitals metrics. It’s fun to play with things like this. There’s even a slight feeling of power in being able to tap into this information on demand.

At the end of the day, though, you’re probably just as interested in monitoring performance as you are in measuring it. We could do a deep dive and detail what a performance dashboard powered by the Performance API is like, complete with historical records that indicate changes over time. That’s ultimately the sort of thing we can build on this: our own real user monitoring (RUM) tool, or perhaps a comparison of Performance API values against historical data from the Chrome User Experience Report (CrUX).

Or perhaps you want a solution right now without stitching things together. That’s what you’ll get from a paid commercial service like DebugBear. All of this is already baked right in with all the metrics, historical data, and charts you need to gain insights into the overall performance of a site over time… and in real-time, monitoring real users.
DebugBear can help you identify why users are having slow experiences on any given page. If there is slow INP, what page elements are these users interacting with? What elements often shift around on the page and cause high CLS? Is the LCP typically an image, a heading, or something else? And does the type of LCP element impact the LCP score?
To help explain INP scores, DebugBear also supports the upcoming Long Animation Frames API we looked at, allowing you to see what code is responsible for interaction delays.
The Performance API can also report a list of all resource requests on a page. DebugBear uses this information to show a request waterfall chart that tells you not just when different resources are loaded but also whether the resources were render-blocking, loaded from the cache, or whether an image resource is used for the LCP element.

[Screenshot: a DebugBear request waterfall showing the FCP and LCP lines and the LCP image request.]

In this screenshot, the blue line shows the FCP, and the red line shows the LCP. We can see that the LCP happens right after the LCP image request, marked by the blue “LCP” badge, has finished.