
Loading External JS Scripts and the Impact of This Process on Website Performance

This talk examines the challenges of JavaScript performance in modern web applications. JavaScript isn't the heaviest resource on the web; images, video, and audio are typically heavier. But none of those resources can wreck a site's performance in as many ways: scripts can block HTML parsing, delaying the rendering of the page, and they compete with other resources for network bandwidth. The talk walks through several methods of loading scripts (inline, external, at the bottom of the page, dynamically injected, async, defer, and preload paired with defer) and weighs the trade-offs of each. The central advice is practical: experiment with different script loading methods and carefully measure their impact on performance rather than treating any single approach as gospel. At the time of writing this article, the video had 2,476 views and 85 likes.


  • 00:00 Introduction and acknowledgment to Phil.
  • 00:14 Discussion about being a performance advocate and concerns about JavaScript.
  • 00:19 Performance issues with JavaScript.
  • 00:25 Comparison of file sizes: JavaScript, images, video, and audio.
  • 00:34 JavaScript's ability to hinder browser performance.
  • 00:44 JavaScript impacts user interaction and rendering.
  • 00:57 JavaScript memory leaks in modern web architecture.
  • 01:04 Overview of performance considerations in JavaScript.
  • 01:11 Focusing on methods of loading scripts effectively.
  • 01:26 Basic methods for including JavaScript on a web page.
  • 01:45 Identifying key performance issues with JavaScript.
  • 02:03 Highlighting the impact of script loading on network prioritization.
  • 02:16 Introduction to analyzing a specific web page to illustrate points.
  • 02:27 Description of a specific example page and resources being loaded.
  • 03:10 Explanation of the Slow Files service used for demonstration.
  • 03:51 Analyzing script loading and its effect on performance using Chrome DevTools.
  • 04:14 Understanding network priority and its implications.
  • 04:28 Impact of JavaScript loading on browser rendering.
  • 10:32 Analysis of placing scripts at the bottom of the HTML for better performance.
  • 12:15 Exploring dynamic script injection and its benefits.
  • 19:34 Introducing the defer attribute for scripts.
  • 21:46 Combining preload and defer for script management.
  • 22:30 Conclusion: Trade-offs in script loading methods.
  • 23:48 Final thoughts and advice on performance optimization.

Transcription

Thanks, Phil. So yeah, as Phil said, kind of a bit of a performance nut. And if there's one thing we performance advocates love to complain about, it's JavaScript. But it's not without good reason. The thing is, JavaScript isn't the heaviest resource on the web. Images are typically heavier. And when they're used, so are video and audio. But none of those resources are quite as capable of wreaking havoc on performance in the same variety of ways that JavaScript is. JavaScript can block the browser from parsing HTML and rendering the page. It can keep the browser's main thread so occupied that it can't respond to user interaction or do important layout or paint work. Add to that the fact that memory leaks are now starting to run rampant all over our new single-page architecture-driven web, and it's clear that JavaScript has no shortage of ways to mess with our page's performance. Now, you could literally spend days, quite literally days, digging into all the different performance considerations about how to use JavaScript effectively and efficiently. But today, I wanted to zero in on one that, at least at first blush, sounds very, very simple. How to load a script. Now, before you tune out entirely, I promise you that there's more going on. It's more interesting than it sounds. So at its most basic form, you include a script through one of two methods. You do it either as an internal script inlined on the page or an external script using the src attribute. And more or less, this gets the job done. But it doesn't do it particularly well. I mentioned that there's a large variety of ways that JavaScript can mess up performance. But for now, throughout this talk, I want to focus on two basic ones. How JavaScript ends up competing with other resources on the network for bandwidth availability, and what it does in terms of impacting the browser's parser and eventually, ultimately, how we render the page. 
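The two basic inclusion methods mentioned above look something like this (a minimal sketch; the file name is just a placeholder):

```html
<!-- Internal script: inlined directly in the page -->
<script>
  console.log('runs as soon as the parser reaches it');
</script>

<!-- External script: fetched via the src attribute -->
<script src="app.js"></script>
```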
Our goal should be to use no more higher priority on the network than needed and to not delay the display of the page in any way, shape, or form. That default script element, as innocent as it looks, kind of messes up on both of them. So to show you what I mean, I wanted to take a look at just a completely random page from the internet and just roll the die and see what comes up. And I thought this one might work. I've actually seen him do the moves live. It's even better when it's live. So thank you to Jason for building this and letting me use this for the talk, actually. But I wanted to show you what's in the document itself. So it's not particularly complex. We have inside the head, we're going to be loading two different resources. We have a style sheet that we're pulling in. And then we have a JavaScript file coming in from a service called Slow Files, which I'll explain in a second. In the body, there's an image of Phil doing his dance. There's another script element that pulls in a YouTube script. And then there's an inline script that Jason was using for the pause and the play of this dynamic inclusion of background music. So I mentioned that Slow Files service. Slow Files, if you're not familiar, it's a fantastic website built by two of my friends, Ryan Townsend and Harry Roberts. What it does is it lets you generate either a JavaScript request or a CSS request with some sort of an artificial delay imposed. That's all it does, and it does it very, very well. So the idea being that in this example here, this delay of 2,500 milliseconds or 2.5 seconds, I can now drop this into a page and experiment with different ways to load scripts or load CSS or structure the page and see how that impacts the performance. So throughout the talk, we're going to be using this. We're going to be using a script from Slow Files that has a 2.5 second delay attached to it. So let's start here. Let's start with this basic script element. 
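A stripped-down reconstruction of a page like the one described, with a delayed script in the head, might look like this. The Slow Files URL shape and the file names here are assumptions for illustration; check slowfil.es for the real request format:

```html
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="styles.css">
  <!-- Artificially delayed by 2,500 ms via the Slow Files service -->
  <script src="https://slowfil.es/file?type=js&delay=2500"></script>
</head>
<body>
  <img src="phil-dancing.gif" alt="Phil dancing">
  <script src="https://www.youtube.com/iframe_api"></script>
  <script>/* inline play/pause logic for the background music */</script>
</body>
</html>
```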
If we load this up inside of Chrome DevTools and we look at the network panel, we'll see that our script is requested. It's the third request there. Under the time column, we can see it takes 2.67 seconds. That's that nice artificial delay being imposed so we can exaggerate the impact and see what's going on. But most notably, I want to zero in on this priority column. So you'll see that the script has a high priority. Now, browsers have, again, a lot of different resources that they need to load. And they need to be able to figure out which of these are most important in order to get the page to progress and load efficiently. And the way they do this is they give some level of priority to these resources. Different resources get different network priorities. And the higher the priority, the higher up in the loading queue that that resource gets placed. So here we can see Chrome is giving a high priority to the script, which puts it just a half step behind the document and the CSS, which both get the highest priority. Now, this isn't unique to Chrome. Every browser has priorities. They use sometimes different verbiage. And sometimes the priorities certainly change in terms of what the browsers are doing. So for example, in Safari, if we look inside of DevTools, we can see our script gets a high network priority again. In this case, it actually puts us right on even footing with the CSS and the HTML. And in Firefox, you can't get it directly in the DevTools. But if you run the Firefox profiler, you can find out that we're getting a normal priority. Theirs is mostly a difference in wording. They do have a highest priority that they assign to the document. But for all intents and purposes, you can consider their normal to be roughly equivalent to the high that we're seeing in Safari and the Blink-based browsers. So again, the words are different. 
But in each case, JavaScript gets a significantly higher priority than many other types of resources, which means that if it's up head to head against a lower-priority resource, JavaScript's going to win every time. So why is it so high in the pecking order? Well, it's because JavaScript is capable of doing things like document.write. The document object model is foundational. It is absolutely critical to get that done correctly and efficiently for the rest of the page to progress. Without the document object model, this representation of all the content on a page and the structure that goes with it, there's nothing for the CSS to apply to. There's nothing for JavaScript to manipulate. You need to get this right, and you need to get it right early. JavaScript, because it can mess with this kind of stuff, it could cause the browser to potentially have to undo all the work it's done and recreate the document object model. So instead of having to go through that expensive process of constructing, parsing the HTML, constructing the DOM, and then potentially throwing it out, the browsers treat JavaScript as parser blocking. When they come across a script, by default, they will pause parsing the HTML until that script has been downloaded and executed, just in case it's going to do something nefarious. So Chrome DevTools does a pretty good job of visualizing this. So what you see here is this is the performance panel. There's a lot going on, but we'll break it down step by step. The first thing that we want to zero in on here is in the network section of the panel. We can see, again, the request. That's that 2.67 second request, that high priority for our JavaScript file. What we want to do is we want to see, what was the browser doing when it found that request? How did it queue that off? So we're going to zoom in and go to the very beginning of that request, to where it all started. And then we're going to come down to the main thread activity. 
Now, the main thread is where user interaction, painting, calculation of styles, all that kind of stuff happens. And what we can see here is that right when that script gets identified, the main thread, the browser, is parsing the HTML. So that 0 dot dot dot 6 in the brackets, that's telling us it was parsing from line 0 to line 6. So the browser parses the first six lines of HTML, and then it stops. And it stops because it found this script, and this script can potentially mess with everything. So it's going to sit there and wait until the script has been downloaded. So now let's move to the other end. We're going to go back to when this request has been completed, find the end of that, and then come down to the main thread again and see what's going on once that script arrives. So here what we can see is that we start parsing the HTML again, this time at line 7. So for that entire 2.7 seconds or so that that file is being downloaded, nothing happens. We haven't downloaded, or we haven't parsed anything other than those first six lines of the document. And the impact on the user experience is, as you would expect, not exactly great. So Chrome records these film strips, these little screenshots as the page is progressing. And we can see for the first three seconds or so of the page load process, we're staring at a white screen. It isn't until around three seconds or so that we finally get to see Phil dance. And if we go down to the timings, the timings confirm that first paint is fired off pretty late. Until then, it's a blank screen. Now for anybody who's ever wanted to see Phil dance, waiting three seconds isn't going to cut it. When we say dance, you dance, Phil. So three seconds is not tolerable. What we're doing from a user experience perspective here is not great. So there are these undesirable characteristics that come along with the default script element. 
It blocks the parser, it blocks the rendering of the page, and it competes with critical resources, because it has such a high network priority. If we're building our pages in a way that we're not dependent on JavaScript for critical content to display, which is the goal, then that network priority is not necessarily something we want. We want to get those other assets out faster. So let's try to improve on this. So the first thing we might do, our take two, so to speak, is to try and put those scripts at the bottom of the HTML. Now this is something that has been said for years now, as a performance best practice, like put the script at the bottom instead of in the head. So why does that matter? Well, remember, when we're talking about parsing and blocking the parser, the browser's able to parse the HTML up until the point it finds that script element. If we have it at the bottom of the page, that means it can parse much more HTML. So if we look at the performance panel again for this, with our script now at the bottom, what you'll see here is if you look at the film strip, we're getting that hawk dance out really early. The first paint fires very quickly, long before our JavaScript arrives. And if we go to the end of this request, and zero in again on the main thread activity, we'll see that once that request is completed and it starts parsing HTML, it's starting on line 71. So in contrast to the last time, where because it was in the head, we could only parse the first six lines, we're now able to parse the majority of the document before that script ever arrives. And as a result, we get a faster paint out. Now this also impacts script priority on the network. Chrome will now give this a medium priority. They demote it. You'll see it's in line with that YouTube script, which was included further down the page as well. This means it's less likely to compete with some of those higher priority resources for bandwidth. 
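Take two, sketched out: the same script element, but moved to just before the closing body tag so the parser can get through the rest of the markup first (the Slow Files URL shape is an assumption for illustration):

```html
<body>
  <!-- ...the bulk of the page's content parses first... -->
  <img src="phil-dancing.gif" alt="Phil dancing">

  <!-- The parser only blocks here, after most of the document is parsed -->
  <script src="https://slowfil.es/file?type=js&delay=2500"></script>
</body>
```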
Safari, Firefox, they actually don't do anything interesting here at all. They keep the same prioritization as before. So at first blush, this is looking a little better, right? We have a little bit reduced priority, and we also are able to parse most of the document and get something painted out onto the screen. But this is one of those situations where the simplicity of our demo page is kind of hiding a few issues. So the code that you see on the left here, that's the entire HTML markup for this demo page. That's it. That's all that's going on. The code that you see on the right, that is the entire markup for the walmart.com homepage. Now Walmart is not a particularly terrible offender. They're actually very middle of the road when you look at this aggregate data from HTTP archive or anything like that. But the difference here is between 2.2 kilobytes of raw code and 198 kilobytes of raw code. Now, if the browser has to parse almost 200 kilobytes before it finds that script element, that means that we're discovering that script element very, very late in the game. And as a result, it's starting the download process and starting the parsing process much later than it potentially could be otherwise. So we're getting that lower priority, but we're also getting that late discovery which pushes it off and could potentially negatively impact our performance. So in terms of competing with critical resources, we're in a little bit better position, but the script is discovered very late and we can do better than this. So let's try take three. Let's try dynamically injecting the script. So in this case, we've got an inline script that's gonna create a script element, set the source attribute, and then append that to the head. 
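Take three as described, as a minimal sketch: an inline script that creates a script element, sets its src, and appends it to the head (the Slow Files URL shape is an assumption):

```html
<script>
  // Dynamically injected scripts are treated as async by default:
  // the HTML parser is not blocked while this file downloads.
  var script = document.createElement('script');
  script.src = 'https://slowfil.es/file?type=js&delay=2500';
  document.head.appendChild(script);
</script>
```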
The nice thing about dynamically inserted scripts is that they are treated as asynchronous by default, which means that while that script is being downloaded and requested, it's not going to block our HTML parser anymore. It also means that because it's treated as async, and therefore not parser blocking, Chrome can give it an even lower priority. So Blink-based browsers will push this down to a low priority now, which means it's even less likely to contend for bandwidth with higher-priority resources. Again, Safari and Firefox don't do anything interesting here. They keep it the same as it was. But because we have that async behavior, you can see the main thread is doing all sorts of work here. We're able to actually parse the entire document. We're not getting delayed in any way. The film strip shows that we're getting that first paint out nice and quick. Again, we're looking like we're doing much, much better with this approach, but we'd still have a couple of gotchas. So first off, if we zero in on this network activity and we notice the order things are being requested, we've got our CSS, we've got our GIF of Phil dancing, we've got the YouTube request, and then the fourth file being pulled in or queued up is the slow JavaScript file we've requested. But if you look at the markup, that inline snippet that triggers that dynamically injected script comes before all of that. It should actually be running before any, the document even gets to parse the rest of the stuff, the browser gets to parse the rest of the page. So what's going on? Like, why are we still seeing the image and the YouTube script and things like that? Well, it's because browsers have what's called a look-ahead or speculative parser. And its job is to look ahead and not really construct anything about the DOM or anything like that. 
It's specifically looking for other resources that we know we're going to need to load so that the browser can start loading those proactively. So that's where it looks and it finds this image. It says, hey, we're gonna need that. Let's grab that. Keep looking. Oh, we've got a YouTube script. Let's queue that up for download too. The preloader is fantastic, but it can only find things that are in the markup. And our script is not in the markup. Our script, you have to execute JavaScript to be able to get to it. To be able to find ours, you have to evaluate and compile that inline script and you have to run it. And then you see that send request happen finally for our dynamically injected script. So as a result, it's queued up pretty late. It's actually discovered even later than our script at the bottom example that we just looked at because now we have to execute the JavaScript as well to get to it. It also surfaces another problem that we've actually been able to ignore so far. If you look at the network in terms of when things are lining up, this pink line shows you when the CSS has arrived and been applied. It also shows you when the request for our slow file JavaScript is kicked off. That JavaScript snippet doesn't run until all the CSS arrives, and the reason is we mentioned that JavaScript can mess with the DOM. JavaScript can also mess with the CSS object model. It can add and change styles. And again, this is an expensive process. So once again, browsers would rather not have to construct the entire CSS object model only to throw it all out the window because of something JavaScript did. So when there is a CSS request on the network, when we're downloading CSS and there's JavaScript that needs to be executed, the browser will not execute that JavaScript until the CSS has arrived and been parsed. So the CSS blocks our JavaScript execution and the JavaScript execution blocks our parser. 
So to make this a little clearer, again, Chrome DevTools, we can look at this. We can add a slow file request for a CSS as well. That's that nice long purple bar in the network section here. And you can see that we parsed HTML through line 12 and then we stop and nothing happens. That main thread is empty until that purple bar is complete, until we've got our CSS. Only at that point do we execute the inline script, which then triggers the request for the dynamically injected script. And then we get to go on with our lives. This is actually default script behavior. This isn't unique to dynamically injected scripts. It's just that the way we were loading it before, we were able to ignore that performance issue because it just wasn't rearing its head. The fun thing about performance issues is often it's a little bit like whack-a-mole. You never know when they're gonna pop up. But in this case, what happens is we've turned CSS into a parser blocking thing. And for the record, I see this a ton in production. Like this is all over the place with inline snippets right after style sheets that end up blocking parsing. So in this approach, we're blocking the parser. We're discovering it very late. We've got work to do. So let's try the next approach, take four. Let's use the async attribute. So async again gives us that same low priority in Chrome. Excitingly, we actually get a low priority in Safari. We get another browser changing a priority, which is fantastic. So Firefox doesn't do anything. But we get the low network priority in Safari and in the Blink-based browsers. So the contention against other critical resources is very low. It's in the markup. So you can see if you look at the network, they're all lined up. They're all being requested around the same time. And because it's async, it means that again, we're able to parse all that HTML. Things are looking great. But again, we've got a couple gotchas. 
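The parser-blocking pattern described here is easy to reproduce: a stylesheet immediately followed by an inline script. The browser won't execute the inline script, or parse past it, until the CSS has arrived. A minimal sketch (URL shapes and file names are assumptions):

```html
<head>
  <!-- Slow stylesheet: until it arrives, the inline script below can't run -->
  <link rel="stylesheet" href="https://slowfil.es/file?type=css&delay=2500">
  <script>
    // Blocked behind the CSS download, which in turn blocks the HTML parser
    var s = document.createElement('script');
    s.src = 'app.js';
    document.head.appendChild(s);
  </script>
</head>
```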
So our example so far has been one script. But what if we add another script? In this case, another JavaScript file from slow files that's gonna arrive a little bit faster. Now, if you care at all about execution order, async is out the window. Because what async does is as soon as that file arrives, whenever it arrives, regardless of what order it was requested, it's going to execute. So in this case, we've got the second file arriving much earlier. It gets executed as soon as it's there without regard to anything else that was requested. Async does not guarantee loading order in any way, shape, or form. And that's because with an async attribute, what we're telling the browser is we're telling it, put the downloading of that script in parallel with HTML parsing. But then pause. As soon as the script arrives, I want you to stop whatever you're doing and execute that script. And then move on with your life. So if that async script is arriving partly through HTML parsing, like we haven't completely parsed the HTML document, we're still parser blocking at this point because we're telling the browser, execute this script as soon as it's there. So async scripts also block the parser when the script arrives. So I've seen countless stories of folks who have started with a large async script, made that script smaller, and their paint metrics ended up getting worse. Their first paint times actually went up. And it's because what's happened is even though they've made the script lighter, which seems like a great thing, it's now arriving earlier, which means it's not letting the browser parse all the HTML, which means we're blocking the rendering of our page. So again, with async, we've got no guarantee of order. We're still blocking that parser, potentially render. We gotta do better than this. So, take five. The defer attribute. So the defer attribute is like the nuclear option here. 
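Take four, sketched out with two async scripts. Whichever file arrives first executes first, regardless of source order (the URL shape and the second file name are assumptions):

```html
<!-- No ordering guarantee: if second.js downloads faster, it runs first,
     and each script executes, blocking the parser, the moment it arrives -->
<script async src="https://slowfil.es/file?type=js&delay=2500"></script>
<script async src="second.js"></script>
```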
With defer, we're saying, move everything in parallel and don't execute the script at all until you're completely done parsing the HTML and creating the document object model. Basically, we're doing this right before DOMContentLoaded. So if we look at Chrome DevTools here, we see the file arrive, and we see that it's executing right away because of how long it took, but it is literally the last thing that has to happen before the browser can fire the load and DOMContentLoaded events. So with defer, we're basically saying it's even less of a priority than async. Async, interrupt the parsing to execute. Defer, just wait until the end. As you would expect then, Chrome gives us a low priority still. Safari gives it a high priority. I don't get it. I honestly, I don't understand it. It possibly is a bug. You know, I have to chat with them and see what's going on there. Or there could be some reason for this that makes sense to somebody smarter than me. But on the surface, it doesn't seem super clear. So unfortunately, we lose a little bit of the network priority niceness here, because Safari bumps that back up to a high. But we do get the preservation of order. Like if we've got two deferred scripts, the browser is going to wait until execution. You can see there's no execution here when the first script arrives, because it knows not to execute until DOMContentLoaded is ready to be fired, which means it can queue up that execution in a nice orderly fashion. So we're certainly better here. We can guarantee order. It doesn't block the parser or the render. It's discovered early because it's in the markup. And because of that low network priority, at least in Blink-based browsers, it doesn't compete with other critical resources. But what if we wanted to? What if we want that network priority and the parser benefits? And this isn't a hypothetical. 
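Take five, sketched out: deferred scripts download in parallel, preserve their source order, and only execute once parsing is complete, just before DOMContentLoaded (the file names are placeholders):

```html
<!-- Both download in parallel, but execute in order, after parsing finishes -->
<script defer src="first.js"></script>
<script defer src="second.js"></script>
```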
Let's say that you're doing server-side rendering and you've got a client-side framework that's going to kick in on top of that. You don't want to delay render. You want that to happen as quickly as possible. But you also want the script to be downloaded quickly so that as soon as that first render happens, as soon as we're ready, that script can execute immediately and get that interactivity kicked in. That's where you do take six. That's where we do the preload paired with the defer. So what preload does is tell the browser, hey, there's this resource. You're going to find it. We're going to need it eventually. Why don't you get a headstart, download it right now, and give it a little bit of a bump in priority? So in Chrome, we see that bump up to a high priority again. Safari and Firefox, again, just keep it as always at that high and normal. But now we get sort of that best of both worlds if we want that to be prioritized a bit more. We can see that everything's queued up on the network right. The main thread is able to go. We get the benefits of defer, but we also had that higher priority. So it means that it's going to be downloaded right away and arrive a little bit earlier so we can load it right away as soon as that page has been rendered or ready to be rendered and displayed. So loading scripts, it's all about understanding the trade-offs. The way that we load scripts dictates network priority and how the browser parser behaves. And it seems like a simple thing, but it can have massive consequences for performance. And honestly, it's not one size fits all. Like sometimes you want the blocking behavior of a script. If you're using client-side rendering or client-side A-B testing, hopefully you're not doing either of those things. But if you are, then you want the script to block that initial display. You don't want to display something and then change it all up. 
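The preload-plus-defer combination described above can be sketched as follows (the file name is a placeholder):

```html
<head>
  <!-- Preload bumps network priority so the file arrives early... -->
  <link rel="preload" href="framework.js" as="script">
  <!-- ...while defer keeps execution out of the parser's way -->
  <script defer src="framework.js"></script>
</head>
```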
If you have stuff that's not critical to that user experience, maybe third-party scripts that you can push off to defer and keep that low priority, that's where defer works. And again, we mentioned the server-side rendering approach with a client-side framework, where that's where a defer and preload comes into play. It's all about experimenting. And believe it or not, this is just scratching the surface. Chrome in particular has a ton of different ways that JavaScript gets prioritized on the network and on the main thread. And we're experimenting with new methods all the time. It really comes down to the same thing I tell folks for any performance optimization. Don't take whatever you hear as gospel. There's always going to be trade-offs. So when you're tweaking the way that you're loading scripts or applying any optimization for that matter, make sure that you take the time to experiment, to measure the impact of that experiment, and then always, always iterate on it to ensure that we're providing the best experience possible to our users. Thank you.