
In his latest video, leerob walks through self-hosting a Next.js application on a VPS (Virtual Private Server) for as little as $4 a month. The project uses Docker, a Postgres database, and an Nginx reverse proxy. The author emphasizes that all you need to get started is a domain name and a VPS. The tutorial covers the differences between a VPS and a dedicated server, along with additional hosting options. The entire setup process is walked through step by step, from purchasing the VPS to launching the application and its related components.

The first step is choosing a VPS, which the author does on DigitalOcean, a platform with budget-friendly options. After purchase, you get dashboards with graphs monitoring CPU and RAM usage. After logging into the server, he discusses the differences between a VPS and a dedicated server. A VPS is cheaper, but because its resources are shared with other users, it may come with performance limitations. Dedicated servers, in turn, offer more control and performance, which can come at a higher cost.

Next, the author moves on to installing Postgres and explains the differences between hosting a database on shared versus dedicated infrastructure. He also shares valuable insights on scaling infrastructure, both vertically and horizontally, highlighting the benefits of each approach. The key difference is that vertical scaling is easier to operate but constitutes a single point of failure in case of an outage, while horizontal scaling reduces that risk at the cost of added complexity.

With the infrastructure in place, the author shows how to configure the Next.js application using Docker Compose. He walks through in detail how the deploy script works, covering the installation of all required components and writing values to the appropriate configuration files. He also highlights image optimization using the capabilities of the Next.js server. In a live demo, he shows various application features in action, such as server-side rendering and improved performance thanks to caching.

In closing, the author looks at the video's analytics, discussing the tutorial's statistics. At the time of writing, the video on the leerob channel has 114,231 views and 4,939 likes. Clearly, the topic of self-hosting Next.js applications attracts a lot of interest in the community, and this knowledge builds both technical skills and an infrastructure awareness that matters to developers. This walkthrough of hosting on a VPS can be invaluable for anyone who wants to save on cloud costs while still using flexible and durable solutions.

Timeline summary

  • 00:00 Introduction to self-hosting Next.js on your own infrastructure.
  • 00:04 Deployment plan covering a Next.js app, a Postgres database, and an Nginx reverse proxy.
  • 00:10 Requirements: a domain name, a VPS, and a Docker installation.
  • 00:18 Guide to purchasing a VPS.
  • 00:29 Overview of the components to configure on the server.
  • 00:40 Outline: server setup, VPS vs. dedicated server.
  • 00:53 Introduction to the deploy script and the demo app's features.
  • 01:03 Details on configuring Next.js for self-hosting.
  • 01:10 Discussion of the trade-offs of managing infrastructure versus using cloud services.
  • 01:20 Introduction to the demo app's features, including image optimization.
  • 01:42 Demonstration of Incremental Static Regeneration (ISR) with fresh content.
  • 01:51 Middleware, environment variables, and server configuration.
  • 02:06 Details on VPS options, in particular using DigitalOcean.
  • 02:25 Trade-offs between VPS affordability and performance.
  • 02:42 Docker setup and application deployment.
  • 04:55 Vertical vs. horizontal scaling strategies for infrastructure.
  • 06:05 Scaling options explained.
  • 07:12 Setting up the Docker infrastructure for the application.
  • 11:46 Running the Docker deploy script and building the application.
  • 14:10 Confirming the database and Next.js app are running.
  • 16:20 Final checks of server functionality and domain binding.
  • 17:38 Discussion of demo features, including data fetching and image optimization.
  • 33:40 Reflection on the trade-offs between self-hosting and managed services.
  • 38:41 Summary of Vercel's managed services compared to self-hosting.
  • 44:39 Conclusion and a call for viewer feedback.

Transcription

Let's walk through how we can self-host Next.js to our own infrastructure. We're going to deploy a Next app, a Postgres database, an Nginx reverse proxy, and more all to our $4 Linux VPS or virtual private server. All you're going to need for this tutorial is a domain name, a VPS that I'm going to show how you can purchase, Docker on your machine, which you can install on Mac with Homebrew, and that's pretty much it. We're going to SSH into our server and we're going to put a lot of stuff in there. We're going to put a Next app, the reverse proxy, our database, and I'm going to walk you through the whole process from start to finish and talk through some of the trade-offs as well. This server can fit so many containers in it. So quick little outline here. First, we're going to set up our server. Then we're going to talk about a VPS versus a dedicated server, some of those options. We're going to go through this deploy script that I've created. I'm going to go through some of the features of the demo app that I built, including pretty much all of the things that you all asked me about, how you configure them in Next.js when you're self-hosting. We're going to cover all of those. We're going to talk about some of the trade-offs of rolling your own infrastructure and managing that yourself versus using a cloud service. And then we'll also talk about some other options for self-hosting as well at the end. As always, the code is open source if you want to check it out and deploy your own. Before we get into it, I want to show the demo app we're going to be deploying in this tutorial. So it's got some of the features that you all have requested, optimizing images on the Next.js server, doing streaming with server components in the app router and using suspense boundaries. We've got a Postgres database. It's got some items. Nice. We've got caching and ISR set up. So if I view this demo of ISR, it's been fresh for 130 some seconds. I reload. 
Okay, now we've got fresh content and revalidated in the background. We've got middleware. We can run some code when the server starts up, environment variables loading, a whole bunch of stuff. We're going to get into all of those and show some code here in just a bit. So stick around. Okay, so first things first, we need a server. We need a virtual private server. There's a bunch of different places you can purchase hardware from. In this tutorial, I opted to use DigitalOcean. I actually didn't know that they offered a $4 a month VPS. It has very minimal hardware. I mean, 512 megs of memory is not a lot. But there's some pretty good options here if you're familiar with DigitalOcean already. There's also Hetzner, which has really, really affordable cloud infrastructure. Some of these are just ridiculous for how much hardware you can get for the price. Obviously, there's trade-offs to that that we'll get into, but there's some pretty good options here if you want to check out Hetzner as well. In this instance, I've gone ahead and purchased a droplet or a VPS from DigitalOcean, and I have this turned on and set up in my account. You're going to get access to a lot of things. The first thing you're going to see is these pretty nice graphs, which I think is a recent addition. It helps you monitor CPU usage, memory usage, your disk size. You can go in here and purchase some additional features or learn more about your server. But for right now, the main thing we need is this IP address. This IP address is how we're going to be able to go into our server and run some code. So let's take our IP address. I'll just copy this, and we're going to go into our editor. For this tutorial, I'm using NeoVim. You don't have to use NeoVim. You can use whatever editor you want, but I've got my demo application on the top, and I've got two terminals on the bottom that I'm going to use for some other things. So first, let's actually connect to our server.
So I'm going to do ssh root at, and then paste in our server IP. This is going to ask me for a password, so I'm going to paste that in. So I'll hit Enter, and we are in. So just like that, we're connected into our Linux server. This is going to be our playground, our sandbox for where we're going to set up our application. Before we do that, I want to quickly talk about some of the differences between virtual private servers and having your own dedicated server hardware. The biggest difference is that in a virtual private server, you have multiple clients who can all connect into the same hardware. So you're sharing that hardware amongst multiple tenants. On a dedicated server, of course, you're paying for that exclusivity. That's part of the reason why you can get VPSs at such an affordable rate, is because it's very affordable for these providers to offer this shared infrastructure. Now, there are some trade-offs here between VPSs and dedicated servers, namely in scalability, cost, and performance, overall control of your hardware. That's not to say that a VPS can't scale, and we're going to talk through some of those options here in a little bit. But generally, you can think of a dedicated server as being a little bit more expensive, being a little bit faster hardware, but getting a lot more control. That doesn't mean that a VPS is a bad option. It's just sometimes you need that beefy buff dog on the left there. Now, we're going to be deploying a Postgres database inside of our VPS. And it's worth noting that a lot of this applies to databases as well, too, namely in whether you host it on shared infrastructure, whether you have dedicated infrastructure, or you use a managed service. There's differences in reliability and ease of use and how much time you're spending yourself. 
If you're not really doing a lot with your database, and you're not having a lot of heavy IO (input/output), you're not doing a lot of reads and writes, it might be fine to host that on your same server. Some VPSs offer things like backups and snapshots of your database. For others, you need to have managed hardware or dedicated hardware that's going to take care of that for you. And then going all the way to managed database services, they're going to add a bunch of things on top like automatic backups and monitoring tools and all sorts of good stuff. I just wanted to quickly mention that because we are deploying a Postgres database, which is a bit chunkier than others. If you really want like the most lightweight database, you might check out SQLite as well, or SQL, as some say. Now there's two main ways of scaling your infrastructure, vertical scaling and horizontal scaling. With vertical scaling, you can basically give your hardware more resources, more CPU, more RAM, or more storage. And hardware is getting pretty good, so this can sometimes take you pretty far. On the other hand, you have horizontal scaling, where you can add more containers to your application and then have a service in front, like a load balancer, that's going to be able to route traffic between the different containers. Now, the biggest pro of vertical scaling is that operationally, it's just a little bit easier. But the con is that it's a single point of failure. So if you have something happen to your container, well, you're not going to have a good time. In the horizontal scaling instance, where you have multiple containers, this can prevent that single point of failure, and it can also enable zero-downtime deploys because you can have all of your traffic routed to one container at 100%. You can start a new deployment on another container, and only when that's finished, we can then route traffic over.
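The horizontal-scaling picture above — a load balancer in front of several app containers — can be sketched as an Nginx upstream block. This is a minimal illustration, not the video's actual config; the container names and ports here are hypothetical:

```nginx
# Traffic is distributed across the app containers; removing a
# server line here drains that container during a deploy.
upstream next_app {
    server web1:3000;
    server web2:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://next_app;
        # Disable buffering so streamed (chunked) responses pass through
        proxy_buffering off;
    }
}
```

With this shape, a zero-downtime deploy is just: remove one server from the upstream, rebuild its container, add it back, then repeat for the other.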
So we're going to talk through both of these a little bit more, but I wanted to kind of lay the landscape for how to think about scaling your infrastructure. Okay, so back in your editor, back SSHed into your Linux server, we're going to go ahead and set up all the infrastructure for your application using Docker. If you want to follow along, we have all the steps in the readme, and I'm also going to do it live here and talk through what's actually happening in the deploy script. So first, I'm going to run this curl. I'm going to copy that and go to our terminal, which is going to take that deploy script and put it on our server. Then we go back, and we're going to take this and change the permissions on our deploy script so that it's executable, so we can run it. And then we can now actually run the deploy script. But before we do that, I want to go into the deploy script. We're going to talk about it a little bit. So this is a bash script. This is not something that you necessarily have to use. It's more so an educational resource to teach you more about how all of this works. We'll talk through some other options as well at the end, but hopefully this gives you a better understanding of how everything works under the hood. So first, we have some environment variables at the top. We have our Postgres user password database name. We generate a random password for you. We have some other secrets for your application that are just for the demo app. And then these are the two that you're going to want to change, the domain name and then the email. So on your server, the way you can do this, a really nice way of doing this, is with vim. So in my SSH into my server, I can actually do vi deploy.sh, and that's going to open up this file. I can navigate down and up with J and K, and then I can go over with W, W, and OK. Now I'm on this value that I want to delete. I can do change in quotes. So CI and then the quote, I can change that value. 
Now I'm in insert mode and I can say myfancydomain.com, hit escape, which I have mapped to caps lock to go from insert mode into normal mode. So I can actually move back around in my file. And then kind of same thing here with our email. We want to change this out. So I can do change in quotes again and change this to whatever I want, hit escape. You can do U for undo if you mess something up. And then when you're ready to finish, you go into command mode with colon, write, quit, and you're done. If you want to learn more about vim, I have another free resource on my GitHub that walks through some of the commands. So now you've subbed this out with your own values. Now we can actually run the deploy script. And as this is running, I'm going to talk through some of the things that are happening behind the scenes. So it's going to go and clone our repository. So I've got some variables up here at the top. We are going to update all of the packages and install the latest. You can think of apt kind of like NPM, so it's a package manager you can use. We're going to set up some swap space. So this is a good time to talk about the $4 VPS woes. So my $4 VPS worked pretty well. And then eventually I actually ended up running out of memory. And that was because I had only 512 megs of memory. So setting up swap space made it easier for me to run builds and not run out of memory. So we're going to make that permanent actually. Next, we're going to go and install Docker and again, use apt to install all the things that we need. Going to go back over to here and we're going to hit enter so that we can continue on our script. Awesome. Okay, we're going to install Docker Compose, make sure that that's done, change the permission on it so it can be executable, make sure it's in our path. I'm going to fly through this because you don't necessarily need to know all of it, but it helps you understand the general shape of things.
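The swap-space step mentioned above usually boils down to a few commands like these, run as root on the server. This is a sketch; the 1 GB size is illustrative, and the exact commands in the author's deploy script may differ:

```shell
# Create and enable a 1 GB swap file so builds don't exhaust 512 MB of RAM
fallocate -l 1G /swapfile
chmod 600 /swapfile      # swap files must not be world-readable
mkswap /swapfile
swapon /swapfile
# Make the swap file permanent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```

You can verify it took effect with `swapon --show` or `free -h`.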
Then we're going to start Docker when our server starts up, which is great. Then we're going to clone the Git repository, which if you were following along down the bottom, that actually just happened. And now down at the bottom, we're seeing the Docker build starting. So clone the Git repository. We're going to set up some environment variables that we end up catting or outputting to a .env file. Awesome. We install Nginx, which is our reverse proxy. We set up an SSL certificate using Certbot. And then here is where we cat out the Nginx configuration, which is going to have rate limiting set up. It's going to have our SSL certificate. It's going to have proxy buffering disabled so we can do streaming. And yeah, now we get down in here to where we're actually going into our directory for our app where we cloned it. We're running Docker Compose, which is happening right now. You can see it's running the build in our application and we make sure that it worked. And then when it works, it outputs this message, deployment complete. And here's all the environment variables that got created. So let's just scroll up for a second here and see everything that happened. So here's the Git output where we cloned our repository. We can see all the things that were installed. Great. And now we see our Docker image that's getting built and run. So while this is going, let's actually talk about what that Docker image is. So the Docker image for my application, it's a multi-stage Dockerfile. The first stage, we're going to install all the dependencies. The reason you have multi-stages is so that if nothing has changed in the first stage, it can be cached and it doesn't have to run again. The second stage is we're going to actually build our application. So we're going to go into this directory. We're going to copy over all the node modules and then we run our build command. And then finally we have our production server.
So we copy over everything that we built, like the public assets, the standalone assets for our server, which we're going to talk about a little bit later, as well as exposing our port and running our server. So back in the bottom left, you can see we're almost done running our build for our application. While this is running, I'm going to talk a little bit about what this next standalone bit is. So if I go to our next config, I highly, highly recommend that you use output standalone. This is going to help reduce the output size of your Docker image by about 80%. It really only includes the stuff that you need. So output standalone, highly, highly recommend that. That's why, back in our Dockerfile, we were copying from this .next/standalone directory. And if you go to package.json, you'll see that my start command is actually starting up this standalone server as well. So down in the bottom left, you can see everything finished. It looks like things started successfully. We have our database, we have our Next.js app, and hey, I also have a cron job running. That's pretty nice. And we see our .env file was created with all of the values that we need. Now you see, I had three containers. How did I set up three containers? Well, to do that, I have a Docker Compose file. This is going to list out all the services in my application. So the first one is the Next.js app, which is depending on our database. And you'll notice that for both the web and the database containers, I'm using this network. The network is how these two services can talk to each other. So we've got our Next app. We've got our Postgres database, which has some env vars and ports. And again, it has this network. And then finally here, we have a cron. The image it's using comes pre-installed with curl, so you don't have to install that manually. And then it's using just built-in features of a server, crond, so that we can run a cron and periodically ping this endpoint.
You'll notice that the address here is web. So we're calling back to that first service, slash db slash clear. And this is going to run every 10 minutes and just clear out our database. And that's pretty much it for the Docker Compose file. So let me go back to our server. Seems like everything is successful. Our application is up and running. Let's just make sure that everything looks right. So I'm going to go cd into my app, change directory. And this is the name of the folder we chose. You can choose whatever you'd like. So now we're inside of this app. I can do docker ps. I can see the processes running. We can see we have these three running two minutes ago. But what I really wanted to show is if I list out all of the files here, we had that .env file. And I want to make sure that that got the right values. So let's do a cat .env. We see that we have our Postgres user, password, database, awesome. The database URL that kind of puts all those pieces together and then the other environment variables that were created. So that all looks good. So if we go back to our browser, we can just confirm that nextselfhost.dev is up and running. If I reload the page, I get a new Pokemon. So everything is working as expected here. We have not only our Next app running, our database, but also our reverse proxy. I've already pointed my domain's DNS, the A record from nextselfhost.dev to the IP address on my server. That's something you'll want to do as well to make sure that you can access it at the domain name that you have purchased and used. But that's pretty much it for the deploy script. Because we're using Docker, and this is something I want to highlight, this is very portable. You don't necessarily have to use a VPS. You can use your own dedicated hardware. You could use a managed container service where you just bring your Docker images and it can scale that for you. Some good options are things like Google Cloud Run or other platforms.
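A Compose file along the lines described — a web service, a Postgres service, and a cron container, all on a shared network — might look roughly like this. This is a sketch, not the author's actual file: the service names, image tags, env values, and the /db/clear path are illustrative, and the cron container is shown here as a simple sleep loop where the author's version uses crond:

```yaml
services:
  web:
    build: .                  # the multi-stage Dockerfile described above
    ports:
      - "3000:3000"
    env_file: .env
    depends_on:
      - db
    networks:
      - app_network
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydb
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - app_network
  cron:
    image: curlimages/curl:latest   # ships with curl pre-installed
    # "web" resolves via the shared network, so the cron container can
    # call back to the first service; here, every 10 minutes.
    entrypoint: ["/bin/sh", "-c", "while true; do sleep 600; curl -s http://web:3000/db/clear; done"]
    networks:
      - app_network

networks:
  app_network:

volumes:
  db_data:
```

The shared `app_network` is what lets `web` reach `db` by hostname and `cron` reach `web` by hostname, exactly as described in the transcript.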
So Docker is really a great skill to learn and just be aware of, because it's going to allow you to take control of your infrastructure and make it portable between multiple places. Okay, so we set up our Linux server. We talked a little bit about a virtual private server versus dedicated servers. We ran our deploy script. Our application is up and running. We've SSHed into it. Now let's actually talk through some of these demo features, talk through how we configure them when self-hosting inside of Next.js and kind of how things work. So the first one is data fetching. I'm able to fetch this random Pokemon. It runs, this page is server rendered. So I get a new Pokemon from the API on every single request. It's being served dynamically using a server component, and I don't actually have to configure anything here. I just wanted to show that as a demo. The second one is image optimization. So by default, when you use Next.js, you can optimize images on the Next.js server. So if I go over to our network tab and reload the page, what you're going to notice is this right here, which is the request back to our domain slash underscore next slash image. So you basically route back to itself, back to the server. You pass it the source of your URL. In this instance, it's a remote image from Unsplash. And what Next.js is going to do is it's going to take that raw JPEG or PNG, and it's going to optimize it and use more efficient formats, whether that's WebP or AVIF, if you would prefer. And it's going to add, you know, proper cache control headers on there, which you can configure if you would like, and just help your page load a little bit faster. Now, if you don't want to use this, you have some options. I wanted to quickly mention too, in Next.js 15, we made it so that you don't have to manually install Sharp on your server to do image optimization. In previous versions of Next.js, we recommended that you would install Sharp.
It's going to be a little bit more memory efficient, a little bit better for your self-hosted infrastructure versus the previous WebAssembly-based version. So now that's just one less thing that you need to do. But let's say, actually, I don't want to use the built-in Next.js image optimization. How could I bring my own? So inside of the page file for my index route, I actually have that image that I'm using. You'll notice I have the source of Unsplash, and I'm not configuring anything else. So I'm just using the default image loading here. And if I go to our next config, I can see under the images config, I have a few options. I'm using this remote patterns so that we can tell Next.js where we should allow images to be optimized for. You're going to want to be as specific as possible here so that you're only allowing your server to be used for images that you want it to optimize for. So try to make this as specific as you can, and also highly recommend this new option for search or the query params. If it's an empty string, it's just going to not allow any, but that helps prevent excessive usage of image optimization if somebody tries to add a bunch of different query parameters onto the end of your service. Now, let's say that you don't want to use the default image optimization. What you can do is take this loader and loader file. We're going to uncomment this, and we can do a custom loader, and we can provide the file that we want. So let me go to image loader TS here. This client component allows you to basically take the source for your image and kind of construct it in any way you want. So you get the source, you get the width, you get the quality, you're going to return back what gets forwarded to the image component. This could use something like Cloudinary or some other cloud service, or it could also be your own image optimization service that you have on a different container or on different hardware somewhere else. 
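A custom loader file of the kind described is just a function from `{ src, width, quality }` to a URL string. Here is a minimal sketch that builds a Cloudinary-style fetch URL — the `res.cloudinary.com/demo` account and the transformation parameters are hypothetical stand-ins, not from the video; substitute your own image service:

```typescript
// image-loader.ts — a hypothetical custom loader for next/image.
// Next.js calls this once per rendered size and uses the returned
// URL as the <img> src, bypassing the built-in /_next/image endpoint.
export default function imageLoader({
  src,
  width,
  quality,
}: {
  src: string;
  width: number;
  quality?: number;
}): string {
  // Width and quality become transformation parameters, and the
  // original source URL is appended at the end.
  const params = [`w_${width}`, `q_${quality ?? 75}`].join(",");
  return `https://res.cloudinary.com/demo/image/fetch/${params}/${src}`;
}
```

In the next config you would then point `images.loader` at `'custom'` and `images.loaderFile` at this file, so every `<Image>` on the site goes through this function instead of the server's own optimizer.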
Okay, now let's look at streaming server components. So in this demo, we have an async component, and then we're using suspense around those components to delay one for every second. So one, two, three, four, five. And then if you remember from our deploy script, when we set up our reverse proxy, we disabled proxy buffering. We don't want to buffer responses. We want to be able to stream responses, and that allows us to be able to send in this content in chunks. So back in our editor, if I go to streaming, go to this page, start down here at the bottom. So we have all these different suspense boundaries, and each one of them, if we go up to here, has this loading card, which is just displaying some information. But the async data component is going and fetching some data and then displaying it. And all we're doing here for fetching data is just doing a promise with a set timeout of a second. Now, one thing to note here is that if we go to our config file, we want to let Nginx do the compression. So we're going to disable compression in our next app so we can prevent the buffering of the streaming responses. Now let's look at how we can talk to our Postgres database that we set up. So in this demo, I'm using Drizzle as an ORM to connect to our database. And if I click on this, we're going to see some values I've already saved. I can add something new. I can reload the page. I can delete things, just basic CRUD actions. And I go check out our editor. Here's the page for that file. We are going and fetching all the to-dos from the database. We have a form that uses a server action to add a to-do, and then we loop over all of the to-dos and list them out. For our actions, pretty simple. We just have one to add, which inserts and then revalidates the path. And then the same thing with the delete to-do action. Now, if we want to just validate that this is working, we can actually go in here and we're going to go into our Docker container. 
So actually, let me go back to our README. I've included some kind of helpful commands here if you want. So one of them you can use is this docker exec, which is going to go into myappdb1. It's going to run psql with the user that you provided and the database name that you provided. If you've changed that, of course you'd want to change this here. So I'm going to take that and we're going to go into here and I'm going to enter into our Docker container for Postgres, and I'm going to connect to our database. So for example, I can see the users I have. I can see the tables I have, with one row in my to-do table. So awesome to see that that works. That's pretty much it for Postgres. Okay, next we have ISR and just in general, caching in Next.js. And this is for both the pages and the app router. This is probably the one, I think, I've seen the most questions on that people have the most confusion about. So I want to talk first about how it works by default, and then what you can do to customize it if you would prefer. So by default, the ISR setup in Next.js uses a least recently used cache. So things are cached in memory. And then you don't really have to configure anything here. This is just going to work out of the box. So let me show you an example of what this looks like. If I go here, this page has been fresh for 1200 seconds. And we have some information about the Pokemon, but the revalidation time, the freshness time of this page was actually 10 seconds. So when I landed on this page, it became stale. And when I reload the page, I get new data. I've been yapping here for too long, but now if I go back here and get this new value, and of course I can also click revalidate and manually revalidate it. I think I have a bug there with my timing and get new information back as well. So let's take a look at what this looks like. On my ISR page, let me skip first to here. So I'm getting a Pokemon, I'm getting the date it was generated at.
And then I put some information out from the data I got back and I had this little freshness timer. And then I have a form also to call revalidate, which is going to manually call revalidatePath on slash ISR. Or I can just have either a time on my fetch or something defined at the route segment level, which is a fancy way of saying like at this page. So I could do export const revalidate 10 as well. So fetching a Pokemon is gonna have a freshness of at max 10 seconds. Now, again, this works without having to configure anything. It's in memory, but this is on a single container. So let me go over to my next config. If you want, you can go in here and change the cache handler. So you can bring your own cache handler. So what you can do, let's comment that out so we can see it or re-comment it in. You can provide a cache handler file. And then we can say, actually we wanna disable in-memory caching. Now, when you do this, this allows you to configure it however you want. You can save this to durable storage. You can use Redis. You can do pretty much whatever you want. And there's actually a really helpful community package that sets up Redis using this custom cache handler. What I've done in this app is created a very basic version that saves a cache to a .cache directory. And it has some console logging, just so you can kind of see what this looks like. The bulk of this cache handler has a couple things. It has a get, where we go based on a key. We can try to go get something from the cache. It has a set. It has a revalidateTag. Now you'll notice there's revalidateTag, but there isn't revalidatePath. RevalidateTag is the only thing here. That's because with the Next.js cache, the paths are actually just tags. So slash ISR is basically just a tag to the Next.js cache handler. Okay, let's go back to our next config. And I wanna make sure we save this file where we've uncommented and we've disabled in-memory caching for our custom cache handler.
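To make the shape of that file concrete, here is a stripped-down sketch of a custom cache handler with the three methods discussed — `get`, `set`, and `revalidateTag`. The real Next.js handler interface is asynchronous and stores richer metadata; this synchronous in-memory version only illustrates the shape, and in particular why a path like `/isr` can be treated as just another tag:

```typescript
// A minimal, synchronous sketch of a Next.js-style cache handler.
// The real interface returns Promises and carries more options.
type CacheEntry = { value: unknown; tags: string[]; lastModified: number };

export default class CacheHandler {
  private cache = new Map<string, CacheEntry>();

  get(key: string): CacheEntry | undefined {
    return this.cache.get(key);
  }

  set(key: string, value: unknown, ctx: { tags?: string[] } = {}): void {
    this.cache.set(key, { value, tags: ctx.tags ?? [], lastModified: Date.now() });
  }

  // There is no separate revalidatePath: to the cache handler,
  // a path like "/isr" arrives as just another tag.
  revalidateTag(tag: string): void {
    for (const [key, entry] of this.cache) {
      if (entry.tags.includes(tag)) this.cache.delete(key);
    }
  }
}
```

Swapping the `Map` for Redis or a `.cache` directory on disk (as in the author's demo) is what turns this sketch into a durable, multi-container-friendly cache.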
So I'm gonna save here. And then I'm also going to, in this other terminal, do bun run build and bun run start. So we're gonna do a production build, and then we're gonna start and emulate our production server, so we get the exact semantics of what our ISR behavior will look like in production. So we'll wait for this to finish and start up our server. Great, let's clear out these logs. And I'm gonna go back here. We're gonna do localhost:3000. We're gonna go to the demo. And we see zero seconds. So as a reminder, it was a 10-second period for freshness. So if I reload in here a couple of times, everything's fine. But then once 10 seconds pass, okay, now it's stale. I reload: stale. On the next one, we've revalidated in the background and updated the cache. So what I've done in this custom cache handler, basically, is provide some console logs just so you can kind of see what's happening behind the scenes. We initialized the directory. There were these cache misses. We set the cache data, and we can see even the tags that we provided. This _N_T is an internal tag prefix used by Next.js. I reloaded the page a bunch of times, as we see with all these cache hits. And then it was stale, so we cleared out the cache, invalidated it, and set new data. So you don't necessarily need to do this. I just wanted to provide a little bit of visibility into how caching works behind the scenes and what this file that Next.js essentially provides for you actually looks like. And you can continue using the default in-memory caching for ISR. Okay, next up, we have middleware. So we have a /protected route in our application that is protected by a cookie. So let me go here and pull up the middleware. Pretty simple middleware. All it's doing is saying, hey, if we don't have the protected cookie set to one, just redirect back to the index route. And this is only running on the protected route. So here I'm on the root route. I click view demo.
It's just going to redirect back to itself. So what I want to do is go to Application > Cookies, and we're going to add protected with the value one. And now if I click view the demo, I can actually see our protected page. Now, another thing that I'm showing in here, which is nice, is an environment variable. So if I go to the protected page, all I'm doing inside of this client component is reading this environment variable and displaying it on the page. Now, you'll notice that it's prefixed with NEXT_PUBLIC. This prefix means that we are intentionally exposing this to the client side, to the browser, and we want it to be bundled as part of creating our build, creating our Docker image. And just to make that point more obvious, if I go in here and do view source, I can search for safe key. So we see the label here, and then we see the actual value that's been included in the HTML for our page. I'm actually going to jump down to the bottom one, then we'll come back up. So the other bit on environment variables is the opposite of that, which is: I only ever want these environment variables to stay on the server. The value here is process.env — we're reading my secret. Let's go take a look at what this looks like. This is on the page here. So I have secret key equals process.env.secretKey. This is running in an async server component, not a client component. So we're able to read these values dynamically and use them without bundling them for the browser. Now, there's another case, which I think is pretty common, which is the last bit in this demo: wanting to run some code on server startup. Next.js has this pretty handy file that's being stabilized in Next.js 15 called instrumentation. The primary use for this is actually observability tools, but it can also be really helpful when self-hosting, because you can run this code on startup, for example, to register and call some external API and set some values.
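As a rough, framework-free sketch of that startup pattern — the vault URL, the VAULT_ADDR/VAULT_TOKEN variable names, the response shape, and the globalThis.secrets name are all assumptions for illustration, not the exact code from the demo:

```typescript
// Hedged sketch of instrumentation-style startup code: fetch a secret once
// at boot and stash it for later server-side use. The fetcher is injectable
// so the startup logic stays testable without a real vault.
type SecretFetcher = () => Promise<string>;

const fetchFromVault: SecretFetcher = async () => {
  // VAULT_ADDR / VAULT_TOKEN and the response shape are assumed names.
  const res = await fetch(`${process.env.VAULT_ADDR}/v1/secret/app`, {
    headers: { 'X-Vault-Token': process.env.VAULT_TOKEN ?? '' },
  });
  const body = await res.json();
  return body.data.apiKey as string;
};

async function register(fetchSecret: SecretFetcher = fetchFromVault) {
  // In Next.js this would live in instrumentation.ts and run on startup.
  (globalThis as any).secrets = { apiKey: await fetchSecret() };
}
```

In the demo, a value loaded this way is then read back in a server component, for example to pass along in an authorization header.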
So for this demo, what I did is set up a global, and I have these environment variables for HashiCorp Vault, and I go ahead and make a request to my vault. I forward along the API key I have, and then I get that value back and set it as secrets.apiKey, with the type defined up here at the top. Now, back in my page, that means I can read this value and use it, maybe to pass to some authorization header, or maybe somewhere else in my application. This is a pretty common thing that you might wanna do. And we could, of course, make this better than putting it on the globals, but this is just one example of how you could do this pattern. The last two things in the demo I wanna show: one is the cron, and the second is rate limiting. So for the cron, if we go back to our Postgres database — if you remember, it was clearing out the data every 10 minutes. So this should be deleted by now. If I go to view a demo, we see, okay, there's nothing here, and I can still add new items if I want. This happens through this route handler, which our cron service calls: it does db.delete on the to-dos table and then revalidates the path. So that's pretty nice. The last thing I wanna show is rate limiting. So when we set up our deploy script, we added some rate limiting rules to our Nginx config. So if I go ahead and just spam a bunch of requests to nextselfhost.dev using this load testing tool — we're gonna do two threads, 10 connections, run it for five seconds — we can see that of the 760 requests that were made in that time, 691 of them were blocked. So we have that rate limit in place that's gonna prevent some of that traffic. Of course, you can get much more complex with how you wanna prevent bad actors on your site, but this is just a pretty basic example of how you can use Nginx for this. All right, back to our outline. I think we've covered mostly everything. We're now through all of the demo features.
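For reference, an Nginx rate limit of this general shape looks something like the following — the zone name, rate, and burst values here are illustrative assumptions, not the exact numbers from the deploy script:

```nginx
# Illustrative Nginx rate limiting (values are assumptions, not the demo's).
# Track clients by IP in a 10 MB shared zone, allowing 10 requests/second.
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    listen 80;

    location / {
        # Allow short bursts of 20 requests; excess requests are rejected
        # (503 by default, configurable with limit_req_status).
        limit_req zone=mylimit burst=20 nodelay;
        proxy_pass http://localhost:3000;
    }
}
```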
I wanna talk about some of the trade-offs between this setup and using managed services, and then also go through a couple of other options at the end. And to help visualize this, we're gonna go on a little journey here with some Pokemon references. I don't know why, but this worked well in my brain, so I've been building along with it to help visualize what it looks like to scale your own infrastructure and when you might choose different services. So go on this journey with me. We have our most basic Charmander version here: a VPS with a Docker container that has all of the different parts of Next.js. It has rendering, it has image optimization, it has caching and ISR, all together in that one container. If you remember from the start, we also have another container with our database and another container with our cron service. Now, when your application starts to get a lot of traffic, and depending on the hardware that you want, you might want to break out these pieces. You don't necessarily want your server spending time on rendering and having that compete with doing your image optimization or reading and writing to memory for ISR. So one next step for scaling your infrastructure might look like vertically scaling your hardware and then horizontally scaling your different Docker containers. So, for example — this is the Charmeleon — you might have one container that's doing just rendering. You've taken the image optimization, using that custom loader that we talked about, and split it out into its own service. So you could use a tool like IPX, which uses Sharp, and all that does is act as a standalone API, basically, that's optimizing your images. Or you could call a cloud service. So you can mix and match self-hosted versus cloud infrastructure if you want here. And for the last part, we talked about that community Redis adapter, or rolling your own custom ISR caching setup.
You could do that as well on a separate instance, where you're splitting out that Redis box and communicating back and forth between Redis and your Next.js app. Now, if you keep going and you keep scaling and you want to go even further, then you might end up with something like a Charizard, where you've got a dedicated server and you're running Kubernetes, which is kind of like that horizontal scaling diagram we showed, where you can have multiple different instances of your Next.js container. So you might have five of those containers doing rendering. You might have another one doing just image optimization, another one or two or three with your database, another one with Redis, all of these working together. And what Kubernetes is gonna allow you to do, with your load balancer, with your proxy or your Nginx, is send traffic between those different services, which will give you those zero-downtime rollouts and a bunch of other nice features. Now, that's not to say this is the only way to scale this infrastructure, but it was helpful for me to put pen to paper here, or put pixels on the screen, of what it might look like as you grow and grow. Because there's nothing wrong with the Charmander version here. If your goal is to save money and to run something as cost-efficiently as possible, this can be a really good option. It's just putting it on your own infrastructure. You're trading off your own time to configure some of that infrastructure yourself. And maybe you really like doing that. That's totally fine. When I said, hey, I'm gonna make this video about a VPS, a lot of folks were asking: well, you work at Vercel. You obviously work for a company that's a front-end cloud, that's a managed provider. Why would you teach people how to do this? And I think it's really important that people in the Next.js community have options.
They're able to understand and know how the infrastructure works. They understand that this open source framework is portable and can go multiple places. And I also think it's helpful because you can then understand a little bit more of what Vercel is doing behind the scenes for you in terms of the infrastructure setup. So just to visualize that a little bit: so far we've been talking about a single-container, single-region application, and that can get you pretty far for a lot of apps. If I'm building my blog, maybe it doesn't necessarily need to be fast globally. What Vercel is gonna do, even on our free tier, is help make your application fast all over the world. So a user makes a request to your website. We have a firewall at the network level that's gonna first take action at the point-of-presence level — there are hundreds of points of presence around the world. Next, it goes into one of our regions, like US East, or one in Europe or APAC. And then we go into basically a managed Kubernetes instance for you. So everything that we talked through here with scaling up your infrastructure — it's kind of like what Vercel's DevOps team is doing behind the scenes for you. We have this Mega Charizard that is running Kubernetes for you. It's handling HTTPS, it's handling TLS. It has some default system DDoS protection with our firewall. You can go in and add custom firewall rules for rate limiting, or blocking IPs or ASNs or JA4 digests, or really anything you wanna use. We have our own routing system, which is this proxy that can do redirects or rewrites, add headers, and run some middleware at the router level versus running it down at the compute level. We have a cache for your page responses and ISR, which is the separate service that's been plucked out, like we talked about up here. We have this separate service. And then it makes requests to the other pieces.
They're horizontally scaled across your application. So you have the compute here, which is Vercel Functions. You have image optimization, which is a separate service. And then you can also store your assets in object storage. So across all of these different vectors, you can take each one of them and vertically scale them as well. So if you make 10,000 requests to a site on Vercel, we're gonna have more functions that run. And you can have multiple requests that go into one function. So what Vercel is trying to do is make it very easy and scalable for you to take your infrastructure and run it, and just kind of focus on building your application. But I think it's important to call out: sometimes you don't need the Mega Charizard X; you might just need this Charmander, and that's fine. And I think you should have options. And hopefully now, after walking through this, you understand this infrastructure a little bit better. You understand what Vercel is providing, how to set this up on your own, and you ultimately walk away feeling something like this: I mean, we are so lucky to have so many different great options for managing your own infrastructure. The hardware has gotten so fast. There are so many great tools that you can use, or you can use a managed front-end cloud like Vercel, which has a free tier and a $20-a-month tier if you wanna scale even further. And both of those options, I think, are really great. Going back to some of the differences between managed cloud services versus rolling your own infrastructure: I would say the biggest one, at least for Vercel, is this idea of framework-defined infrastructure. When you're deploying a Next.js app to Vercel, you're actually not writing any of that deploy script. You're not writing any infrastructure as code like Terraform. You're just writing your Next.js app.
You're just writing the framework code, and Vercel's taking care of all of the other bits of actually hooking up the pieces of your code, like the ISR caching or the rendering of your pages and streaming those functions. We're doing all that stuff for you so you can just get back to building your application. And I think that's one big vector here. The second one is around cost controls. So you wanna be able to predictably understand how much you're spending and get visibility into that spending as well. And we recently launched spend management, which allows you to set both soft and hard caps on your account. So if you only wanna spend $60, you can only spend $60, and your site gets paused when you've exceeded that traffic. That's kind of similar to: oh no, I've run out of space on my VPS, or my hardware can't keep up with this demand. So in the VPS case, your site goes down, but maybe you're actually okay with that because you didn't wanna spend more than $60. So it's kind of a similar thing: you have that off button if you really want it. I think that's really important for cloud platforms to have. And we have a whole bunch of other tools for getting alerts, visibility, and observability into your usage and into your bills. And I think that's an important part of this as well. Another thing to mention is that if you don't wanna run a server at all, you can just deploy your Next.js app as a set of static assets, like a single-page app. So just the HTML, JavaScript, and CSS files — that's even more portable than having a VPS. You can basically put that anywhere. That's also an option that exists if you wanna do that, but it does limit the features you can use. Of course, you can't server render a page, you can't use ISR, and you can't do image optimization without using a cloud service.
So it's just worth noting that it's a little bit different if you're gonna go that approach, but it does exist if you wanna build a single-page-app-only application. There are also other adapters in the community for taking the Next.js output and deploying it to other places, whether that's other cloud platforms or maybe serverless infrastructure on AWS. Shout out to SST for their work and community here in helping push that forward. There are also other options that you can use to give you kind of a Vercel-like experience on your own infrastructure. One of them is called Coolify, and it gives you some of the things that Vercel is doing for you, like the Git integration. Another one that I really like, and that I probably would use instead of this deploy script, is Kamal, which the team over at 37signals and the Rails team and DHH have put a lot of work into. It's basically a productionized version of that deploy script that I have: you define it with some config, and it's gonna help you do the zero-downtime deploys, all deployed on your VPS or your single server. And I think with Kamal, too, you can now run multiple apps on one VPS as well. So that's worth checking out. Lastly, two quick shout outs I wanna give. One, thank you to Shane for the suggestion for image optimization on a separate box with IPX. Also, thank you to the IPX team for making it very easy to use. And secondly, thank you to Brandon from Flightcontrol, who wrote up this really good post about self-hosting Next.js and gave us some feedback and ways that we could improve the experience. Very happy to say that we've shipped improvements to some of the things outlined here — in image optimization, in caching, in ISR control — with more coming soon to improve the self-hosting experience. So thank you, Brandon. Really appreciate your feedback here. And I think that covers everything. I know this was a pretty long video.
So if you made it this far, thank you. Feel free to leave a thumbs up or subscribe if you wanna see more content on Next.js. Hopefully this was pretty comprehensive and answered all your questions around self-hosting and how you can configure all of the pieces of Next.js to work on your own cloud infrastructure — your $4, $5, $6, or $8 VPS, or whatever hardware you wanna use. That's all for now. Let me know in the comments what you'd like to see next. Peace.