Reducing a Docker image from 1.2 GB to 10 MB - practical tips (video, 7m)
In a recent video on the Better Stack channel, the creator shares his experience optimizing Docker images, shrinking his image from 1.2 GB to just 10 MB. As he puts it, every megabyte counts: image size affects not only storage costs but also deployment times, scalability, and security, which matters especially when using tools such as Kubernetes. The presentation uses a Node React application as its example, but the tips apply to all Docker images.
The first key step is choosing the right base image. The node:latest image weighs over 1 GB, which is overkill when the Alpine variant is only 155 MB. Simply appending "-alpine" to the base image saves about 80% of the size while keeping the minimal components needed to run the application. Compatibility issues can occasionally arise, but on the whole the minimal variants are more efficient.
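A minimal sketch of what that one-line change looks like (the tags follow the video's description; in practice you would usually also pin a version, e.g. node:20-alpine):

```dockerfile
# Before: full Debian-based image, over 1 GB
# FROM node:latest

# After: the Alpine variant, about 155 MB as a base
FROM node:alpine
```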
Next, the video covers layer management during the image build. The author shows how to take advantage of the layer cache: copy 'package.json' first instead of the whole application, so that unchanged layers can be reused on rebuilds, which significantly speeds up build times. He also stresses understanding how Docker layers work and why deleting files in separate 'RUN' commands does not change the final image size.
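A hedged sketch of the reordered Dockerfile described here; the lockfile name and the build script are assumptions based on a typical Node setup:

```dockerfile
FROM node:alpine
WORKDIR /app

# Dependencies change less often than code, so copy only the
# manifests first; this layer and the install below stay cached
COPY package.json package-lock.json ./
RUN npm install

# Only the layers from here down are rebuilt on a code change
COPY . .
RUN npm run build
```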
What ends up in the final image can also affect build time and security. It is worth using a .dockerignore file to exclude irrelevant files such as 'node_modules', which shrinks the build context. The author also emphasizes combining operations into a single step to avoid leaving unnecessary files behind in individual layers.
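A sketch of a .dockerignore along those lines; node_modules and keeping secrets out come straight from the video, the remaining entries are typical assumptions:

```
node_modules
dist
.git
.env
*.log
```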
Finally, the author demonstrates how multi-stage builds (multistage builds) save space. In the last build stage, a lightweight Nginx image hosts the static files, bringing the final image down to 57 MB. He closes by recommending tools such as Dive and Slim, which help optimize images and manage containers; running Slim's build command on the optimized image is what gets it down to the final 10 MB. At the time of writing, the Better Stack video has 285,265 views and 11,520 likes, a sign of strong interest in the topic.
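A minimal multi-stage sketch consistent with the description above (the stage name, paths, and the dist output directory are assumptions for a typical React build):

```dockerfile
# Stage 1: build the React app with the full Node toolchain
FROM node:alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: copy only the static output into a tiny Nginx image;
# everything from the builder stage is discarded
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
```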
Timeline summary
- Introduction to reducing a Docker image from 1.2 GB to 10 MB.
- Sharing recently learned tips aimed even at experienced developers.
- Outline of Dockerfile best practices.
- Why image size matters and what it impacts.
- Using a Node React application as the case study for the tips.
- Choosing a better base image, highlighting the drawbacks of node:latest.
- Advantages of using Alpine images for their smaller size.
- Why Alpine is efficient: everything except the bare essentials is stripped out.
- Demonstration of proper layer caching for faster builds.
- The importance of restructuring the Dockerfile so dependencies are cached.
- Using .dockerignore to skip unnecessary files.
- Explanation of layers and why reducing them matters.
- Introduction to multi-stage builds for optimal image size.
- How to use Nginx effectively to serve static files.
- The final image size achieved with the multi-stage build.
- Introducing tools such as Dive and Slim for optimization.
- Encouragement to subscribe and engage with feedback.
Transcription
I want to take you through the steps that I use to take my Docker image from 1.2GB to just 10MB. The last few tips I actually learned recently, despite having used Docker for years, so even if you're an experienced dev, stay tuned for those ones. We'll also be looking at some Dockerfile best practices along the way as well. But first, why should you care? Well, it's because every megabyte counts. It doesn't just impact storage costs, but it can also impact deployment times, scalability, and even security. This is all amplified if you use tools like Kubernetes as well. In this example, I'll be using a Node React application, but these tips apply to all Docker images.

For the first tip, let's talk about your foundation. In our React application here, we started with node:latest, which weighs in at over 1GB as an image. The same goes for most of these images that include everything, like Python, for example. It's like using a cargo ship to deliver a letter. It's way too much. So let's see how we can choose a better base image. For a lot of images, Node and Python included, I can simply append "-alpine". This image is only 155MB, and using it in our build reduces it down to just 250MB. So about 7 characters there, and we're already down 80% in size. But you might be wondering why that works. Alpine is purpose-built for containers. Essentially, Alpine stripped out everything except the bare essentials needed to run your applications. Now, it's not always the best choice. Since it does use different system libraries to achieve this than the standard Linux distributions, it can sometimes cause compatibility issues, especially when you're doing something that works with native modules. But with pretty much all popular images, you should be able to find a variant that is more minimal than the standard one. You can also use the full image for your development container, but then work out what's needed and then use a minimal one for production.

A cool shout out here to distroless images. These are Google's take on minimal images. Distroless images contain no operating system at all. You get no shell, no package manager, not even basic Linux commands. It's just your application and its runtime. These are a little more complex to set up, though, so I'll stick to the Alpine variant. But I'll leave the links to that project in the description down below.

So now that we've got our Alpine base image, let's talk about speed. Every time you change a single line of code, you shouldn't be waiting minutes for Docker to go ahead and rebuild and reinstall all of your dependencies. So let's fix that with some proper layer caching. So here, as an example, I made a small change in my React application. I changed a line of text, but it's rebuilding everything from scratch when I go ahead and run the Docker build command. This is because each instruction in your Dockerfile creates a new layer. The magic happens when Docker is able to reuse the layers that haven't changed from a previous build. So let's rewrite our Dockerfile to take advantage of layer caching. So in this file, the difference here is that we're copying just the package.json first. This is because our dependencies change less frequently than our code. Now on a rebuild, the dependency step can reuse the cached variant from its previous build and only the code build step needs redoing. The same would go if you use something like requirements.txt in Python as well. Just so you know, three things trigger a cache invalidation.
One, changes to the file that you're copying. Two is changes to the Dockerfile instruction. And three is changes to any previous layer. This is why order matters. Put your most stable layers at the top and then your most frequently changing ones at the bottom.

So we've optimized our base image and we're utilizing layer caching. So let's look at removing some unneeded files. The first tip is simply to use a .dockerignore. When I use npm install locally and I'm developing locally, then in my Dockerfile, I go ahead and copy everything over. It's carrying a lot of extra folders that aren't actually needed, like node_modules, for example. You can actually see this in the build command where it tells you how much context was transferred over. This is actually entirely useless for building my application because, as you can see, we're reinstalling the modules within the image. So we need to go ahead and make sure they aren't included. So let's set up a .dockerignore. I'll add in the files that I know aren't needed in the image. So that's things like node_modules. And it's also good practice to make sure your secrets aren't going in there, too. Now, when we run the build command again, you can see we transferred way less build context. This could speed up your build significantly, especially on larger scale applications.

Next, I want to explain a little bit about layers, as I think it's important to know before we move on to multistage builds. So let's talk about layer squashing. If we look into our container, we can actually see a few files that we don't want in there. With this application, if I wasn't using the preview, I wouldn't even need that node_modules folder as my application has already been built into static files. We'll look at that in a bit. But for now, let's focus on what would happen if we wanted to clean up those extra folders. We can clean the cache, remove temp files, remove node_modules. So we've removed some extra folders there. So we should have a smaller image, right? Oh, we don't. This is where we need to talk about how Docker layers actually work. The separate RUN commands here don't save additional space in the final image. In Docker, each layer is immutable and contains only the changes from the previous layer. When you use separate RUN commands like this, even though you're deleting files in the later layers, the files still exist in those earlier layers. Docker can't actually remove data from previous layers because they are immutable. The deletion in a new layer only marks the files as not accessible in the final container, but they still consume space in that image. If we moved all of these into one RUN command, though, all of these operations are now happening in a single layer. When that layer is committed, it only contains the final state with the cleaned up files, not any of that intermediate state with the extra files. This could even have security implications if you're copying over .env files, which you shouldn't be doing, by the way. But if you did and then you removed it on another line, it would actually still be findable in the image. Someone could go in there and extract those secret files. So that's even more space saved. But now my app personally won't actually run in this state. As I mentioned earlier, I deleted the node_modules folder, but the application relied on the preview to go ahead and host those static files that in my case are HTML files. So I now need to host them again.
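To make that layer point concrete, here is a hedged sketch of the two variants; the exact cleanup commands from the video aren't shown, so these are illustrative:

```dockerfile
# Separate RUN commands: the deletions land in new layers, so the
# files still take up space in the earlier, immutable layers
RUN npm install
RUN npm cache clean --force
RUN rm -rf /tmp/*

# One RUN command: a single layer is committed containing only the
# final, cleaned-up state
RUN npm install && \
    npm cache clean --force && \
    rm -rf /tmp/*
```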
This is the number one Dockerfile tip, and that is multistage builds. When my application has been built, I actually only need the distribution files and then some way to serve what in my case is static HTML files. So I don't need anything else from Node, npm or anything like that to actually go ahead and run the final container. Instead, then, what I'll do is I'll add a line that says FROM and then I'll use Nginx to go ahead and host my files. The FROM keyword here separates this out into separate stages. The magic here is that everything that happens in that builder stage, so Node.js, npm, node_modules, your source code, it all gets thrown away. The final image only contains your built assets that we can copy over from that stage and then the Nginx image itself, which is very small. So now I'm down to just 57 megabytes. Super easy, right?

Now, I can understand that Dockerfiles can get super complex. So I want to shout out some tools that can help you out with this process. First, we have Dive. This is an image explorer that will let you look at the individual layers that you've created. You can use this to help you find ways to optimize and debug your builds. Now, we also have Slim. Slim allows developers to inspect, optimize and debug their containers. You don't have to change anything in the container image itself, and you can minify it by up to 30 times while making it secure as well. Optimizing images isn't the only thing it can do, though. It can also help you understand and author better container images through tools like X-Ray and linting. I highly recommend this tool. I actually used the slim build command on the improvements that we made here, and I got it down to the final 10 megabytes. There we go.

If this video helped you out, do go ahead and subscribe and let me know your favorite tip in the comments down below. As always, see you in the next one.
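For reference, typical invocations of the two tools mentioned; the image name is a placeholder and flags can differ between versions, so check each project's docs:

```sh
# Inspect an image layer by layer with Dive
dive my-app:latest

# Minify an image with Slim (formerly docker-slim)
slim build my-app:latest
```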