
In his latest video, Jeff Geerling tackled the topic of building Raspberry Pi clusters, which sparked a great deal of curiosity among his viewers. Many people asked why it's worth building a cluster at all, especially when more expensive, more powerful alternatives exist. Jeff began by explaining what a cluster really is. He pointed out that it isn't a merging of the computing power of two machines, but rather a group of similar, or even quite different, computers coordinated to perform specific tasks. The key is that the work must be divided among cluster members, which doesn't work for some software, such as games.

Jeff described how he himself uses Raspberry Pis for various tasks on his home network. For example, he runs Prometheus and Grafana on one Pi to monitor air quality and power consumption. Other Pis handle backups of his data or act as a web server. He emphasized that many applications run well on ARM processors, which makes the Raspberry Pi an attractive choice for many users. Instead of building a conventional PC, Jeff argues, you can get interesting and effective setups out of cheaper but capable Pis.
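The video doesn't show Jeff's actual configuration, but the kind of monitoring he describes can be sketched as a tiny custom Prometheus exporter. Everything here is illustrative: the metric names, the sensor values, and the port are assumptions, and a real deployment would read from actual hardware.

```python
# Minimal sketch of a custom Prometheus exporter, similar in spirit to the
# air-quality and power monitoring described in the video. The readings are
# hard-coded stand-ins for real sensor data.
from http.server import BaseHTTPRequestHandler, HTTPServer


def read_sensors():
    """Placeholder for real sensor reads (hypothetical values)."""
    return {"indoor_air_quality_ppm": 412.0, "household_power_watts": 348.5}


def render_metrics(readings):
    """Render readings in the Prometheus text exposition format."""
    lines = []
    for name, value in readings.items():
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics(read_sensors()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass


# Bind to an OS-assigned port for the sketch; a real exporter would pick a
# fixed port and Prometheus would scrape http://<pi-address>:<port>/metrics.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
# server.serve_forever()
```

Prometheus then stores the scraped time series, and Grafana queries Prometheus to draw the dashboards.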

Moving on to the general advantages of clusters, Jeff highlighted two main points: uptime and scalability. Clustering increases the availability of services, which matters when one of the servers fails: the cluster keeps working even when some of its components are broken. Jeff himself plans to run Kubernetes across his cluster, which will improve its redundancy further. That way, after a failure, everything can be repaired and restored to normal quickly.
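The uptime argument can be made concrete with a toy scheduler. This is not Jeff's setup or how Kubernetes is implemented; it is a minimal sketch, with invented service and node names, of the one idea that matters: when a node dies, its services are rescheduled onto the survivors.

```python
# Toy illustration of cluster failover: services are spread across nodes,
# and when a node fails, everything it ran moves to the remaining nodes.


def schedule(services, nodes):
    """Spread services round-robin across the currently healthy nodes."""
    if not nodes:
        raise RuntimeError("no healthy nodes left")
    return {svc: nodes[i % len(nodes)] for i, svc in enumerate(services)}


def fail_node(services, nodes, dead):
    """Drop a dead node and reschedule all services onto the survivors."""
    survivors = [n for n in nodes if n != dead]
    return schedule(services, survivors), survivors


# Hypothetical services and nodes, loosely modeled on the video's examples.
services = ["pihole", "grafana", "backups", "webserver"]
nodes = ["pi-1", "pi-2", "pi-3"]
placement = schedule(services, nodes)

# pi-2 "blows up": its services move, and the others keep running.
placement, nodes = fail_node(services, nodes, "pi-2")
```

A real orchestrator adds health checks, persistent state, and restart policies on top, but the failover logic it automates is essentially this.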

Savings and efficiency are also worth considering. Jeff compared the price of a Raspberry Pi cluster with more expensive options such as an AMD EPYC processor. Although a Pi cluster can't match the raw compute of high-end hardware, its low power draw and low build cost make it a sensible choice for many workloads. Jeff noted that when compute demands are modest, a Pi cluster can turn out to be the more economical option.
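The figures quoted later in the transcript (16 PoE-powered Pis at about 100 W idle versus a minimum of 120 W for the EPYC 7742 CPU alone, per ServeTheHome) allow a quick back-of-the-envelope comparison of idle running costs. The electricity price below is my assumption, not a number from the video.

```python
# Back-of-the-envelope idle-cost comparison using the figures from the video.
PRICE_PER_KWH = 0.12       # USD per kWh -- assumed, not from the video
HOURS_PER_YEAR = 24 * 365


def annual_energy_cost(watts):
    """Yearly electricity cost for a constant draw of `watts`."""
    kwh = watts / 1000 * HOURS_PER_YEAR
    return kwh * PRICE_PER_KWH


pi_cluster_watts = 100     # 16 PoE-powered Pis, all in (from the video)
epyc_idle_watts = 120      # EPYC 7742 minimum draw, CPU only (ServeTheHome)

pi_cost = annual_energy_cost(pi_cluster_watts)    # about $105/year
epyc_cost = annual_energy_cost(epyc_idle_watts)   # about $126/year
```

The gap widens once you count the rest of the EPYC system's components, which is why idle-heavy workloads are where the Pi cluster's economics look best.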

Finally, Jeff encourages his viewers to share their own cluster-building experiences, which shows how varied the uses of Raspberry Pis can be. With 580,612 views and 18,455 likes at the time of writing, the video clearly reflects growing interest in Raspberry Pi clusters. Jeff Geerling shows that the technology world offers plenty of possibilities, and that clusters can be a fascinating and educational project for many IT enthusiasts to explore.

Timeline summary

  • 00:00 Introduction to the video about the Raspberry Pi blade server and questions about Pi clusters.
  • 00:21 Emphasizing that not everyone should build a Pi cluster, but some people should.
  • 00:27 Explaining what a Pi cluster isn't, clearing up misconceptions.
  • 01:07 A cluster is a group of similar or different computers coordinated to perform tasks.
  • 01:21 Some software parallelizes well, while other software, like games, does not.
  • 01:40 Discussion of software that runs effectively on a cluster, such as Prometheus and Grafana.
  • 02:09 Examples of practical applications running on Raspberry Pis.
  • 02:52 Reasons for building clusters, including availability and scalability.
  • 03:56 Discussion of the availability and reliability problems of single computers.
  • 04:32 The benefits of clustering for handling server failures.
  • 05:09 The main reasons for clustering rather than running single machines.
  • 05:32 Debate over how a Pi cluster compares with a high-end CPU.
  • 06:58 Personal insights and the educational value of building Pi clusters.
  • 08:07 Enterprise use cases for Raspberry Pi clusters.
  • 08:50 Discussion of ECC memory in the Raspberry Pi.
  • 09:45 Closing thoughts on Raspberry Pi clusters and viewer engagement.

Transcription

After I posted my Raspberry Pi blade server video last week, lots of people asked what you'd do with a Pi cluster. Many asked out of curiosity, while others seemed to shudder at the very idea of a Pi cluster because obviously a cheap PC would perform better, right? Before we go any further, I'd say probably 90% of you watching shouldn't build a Pi cluster. But some of you should. Why? Well, the first thing I have to clear up is what a Pi cluster isn't. Some people think when you put together two computers in a cluster, let's say both of them having 4 CPU cores and 8 gigs of RAM, you end up with the ability to use 8 CPU cores and 16 gigs of RAM. Well, that's not really the case. You still have two separate 4-core CPUs and two separately addressable 8-gig portions of RAM. Storage can sometimes be aggregated in a cluster to a degree, but even there you suffer a performance penalty and the complexity is much higher over just having one server with a lot more hard drives. So that's not what a cluster is. Instead, a cluster is a group of similar computers, or even in some cases wildly different computers, that can be coordinated through some sort of cluster management to perform certain tasks. The key here is that tasks must be split up to work on members of the cluster. Some software will work well in parallel, but there's other software like games that can only address one GPU and one CPU at a time. Throwing Flight Simulator at a giant cluster of computers isn't going to make it run any faster. Software like that simply won't run on any Pi cluster, no matter how big. Luckily, there is a lot of software that does run well in smaller chunks in parallel. For example, right now in my little home cluster, which I'm still building out, I'm running Prometheus and Grafana on this first Raspberry Pi and monitoring my internet connection, indoor air quality, and household power consumption. 
This Pi is also running Pi-hole for custom DNS and to prevent ad tracking on my home network. This next Pi is running Docker and serving up the website Pidramble.com, and the one after that is managing backups for my entire digital life, backing up all my data off-site on Amazon Glacier. I also have another set of Pis that typically runs Kubernetes, but I'm rebuilding that cluster right now. But there's tons of other software that runs great on Pis. Pretty much any application that can be compiled for ARM processors will run on the Pi. And that includes most things you'd run on servers these days, thanks to Apple adopting ARM with the new M1 Macs and Amazon using Graviton instances in their cloud. I'm considering hosting Nextcloud and Bitwarden soon to help reduce my dependence on cloud services and for better password management. A lot of people run things like Home Assistant on Pis, and there are thousands of different Pi-based automation solutions for home and industry. But before we get to specifically why some people build Pi clusters, let's first talk about clusters in general. Why would anyone want to build a cluster of any type of computer? I already mentioned that you don't just get to lump together all the resources. A cluster with 10 AMD CPUs and 10 RTX 3080s can't magically play Crysis at 8K at 500 FPS. Well, there are actually a number of reasons, but the two I'm usually concerned with are uptime and scalability. For software other than games, you can usually design it so it scales up and down by splitting up tasks into one or more application instances. Take a web server, for instance. If you have one web server, you can scale it up until you can't fit more RAM in the computer or a faster CPU. But if you can run multiple copies, you could have one, ten, or a hundred workers running that handle requests, and each worker could take as much or as little resources as it needs.
So you could, in fact, get the performance of 10 AMD CPUs split up across 10 computers, but in aggregate. Not everything scales that easily, but even so, another common reason for clustering is uptime or reliability. Computers die. There are two types of people in the world. People who have had a computer die on them, and people who will have a computer die on them. And not just complete failure. Computers sometimes do weird things, like the disk access gets slow, or it starts erroring out a couple times a day. Or the network goes from a gigabit to a hundred megabits for seemingly no reason. If you have just one computer, you're putting all your eggs in one basket. In the clustering world, we call these servers snowflakes. They're precious to you, unique, and irreplaceable. You might even name them. But the problem is, all computers need to be replaced someday. And life is a lot less stressful if you can lose one, two, or even ten servers while your applications still run happy as can be because you're running them on a cluster. Now I mentioned that I'm running one instance of each of these applications on this cluster right now. I'm planning on splitting up a few of them, though, and probably using Kubernetes on the entire stack so I can have even better redundancy. But having multiple Pis, and having good backups and automation to manage them, means when a micro SD card fails or a Pi blows up, I toss it out and can have a spare running in a few minutes. Okay, so those aren't all the reasons for clustering, but two of the main reasons most people would consider a cluster over one computer. But that doesn't answer the question why someone would run Raspberry Pis in their cluster. A lot of people questioned whether a 64-core ARM cluster built with Raspberry Pis could compete with a single 64-core AMD CPU. And well, that's not a simple question. First I have to ask, what are you comparing? 
If we're talking about price, are we talking about 64-core AMD CPUs that alone cost $6,000? Because that's certainly more expensive than buying 16 Raspberry Pis with all the associated hardware for around $3,000 all in. If we're talking about power efficiency, that's even more tricky. Are we talking about idle power consumption? Assuming the worst case with PoE+-powered Pis, 16 Pis would total about 100 watts of power consumption all in. According to ServeTheHome's testing, the AMD EPYC 7742 uses a minimum of 120 watts, and that's just the CPU. If you're talking about something like crypto mining, 3D rendering, or some other test that's going to try to use as much CPU and GPU power as possible constantly, that's an entirely different game. The Pis' performance per watt is okay, but it's no match for a 64-core AMD EPYC running full blast. Total energy consumption would be higher, 400-plus watts compared to 200 watts for the entire Pi cluster full-tilt, but you'll get a lot more work out of that EPYC chip on a per-watt basis, meaning you could compute more things faster. But there are a lot of applications in the world that don't need full throttle 24/7. And for those applications, unless you need frequent bursty performance, it could be more cost-effective to run on lower-power CPUs like the ones in the Pi. But a lot of people get hung up on performance. It's not the be-all and end-all of computing. I've built at least five versions of my Pi cluster. I've learned a lot. I've learned about Linux networking. I've learned about power over Ethernet. I've learned about the physical layer of the network. I've learned how to compile software. I've learned how to use Ansible for bare-metal configuration and network management. These are things that I may have learned to some degree from other activities or by building virtual machines on one bigger computer, but I wouldn't know them intimately.
And I wouldn't have had as much fun, since building physical computers is so hands-on. So for many people, myself included, I do it mostly for the educational value. Even still, some people say it's more economical to build a cluster of old laptops or PCs you may have laying around. Well, I don't have any laying around, and even if I did, unless you have pretty new PCs, the performance per watt from a Pi 4 is actually pretty competitive with a 5-10-year-old PC, and they take up a lot less space. And besides, the Pis typically run silent, or nearly so, and don't act like a space heater all day like a pile of older Intel laptops. But there's one other class of users that might surprise you, enterprise. Some people need ARM servers to integrate into their continuous integration (CI) or testing system so they can build and test software on ARM processors. It's a lot cheaper to do it on a Pi than on a Mac Mini or an expensive Ampere computer if you don't need the raw performance. And some enterprises need an on-premise ARM cluster to run things like they would on AWS Graviton or to test things out for industrial automation where there are tons of Pis and other ARM processors in use. Finally, some companies integrate Pis into larger clusters as small, low-power ARM nodes to run software that doesn't need bleeding edge performance or needs to be isolated from other servers. Another sentiment I see a lot is that it's too bad the Pi doesn't have ECC RAM. Well, be ready to be shocked, because the Pi technically does have ECC RAM. Check the product brief. The Micron LPDDR4 RAM the Pi uses technically has on-die ECC. Now, when people say ECC, they mean a lot of different things. And I'd say half the people who complain about a lack of it couldn't explain specifically how it would help their application run better. But it is good in a server setting for a lot of different types of software, and the Pi has it. Or does it? Well, not in the sense that expensive high-end servers do.
The on-die ECC can prevent memory access errors in the RAM itself, but it doesn't seem to be integrated with the Pi's system on a chip, so the error correction is minimal compared to what you'd get if you spent tons of money on a beefy server with ECC integrated through the whole system. So anyways, those are my thoughts on what you could do with a cluster of Raspberry Pis. What are some other things you've seen people do with them? And have you built your own cluster of computers before, Raspberry Pi, or anything else? I'd love to see your examples in the comments. Until next time, I'm Jeff Geerling.