After showing off the hardware in my home lab, many of you asked what I actually run in it. Well, it spans a wide range: dashboards, hypervisors, a NAS, DNS, network management, Docker and Kubernetes, GitOps, CI and CD, multiple reverse proxies, monitoring and data visualization, home automation, and much more. This time I decided to include network diagrams and logical diagrams showing where everything lives, which is why this took a bit longer to prepare. If you want to see more of the diagram, I'll put links below in the description. Let's take a look at all of the services I self-host in my home lab. To start, I'll show how my network is organized. Before anyone accuses me of taking free stuff from Ubiquiti: I purchased all of the devices listed here, with the exception of one, which replaced an existing 48-port switch. My network starts with the UDM SE, which takes in the internet connection, acts as my firewall and gateway, and manages all of my VLANs. It then connects to my aggregation switch. In the studio there are PoE switches that not only uplink other devices but also handle the VLANs, which is crucial for my home lab. Speaking of the physical setup, I can't skip the VLANs, which logically group my devices according to their role.

Timeline summary

  • 00:00 Introduction to the home lab setup and hardware.
  • 00:04 Overview of the various components running in the home lab.
  • 00:09 Mention of Docker, Kubernetes, GitOps, and CI/CD.
  • 00:17 Discussion of the network diagrams and logical layouts.
  • 00:42 Detailed description of the network layout and hardware.
  • 00:51 Clarification about purchasing the hardware, including one switch.
  • 01:08 Introduction to the UDM-SE router and VLAN management.
  • 01:42 Overview of the various switches and their connections.
  • 02:44 Explanation of the VLANs, including guest, main, and IoT.
  • 03:00 Presentation of the network diagram with devices grouped.
  • 03:22 Description of the internet connection and firewall setup.
  • 04:42 Insights into organizing VLANs for IoT devices and security.
  • 05:14 Discussion of trusted and untrusted networks for devices.
  • 07:58 Details on trusted servers, services, and the criteria for trusting them.
  • 10:06 Overview of untrusted servers hosting public workloads.
  • 11:53 Introduction to dashboard services such as Heimdall, and to Proxmox.
  • 14:35 Details on TrueNAS Scale and its performance.
  • 16:56 Using Pi-hole for DNS management and the redundancy strategy.
  • 19:03 Managing containers with Docker and Kubernetes.
  • 24:36 Insights into GitOps practices for managing infrastructure.
  • 26:41 Using Traefik for reverse proxying and certificate management.
  • 28:56 Discussion of home automation with Home Assistant.
  • 31:20 Using Scrypted to integrate cameras into the smart home.
  • 32:30 Summary of simplifying services and their future.
  • 39:52 Reflections on the self-hosting journey and future aspirations.

Transcription

After giving you a tour of the hardware in my home lab, many of you asked, what are you running in your home lab? Well, it ranges from dashboards, to hypervisors, to a NAS, to DNS, to network management, to Docker and Kubernetes, to GitOps, to CI and CD, to multiple reverse proxies, monitoring and data visualization, home automation, and much, much more. And this time, I've decided to include network diagrams, as well as logical diagrams for where everything lives, to help you understand how this all comes together. That's why I was a little bit late getting it out. And if you want to see more about this diagram, I'll have links below in the description where you can find it. So let's take a look at everything that I'm self-hosting in my home lab. First, let me show you how my network is laid out. Now, before you go accusing me of just taking free stuff from Ubiquiti, I purchased all of these things you see on here with the exception of one thing, and it was this one switch, which replaced an existing 48 port switch. With that out of the way: this is my UDM SE, and the internet comes into here. This is my firewall and my gateway, and it manages all of my VLANs. Then connected to this is my Switch Aggregation Pro, which you saw in the hardware tour. And then from there, I have the 48 port switch uplinked to it, as well as this 24 port PoE switch that's here in the studio uplinked to it. This one's uplinked at 20 gig; really, it's a LAG of two 10 gig links. And this one's uplinked at 10 gig. And then here in my studio, I have a 16 port PoE switch, which is connected back there, and then a Flex Mini switch that's also connected to that TV back there to provide some VLANs. And then I have this USW Flex XG, which has a couple of 10 gig ports, and it's powered by PoE. That's on my workbench back there; you can kind of see it. Just for testing, I was going to use that for 10 gig devices in here, but I ended up putting this switch right here. And then my 48 port switch that's in my rack has a couple of access points connected to it, some cameras, some additional cameras, and some other mini switches around the home. One of those mini switches you can see on the wall down there in my server room, and another access point here, which is connected to my doorbell. And if you see this access point here, it's actually a wireless bridge to my garage, which has another access point; from there a switch connects, and then some cameras and some other devices in the garage. Now, I'm not showing all of my devices here in the UniFi devices view. If I showed all of my devices, it would be kind of hectic to look at, and as you can see, that's a lot of devices. So I figured I'd just show you the network backbone, or just the network devices that are connected. But this is the physical layout and not the logical layout. If you want to see my VLANs, you can see them right here: Default, Guest, Cameras, Main, IoT, Servers Trusted and Servers Untrusted, and then Travel, which is one I was testing with. These are the VLANs that I use to logically group all of these devices together based on their role or their needs. And here's a network diagram of all of my devices and which VLANs they fall into, so they're logically grouped according to the VLAN and according to their role. This is why this video is a little bit late. But as you can see, I have the same VLANs that we had earlier, and then the devices that fall into each group. Now, let's start with how things come into my network.
And that's the internet. So I'm connected to the internet with my modem here, and then from there I firewall with my UDM-SE right here. So this is my UDM-SE, my router, per se, and I have VLANs and firewalls in between each VLAN. And if we look at one of these VLANs, this default network right here — this default network is no VLAN at all, and this is where I keep a lot of my network equipment, as well as my Proxmox servers. Now, you might be asking, why do you keep your Proxmox servers on a network that passes all of the tagged and untagged packets? It's because they need to listen for those tags, or those VLANs, so that they can use them for their virtual machines. So I have four Proxmox servers, which we'll talk about here in a little bit, and they sit on this default network, and then I have virtual machines that attach to different VLANs. So that's kind of boring. But if we look here, I also have a camera VLAN, and in my camera VLAN I have wired and wireless security cameras on their own VLAN. Now, you might be asking, well, why do you want to put them on their own VLAN? Really, it's to segment traffic, and it's also to limit internet access for these devices. That's something that I chose to do. So these are firewalled from the rest of my networks, no one can communicate with them, and they're on a network that doesn't allow any outgoing internet. So it's a pretty secure VLAN. Next, you can see my IoT VLAN, and this VLAN is really for devices that I don't have control over. I can't really remote into them, and I may or may not be able to patch the firmware, give them software updates, or even see if they're up to date. And so I put things that other vendors control into this VLAN, so that if something ever happened, they can only communicate with these devices. So my printer — it's on the IoT VLAN. A Kubernetes node — we'll talk about that here in a little bit, but this Kubernetes node is here so that it can communicate with all of these devices, and it does live on that VLAN. Even though I trust this server and it meets some of the criteria for trusted, I still decided to put it here rather than create a lot of firewall rules. It was just easier. Then I have lots of IoT devices, and that's a lot of the devices you probably see in here. That includes TVs, smart lights, other lights, switches, things like that. So they're all in this IoT VLAN. My wife's work laptop — yeah, her laptop is in there. That's because it's managed by another company. I don't know what that company is doing, and I don't have a way to patch it myself, get into it, and take care of it if something goes wrong. So I'm considering her laptop just an IoT device. Arguably, I could put it on the guest network, but that network's not up all the time. And then testing devices: anytime I'm testing a new device on my workbench back there, I'll be sure to plug it into the IoT network so that it only really has access to the internet and not my main VLAN. Next is my main network, and this is really where my trusted PCs and client devices kind of hang out. So I have PCs, I have Macs, and I've also decided to put phones and tablets, Apple TVs, and HomePods here. Now, that was a choice I had to make, and a choice that you might have to make sometime too. Remember when I mentioned back here that I didn't want to create a whole bunch of firewall rules so that devices can communicate with each other?
Well, originally I had all of my phones, tablets, and HomePods over on the IoT side, which made it easy to communicate with different devices on the IoT network, but not so easy to communicate with my PCs and everything else that's on my main network. I decided that I do trust Apple devices enough to put them on my main network. My guest network — this really isn't used, as I was talking about earlier. It's there, but I don't advertise this SSID unless people come over and visit and bring devices, which is pretty much never. I'm being honest, most people don't ask, can I use your Wi-Fi, when they come over. And that also assumes that people come to my house. But anyways, I have this here; it's here if it's needed, and this is where guest devices will connect if they come and visit. And one thing that I did configure on this network, really for fun but also for security, was to isolate these devices — you can see that in my firewall rule right here. So these devices can only get out to the internet, and they can't communicate with other devices, because they're isolated. Servers Untrusted — it might be easier to talk about Servers Trusted first, so let's hop over here. Servers Trusted, as I kind of hinted at earlier, are servers or devices that I trust. They're devices that I can remote into, I can patch, I can replace, I can wipe if I need to. And on top of that, they're servers, so they're performing some kind of service on the network. You can see I have my trusted Kubernetes cluster, and this is providing services that I trust, internally only. I have my DNS servers — yes, I do have three DNS servers, we'll talk about that here in a little bit — but they provide DNS to my network. My NAS, which supplies storage services. My PiKVM — I decided to put this on the trusted network as well, because I can patch it, I can remote into it, it's running Arch Linux, and it fits all of the criteria that I set up to decide whether something is trusted or not. A Windows VM that's doing one or two tasks, that's there too. And then IPMI and UPS. Now, this is kind of debatable, but I decided to put my IPMI devices on my trusted network rather than on my IoT network. And again, this is a decision that you're going to have to make. The reason I put them here is because I didn't really feel comfortable putting them on the same network as all of these other IoT devices, like switches and stuff like that. I felt like if one of those devices had a breach and they were able to get to this, that was probably more likely than my IPMI — which isn't even public facing, and which they can't even remote into anyways — having some kind of security breach and then breaking out and getting into my trusted servers. Same with my UPSes: these aren't public facing at all, so I decided to put them on Servers Trusted rather than on the IoT network, even though technically they are IoT devices. And again, these are trade-offs you're going to have to make when you segment your network, if you haven't already. And lastly, you see this MetalLB IP. This is an IP address that is a load balancer — that is how machines or clients get into this trusted network. And again, this is only internal, so I have an IP address that I can communicate with to get to the websites and stuff that's running in my trusted network. And now for my untrusted. Since you understand what my trusted is, maybe untrusted will be a little more clear — or a little more confusing. So my untrusted network isn't really devices that I don't trust.
It's that I don't trust the network itself, because I host things publicly here. So, you know, my website, my blog, my wiki, some of my bots, and some of my webhooks that are all public facing — they all live in here. And so security is pretty tight here in my Servers Untrusted VLAN. There's not a lot that it can do other than communicate with other devices on here. And so this is where public workloads run, and I have MetalLB running here too, to create a load balancer and an IP address that people from the public can get to. So if they come in from the public, they're actually first hitting Cloudflare, which I use as a reverse proxy. That will filter out any, you know, bots or anything like that, or any bad actors, for the most part. So it will come in through Cloudflare, it'll come in over the internet, it'll come into my UDM SE, which will forward the traffic to my untrusted network, to this MetalLB IP, and then that will forward the traffic down to my servers depending on the workload that it needs to get to. So again, these are all servers that technically still fit the criteria of trusted — you know, I can remote into them, I can patch them, I can wipe them, I can do anything I want to them — but they're living on a network that allows public incoming traffic. And so I decided to segment that into its own VLAN. I probably should call it the public VLAN rather than Servers Untrusted, but I was already on this whole trusted and untrusted servers naming scheme, so I went with that. So this is logically how my network is laid out. But now let's dive into some of these services that are running within these different VLANs. So first, let's start with my dashboard — it is still Heimdall. Heimdall is a great product. I think I said last year I might switch off of it; I think next year I might too, just because of some complications that it has with a storage mount and a volume and reading and writing data. That has gotten corrupted for me once or twice, and I've had to rebuild that volume or rebuild my dashboard. And I want to switch to something that's more config driven, so I can feed it a config and not worry about mounting a volume to it. But anyways, long story short, it's still great, still looks good, it still works, and it does exactly what I need it to. But I might be checking out some more options in the future. Next, let's start on hypervisors. Hypervisors are a big one for me — I'm still using Proxmox, and I love Proxmox. I can't imagine using anything else at this point. And as you can see, I have four nodes here. So I do have four nodes, and they are in a cluster. And you're probably wondering about quorum, since I have an even number of devices, but I've made some adjustments so that I can have quorum even with four. Anyways, not important. I do run this in a cluster, but I don't have high availability virtual machines. Instead of having high availability virtual machines, I build high availability into my services when I can. And so I have high availability services rather than virtual machines — just doing it one step lower. But anyways, the Stornado — that's the big Stornado device you saw in the rack. This does have some virtual machines. The biggest virtual machine it used to have was Andromeda, which was my virtualized TrueNAS. But I've moved from the virtualized TrueNAS to a physical TrueNAS with the HL15. We'll talk about that here in a little bit. So the things that the Stornado needs to do are kind of dwindling a little bit. We'll talk about its roles here in a little bit.
But I have a few virtual machines on there for testing. And then you can see I have — let me see — one, two, three. So this is Shing 1, 2, and 3. I've migrated all of my services to these Intel NUCs, and they're running here. And so I have 1, 2, 3, 4, 5, 6 virtual machines. You can see here they're all running fine. You can see the CPU usage is pretty good — 8 CPUs, or 8 logical cores, on each of these. IO delay is pretty low. RAM: 64 gigs. It's used, and it has a lot of KSM sharing, so a lot of this RAM is being shared — 15 gigs' worth. But the swap is still pretty low, 1 megabyte of swap. So only 1 megabyte is being used for swap, which means it's not really out of RAM. But these Intel NUCs are a beast. They're running 5, 6 virtual machines each, and they are rock solid. And they're all attached to my NAS for backups, as you can see here. So since we're talking about my NAS, let's go into my NAS, which is TrueNAS Scale. So I'm still on TrueNAS. I've been using TrueNAS since it was called FreeNAS, and then it was TrueNAS, and then TrueNAS Core, I think, was in there, and then I migrated to TrueNAS Scale. I ended up going with TrueNAS Scale because it's a little bit easier for me to manage a Linux box — this is Debian Linux — than it is a FreeBSD box. And so this NAS is running on the HL15, and it now is bare metal, which is kind of nice. I've been virtualizing my NAS for about 3 or 4 years, and now I'm back to bare metal. It's kind of nice having it bare metal, because now when I reboot my hypervisor, I don't have to worry about whether my NAS is down. But anyways, this is running on the HL15. I did add some additional RAM, so I have 128 gigs of RAM, and I did extend my ARC so it uses more than 50% of my RAM. Thanks, Tom Lawrence — he has a video on that; you should check it out if you need to use more than 50% of your RAM. It does have, what, 6 cores. They're all low clock speed, but this is perfect for a NAS because it uses very little power, all things considered, for a NAS that has about 10 drives in it. But my NAS provides a lot of what I call storage services: it provides SMB, so Windows shares, NFS shares, and also iSCSI targets, because I need some iSCSI drives for some of my machines. And one additional thing that it's also doing is running MinIO on top of it. That's the only app that I use, and that gives me object storage so that I can do my backups, and I can use object storage within my network instead of using S3 or anything like that. The nice thing about running it on here is that it's on top of ZFS, so I can do snapshots, and it's not inside of my virtual machines and it's not inside of my cluster. So all the things I need to back up that require S3 or object storage within my cluster back up to this, which is outside of my cluster. It's pretty nice if you want to self-host object storage. And as you can see here, I have 1, 2, 3, 4, 5 — so 10 drives. It's mirrored vdevs, and I have five pairs of mirrored vdevs, all 14 terabytes apiece. I've done a ton of optimization stuff, and I should probably create a TrueNAS video at some point talking about all of those optimizations in one place. If you're interested in that, let me know in the comments below. But this is my NAS — rock solid. Couldn't ask for a better NAS.
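As a rough sketch of how a workload might push a backup to that kind of self-hosted MinIO, here's what an S3-compatible upload looks like with boto3 — the endpoint, bucket, file names, and credentials are placeholders for illustration, not his real values.

```python
# Minimal sketch: push a backup archive to a self-hosted MinIO (S3-compatible) endpoint.
# The endpoint, bucket, keys, and file names below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.lan:9000",  # self-hosted MinIO, not AWS
    aws_access_key_id="BACKUP_ACCESS_KEY",
    aws_secret_access_key="BACKUP_SECRET_KEY",
)

bucket = "cluster-backups"
s3.upload_file("etcd-snapshot.db", bucket, "etcd/etcd-snapshot.db")

# Confirm the object landed where we expect.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Because the bucket lives on the NAS, anything in it also gets picked up by ZFS snapshots outside the cluster.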
So since we're talking about some core services on my network, another one is DNS, and I'm still using Pi-hole. Yes, I do have three instances of Pi-hole. I have two instances of Pi-hole that are on virtual machines within those Intel NUCs, and then I have one more Pi-hole instance that's running on that Pi Zero that you saw on the wall in my server room. The reason I do that is so that if my Intel NUCs are down, the Pi Zero that's running Pi-hole on the wall will still be serving DNS for the rest of my network. This means I can take my cluster all the way down without having to worry about DNS and my wife asking if the internet's down — hey, is the internet down? And you're probably wondering, whoa, how do you do that with three DNS servers? Primary, secondary, and tertiary, something like that? The way that I do it is I use keepalived, and I create a load balancer between the second and third DNS servers with its own IP. So that means I technically have three DNS servers, but I only hand out the IPs of the primary and the secondary, and the secondary is load balanced between the two. Anyway, it's kind of complicated — I have a video on it if you want to see — but Pi-hole is what I still use. Is it the best thing out there? No, obviously not, but it is great for what it does. And then I use Gravity Sync on top of that to synchronize all of my lists and all of my DNS entries across all three devices. And I do use it pretty heavily for local DNS as well.
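To make the redundancy idea concrete, here's a minimal sketch (not his actual setup) that asks each resolver for an internal name using dnspython — the IP addresses and hostname are made-up placeholders.

```python
# Minimal sketch: verify each Pi-hole resolver answers a known internal name.
# Requires dnspython (pip install dnspython); the IPs and hostname are hypothetical.
import dns.resolver

RESOLVERS = {
    "primary": "192.168.10.2",
    "secondary (keepalived VIP)": "192.168.10.3",
    "tertiary (Pi Zero)": "192.168.10.4",
}

def check(name: str, server: str, qname: str = "nas.home.lan") -> None:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    resolver.lifetime = 2.0  # fail fast if the resolver is down
    try:
        answer = resolver.resolve(qname, "A")
        print(f"{name}: {qname} -> {[r.address for r in answer]}")
    except Exception as exc:
        print(f"{name}: FAILED ({exc})")

for label, ip in RESOLVERS.items():
    check(label, ip)
```

A check like this is handy after taking the cluster down, since only the Pi Zero should keep answering.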
For network management, I'm obviously using the UniFi Network controller, because I have a lot of Ubiquiti devices. And I choose this because I have the hardware; it's a single pane of glass to manage all of my network, all of my firmware, and even my cameras. On top of that, their mobile apps are nice, and as you can see, the UI is pretty nice here too. And speaking of cameras — I'm going to blur some of these out — I use UniFi Protect for my home security. It keeps all of my security footage on this device; I have about 10 cameras you can see down here, and it's a 14 terabyte drive, and that will get me 64 days of continuous recording. Not to mention this is a super low power device that is also acting as my firewall. Next up is containerization, and this includes two parts: there are Docker-only hosts, and then there's Kubernetes. As far as Docker-only hosts go, I run Portainer to manage them. One migration I did finish up this year was moving all of my Docker-only hosts to Kubernetes. So if we look in Portainer, which I use to manage my Docker-only hosts, we can see that I have one stack, and that one stack is Watchtower, and that's just there for testing. And that's kind of how I use Docker right now: I use Docker to spin something up, to test it out, see if I like it, run it for a couple of days, and if I like what it is, I'll move it over to my Kubernetes cluster. And speaking of my Kubernetes cluster, I have three Kubernetes clusters, and I manage them all with Rancher. I have my local cluster, which is K3s, and it's only running Rancher. Then I have my cluster 01, which is kind of my untrusted cluster. It's a little bit of both right now because I'm in the middle of a migration, but it runs all of my public workloads along with a few internal workloads too. But that's where cluster 02 comes in, and this is meant to run all of my private or internal services, and nothing in it will be public facing. And so with Rancher, I manage all three of these clusters. And you might be wondering, well, why do you need Rancher? Technically, I don't need Rancher. I'm very familiar and very comfortable with the CLI, or using any kind of tools like Lens or anything like that. But I found that having Rancher installed in my cluster gives me web management, gives me a lot of discoverability, and gives me an easy way to debug something when something goes wrong. That being said, I don't deploy my workloads with Rancher, and I don't manage my workloads with Rancher. I do that with GitOps, which we'll talk about here in a little bit. But I run a ton of workloads in Kubernetes. As you can see, I have 85 pods running in the user namespaces, so those are all pods that I put there. There's a lot of stuff going on in here that we'll talk about here in a little bit, but I do use Kubernetes pretty heavily. If you want to dive into the nodes that are running in this cluster, I have 1, 2, 3, 4, 5 nodes that are workers, so these are running workloads, and then I have 1, 2, 3 that are running the control plane and etcd. So what does that mean? Well, five of them are running all of the workloads that I put on there, and then three of them are running Kubernetes itself: running the control plane to communicate with the entire cluster, and running etcd, which is the internal database where it stores all of the Kubernetes information and metadata about the cluster. And you can see I have some labels on these nodes. So I have a label for network, and these three nodes here are running in the untrusted VLAN. And then these last two nodes: one is running in Servers Trusted — that's the one that was on the diagram — and the other one is running in IoT. I mentioned earlier that it's running in the IoT VLAN because it's easier to communicate with those IoT devices. Well, that's where Home Assistant and a few other containers that I have running live — on that node, which lives in the IoT VLAN. And so this label is a way that I can say, hey, only put Home Assistant on this node, or basically put any workload that has this label on this node. And the same goes for this node right here: I have a few workloads that are internal, that are trusted, but slowly they're being migrated to my trusted cluster. So for completeness, let's look at my cluster 02 — this is my trusted cluster. I have three nodes there, and you can see they have all roles. So I decided to do that: all three nodes are running all roles, plus they're workers too, taking on workloads. I'm going to separate that out here in a little bit, as soon as I get some more resources. And the same goes for my local cluster too: three nodes, all roles, plus running workloads. And this is a majority of what's running on those Intel NUCs.
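Here's a minimal sketch of checking that kind of node labeling with the official Kubernetes Python client — the label key and values are guesses for illustration, not necessarily the exact labels used here.

```python
# Minimal sketch: list cluster nodes by a "network" label, the way labels are used
# above to pin Home Assistant to the node sitting in the IoT VLAN.
# Requires the official client (pip install kubernetes); label key/values are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for vlan in ("untrusted", "trusted", "iot"):
    nodes = v1.list_node(label_selector=f"network={vlan}")
    names = [n.metadata.name for n in nodes.items]
    print(f"network={vlan}: {names or 'no nodes'}")
```

The actual pinning is then just a nodeSelector or affinity rule on the workload that matches the same label.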
Now, GitOps — this is a big topic. The idea behind GitOps is that you treat your infrastructure as code. So what does that mean? It means every piece of service and infrastructure — load balancers, and possibly even nodes — you define in code, you commit them to a Git repo, and you push them up. And then you let CI take over and deploy those resources to your cluster. So your Git repo becomes the source of truth, and not the cluster itself, or the UI that you're looking at. And I follow this practice pretty exclusively, so there isn't a lot to show you other than some YAML. But for example, you can see this is my deployment for Uptime Kuma. I have a lot of properties set on here, but at the end of the day, when I commit and deploy this, this is going to tell my cluster what the desired state is, and then the cluster will say, okay, well, I'll make it that way. So for example, if I wanted to give it a little more CPU, I could change this value right here, commit that, and deploy it. And then within a minute or so my cluster would see this come in, it would apply it, and then it would redeploy this container with the new request value. And so what does that for me? Well, it's Flux. I have Flux installed in my cluster, and it helps orchestrate all of this stuff within Kubernetes. Worth mentioning too that I'm using Renovate, or the Renovate bot. The Renovate bot will look at my repo and see if there are updates for some of the containers that I'm using. If there are, it'll open a pull request or a merge request, and then all I have to do is approve or merge in that pull request, and then that gets deployed. So we can see that here: if we look at the Uptime Kuma image, you can see right now I'm using 1.23.11-alpine, but you can see there was a pull request yesterday to update this to .11. So previously it was .10; I got a pull request from a bot that I created that runs Renovate bot, it wanted to merge in .11, I accepted that merge, and then that got deployed. So these two work really well together to help keep your cluster up to date and keep it functioning. And if you're curious about GitOps with Flux or Renovate bot, I have videos on both of those. Next is reverse proxy, and I'm still using Traefik. I have three instances of Traefik running now: two inside of one cluster, and one inside another cluster. And so technically, this diagram isn't right — I actually have two of these, two of these MetalLB IPs right here. But I guess this diagram is kind of my desired state, because I'm moving all of my trusted workloads, or my internal workloads, over here. But I'm using Traefik to do that. And so it's a reverse proxy; I can use it as an ingress controller to create ingress objects within Kubernetes. I know that sounds super complicated, but at the end of the day, it's a reverse proxy: it will take requests, look at those requests, and route them accordingly. So when someone requests access to my documentation site, technotim.live, first they hit my external reverse proxy, which is Cloudflare, it's worth mentioning. They'll hit that external reverse proxy, that will come in, that will come into my router, my UDM SE, into my firewall, I do some inspection there, and then eventually that will get routed to my Servers Untrusted. Within my untrusted network, it'll hit this MetalLB IP, then there's Traefik running as the reverse proxy, which will look at that request and route it accordingly to the service or the pod that's running within Kubernetes. Actually, I made a mistake, because these two are supposed to be down here — whatever, I'm not going to mess with the tool right now. But I actually have two MetalLB IPs in my untrusted, not in my trusted. But yes, I'm still using Traefik as my reverse proxy. And I'm also using cert-manager along with that. cert-manager is a Kubernetes service that helps me provision and store my certificates as secrets. So rather than having Traefik go out and get those certificates for me, cert-manager will do that, and then I can use them as certificates. That way I can scale both Traefik and cert-manager and kind of separate those concerns. So I don't use Traefik anymore to get my certs — that's the moral of the story — I use cert-manager to do that. But all of these work in cooperation to help me route traffic securely over TLS.
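Since all of this only works if the certificates stay fresh, here's a small standard-library sketch of the kind of expiry check you could point at a site like technotim.live — this is just an illustration, not something shown in the video.

```python
# Minimal sketch: check when the TLS certificate served for a site expires,
# a quick sanity check that cert-manager is renewing certificates on time.
import socket
import ssl
import time

def cert_days_remaining(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter looks like 'Jun  1 12:00:00 2025 GMT'
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400

print(f"technotim.live cert expires in {cert_days_remaining('technotim.live'):.1f} days")
```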
Monitoring and logging — I've had a lot of changes to this. Yes, I still use Uptime Kuma to do my internal monitoring, and it works great. As you can see, most of the time — ah, I don't have any statuses being reported; this should update here in a second. I do wish that Uptime Kuma didn't use SQLite, because — I don't know, not to dunk on SQLite — it gets corrupted a lot, probably my fault. I'd rather have it use an external database, but that's neither here nor there. Anyways, I still use Uptime Kuma, it's great, and I use it for my internal stuff. For my external stuff, I use Uptime Robot. And you're probably like, well, why do you use one internal and one external? Well, my internal Uptime Kuma monitors a lot of things that aren't even exposed publicly; the only way to do that is within my cluster, within my infrastructure. And then Uptime Robot is kind of watching the watcher — it watches everything that I have publicly hosted to make sure that it's still up. And as far as Loki, Grafana, and Prometheus go, I'm sad to say I ended up pulling these out of my cluster. When I was running my 1U servers, I had plenty of headroom, plenty of CPU, plenty of disk space to support these types of services. But after I moved to my Intel NUCs, I realized that they were a little too busy to be running these services. Now, don't worry, I know it's important to monitor, to log, and to visualize your data. I'm going to be bringing those back when I move my 1U servers to a colo — be sure you're subscribed to see that video. When I move those to a colo, I'm going to bring these services back and then leave them only for my public facing stuff. I still might bring them back internally for my other things, but I need to find a lighter weight way of doing that. Home automation: yes, I'm using Home Assistant, and it's actually running in my Kubernetes cluster. I think last year I was using Homebridge to get some of my devices into iOS, into HomeKit, but I decided to get rid of that and just use Home Assistant to do the same thing. As you can see, I have lots of devices in here — I've spent lots of time getting them in here — and I have some automations and other things as well. It runs great, as you would assume Home Assistant would run. And this is the reason why I have that one k8s node running here in the IoT VLAN: so Home Assistant can communicate with all of these devices, rather than putting it somewhere over here and creating lots of firewall rules to do that, and DNS, and everything else that comes along with that complexity. I decided that, hey, all of my k8s nodes are hardened anyways, and it was a lot easier to put it here to live with these devices, keep this traffic isolated, and really get the benefits out of VLANs. Now, there's so much you can do with Home Assistant — I've only scratched the surface, and you could probably make a career out of Home Assistant if you wanted to. But right now, I only use Home Assistant kind of as an interface to get a lot of my devices into HomeKit, so that I can access them there from my phone or from a tablet or from my Apple TVs.
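Since Home Assistant exposes a REST API, a minimal sketch of reading a state and calling a service might look like this — the base URL, long-lived access token, and entity ID are placeholders, not his real instance.

```python
# Minimal sketch: talk to Home Assistant's REST API with a long-lived access token.
# The base URL, token, and entity_id are hypothetical placeholders.
import requests

BASE = "http://homeassistant.home.lan:8123"
HEADERS = {"Authorization": "Bearer LONG_LIVED_TOKEN", "Content-Type": "application/json"}

# Read the current state of an entity.
state = requests.get(f"{BASE}/api/states/light.studio_lights", headers=HEADERS, timeout=5)
print(state.json().get("state"))

# Call a service, e.g. turn the same light on.
requests.post(
    f"{BASE}/api/services/light/turn_on",
    headers=HEADERS,
    json={"entity_id": "light.studio_lights"},
    timeout=5,
).raise_for_status()
```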
And so with that, I'm actually running Scrypted as well. Scrypted is kind of like an interface that allows you to import your cameras into the home hub of your choice. It works with a wide range of cameras, and it will allow you to import them into, say, Google Home or into HomeKit. Now, as I mentioned earlier, I run UniFi Protect, but I still want to see those cameras within HomeKit and maybe trigger some automations with my HomeKit stuff based on that. And since UniFi Protect isn't compatible with HomeKit, this is where Scrypted comes in. All Scrypted is, is one big interface — it's all local — and all it does is expose my cameras in a HomeKit friendly way so I can get them into HomeKit. And you can do the same with Alexa or Google Home. If you want to see a video on that, I have a video on that too. But it's a really cool open source project. Everything that I've tried and used works perfectly in here — it's almost as if my cameras are HomeKit certified. So another thing in my home automation is this Broadlink control. This is something that I built a long time ago so I can control my Broadlink devices. I can control them through a UI, but I also created an API that I can control them with, so I can remote control those lights back there during my Twitch stream. As you can see, if I click on flash — wait a second — this should flash those lights back there. And then if I click on back to blue, it should stop and they should just go to blue. So I wrote this a long time ago. I should probably just use Home Assistant to do that now, and then write code against Home Assistant instead of writing custom code against my custom solution. I might get more mileage out of writing it against Home Assistant now, but that also includes rewriting some of my bots. So hey, maybe next year. But this is free and open source; it's out on GitHub if you're interested. Next is data sync, and I think I'm going to get rid of this category, because I'm not doing any data synchronization anymore, at least not with Syncthing. I was using Syncthing to synchronize data from server to server so that I could back it up, but now that I've simplified my NAS and simplified a lot of my network and my virtual machines, I don't need to do that anymore. So I got rid of Syncthing. It's great if you need it, but I outgrew it, and now I don't use it. The only syncing I think I'll do in the future is synchronizing my ZFS snapshots inside of TrueNAS to some external source at some point. Next up is my links page, and I'm still using my own links page. This is free and open source — it's on GitHub if you want to use it — but it's a really easy way to create a, I think, good-looking homepage that is a collection of all your links. So you can send people to this one page, and they don't have to go anywhere else to get to all your contacts or any of the links you want them to get to. I'm sure there are better solutions out there, but I built this myself, so I'm voting for myself in this category. There are a lot of other link page type solutions out there, but I think mine turned out pretty good, and I've had a lot of people contribute to it. So I'm still going to use it. Link shortener: I use a link shortener for a lot of my links so that I have control over where those links go. Not only do they look nice, but I can change them if things change. For example, if my GitHub page changed, I wouldn't have to go through all of my videos and recreate all of those descriptions and edit them with a new link. I can control it here, and I can say, hey, my GitHub page is now this. All it does really is just redirect someone to the link that you want: click on this GitHub redirect link, and it sends you to my GitHub page. So you can see these get used quite often, and it also means that I have to have this highly available, which I do — I run, I don't know, three or four of these pods. The back end is Postgres, which is super nice. It's a database on the back end that I manage that is also highly available, and all of that runs in my Kubernetes cluster. But it's a great solution if you need a link shortener — Shlink is absolutely the way to go. It's modern, feels nice, uses a modern data store like Postgres, and you get a little bit of metrics and some reporting and stuff like that. But a really, really nice solution.
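For reference, creating a short link through Shlink's REST API looks roughly like this — the instance URL, API key, and slug are placeholders, and it's worth checking the Shlink docs for the exact API version and fields your install exposes.

```python
# Minimal sketch: create a short link through Shlink's REST API.
# The instance URL, API key, and slug are hypothetical; field names follow the
# Shlink REST API, but double-check the version your installation exposes.
import requests

SHLINK = "https://l.example.com"
HEADERS = {"X-Api-Key": "YOUR_API_KEY", "Content-Type": "application/json"}

payload = {
    "longUrl": "https://github.com/some-user/some-repo",  # where the redirect should land
    "customSlug": "github",                               # so the short link reads nicely
}

resp = requests.post(f"{SHLINK}/rest/v3/short-urls", headers=HEADERS, json=payload, timeout=5)
resp.raise_for_status()
print(resp.json().get("shortUrl"))
```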
As far as home entertainment goes, I use Plex, and I've been using it for years. It's great — I've had a Plex Pass for, I don't know, eight or nine years, and it gives me everything that I need and more. One of the big reasons why I choose Plex is for recorded TV. I record a lot of over-the-air stuff that I save to my NAS so I can watch it later, skip commercials and all of that, and I don't have to pay for services like Hulu or Peacock or anything like that. I just record shows when they're live, skip the commercials when I watch them, and it works great. And they have a great EPG too, that's always up to date — their electronic programming guide is hands down the best one that's out there. I've used a lot of them, all the way back to Windows XP Media Center Edition, so I've used a system like this to record my TV over a long period of time. Anyways, you know what Plex is. I've been using it, and it works great. And Plex is living on a Windows virtual machine, believe it or not. So I have a Windows virtual machine in Proxmox that has a GPU passed through to it, and it was doing some tasks already. I had Plex on a Docker-only host, and I thought, well, why am I doing that? Let me just install Plex on here. And so Plex is actually running on a Windows machine, and it's running pretty well. Soon, I might attempt to move it to Kubernetes, but I don't know if I'm ready for that pain. Minimally, I am going to move it to a Linux VM next year, or maybe a little sooner. For power management, I'm still using NUT server, or Network UPS Tools. I monitor three UPSes — you can see there's something going on with the input of this one. It's nice to know that I have three UPSes being monitored. This is the one that's in the rack for my servers. And then you can see I have my second one — this is the one that's on the wall in my server room, and this is reporting on that. And then you can see the last one, which is the one that's in the rack for networking only, and you can see information on that. So I use NUT server not only to visualize this, but also to gracefully shut stuff down. Although I did turn some of that off, because it kind of scares me sometimes that it could just automatically shut stuff down, so I need to tune my rules a little bit more. But if you want to automatically shut stuff down and start stuff up when your UPS gets low, you can definitely do that. And then along with NUT server, I also use the vendor supplied UIs to manage my UPSes, so I have one for Eaton and one for Tripp Lite, and those just give me a little bit more configuration for notifications and things like that.
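For the curious, NUT's upsd speaks a simple text protocol on TCP 3493, so a quick status poll can be done with nothing but the standard library — the host and UPS name here are placeholders, and this is just a sketch rather than the setup shown in the video.

```python
# Minimal sketch: query a NUT (Network UPS Tools) server directly over its TCP protocol.
# upsd listens on 3493 by default; the host and UPS name here are hypothetical.
import socket

HOST, PORT, UPS = "nut.home.lan", 3493, "rack-ups"

def nut_query(command: str) -> str:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(f"{command}\n".encode())
        return sock.recv(4096).decode().strip()

# Responses look like: VAR rack-ups battery.charge "100"
print(nut_query(f"GET VAR {UPS} battery.charge"))
print(nut_query(f"GET VAR {UPS} ups.status"))
```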
Content management systems: I still use three content management systems. I use WordPress for one of my old blogs, I use Ghost for my wife's portfolio, and I use Wiki.js for a community based wiki for my Discord community. Now, I'd love to get down to one CMS, but they all do things slightly differently, and they all have different use cases. So I guess I'm stuck with three again this year. I think I could move my WordPress blog to Ghost, but that's a lot of work for very little value. But yes, still three content management systems; they all do things a little bit differently and approach content management a little bit differently, but all are fine solutions. Static site generators: I'm using two of them, both Hugo and Jekyll. If you're not familiar with static site generators, you can basically write a website in Markdown and it will create the markup, or the website, for you. So basically, you only need to fill in the content, and it will generate a good looking website for you. For instance, my documentation site is based on Jekyll. I write all of this in Markdown, I run a process, and it generates the markup for me; then I can deploy it, and it's hosted in my cluster. And as you can see, it creates a pretty good looking website for only writing Markdown. The same goes for Hugo — as you can see, this is a Hugo site, and you're going to get the same thing: you write some Markdown, it generates markup, and then you can deploy a website with it. Now, I probably could get down to one static site generator, but I like using two to keep myself familiar with both. For CI and CD, or continuous integration and continuous delivery, I'm actually using two solutions. I'm using GitHub Actions runners that are hosted within my network, in my cluster, and I'm also using GitLab runners, to build and compile different pieces of code and deploy things to my cluster. Why two? Well, because I use both GitHub and GitLab. And hosting the runners myself ensures that, one, I'm not running my code on shared runners — not that I'm that worried about anything leaking, but I don't have to worry about that — and two, I get priority and unlimited time and resources for those CI jobs to run. If I run them internally, I don't have to worry about their limits. So even though the code is hosted externally, I run my runners internally. And as you can see, with this commit right here, it ran a build, it built a container, and it deployed to my Kubernetes cluster. And the same goes for my GitHub Actions on GitHub. But it's pretty cool to self-host these runners and build your code and deploy your code, or your infrastructure, to your own internal infrastructure. It's pretty awesome. And then there's everything else in my environment. I'm running custom code for bots on Twitch, Twitter, and Discord. I'm running Longhorn for storage within my Kubernetes cluster. I'm running netboot.xyz, which really doesn't fit into a category, but I use it to network boot my devices so I can install operating systems on them. And the list goes on and on and on. As you can see, I'm pretty passionate about self-hosting. Well, I learned a lot this year about self-hosting services, consolidating and securing services, and I hope you learned something too. And remember, if you found anything in this video helpful, don't forget to like and subscribe. Thanks for watching.