
In his latest video, NetworkChuck shows how to use Open WebUI, a self-hosted interface for accessing various AI models in one place without subscription fees. The video emphasizes the importance of access control, especially for children, who can also benefit from AI. NetworkChuck mentions that he can restrict what his kids can do with AI and oversee their interactions, an essential aspect of their safety. He argues that children should learn to use this technology, as the future leans heavily toward AI.

The video presents Open WebUI as open-source software that supports a range of language models, from local ones such as Llama to those accessible in the cloud. NetworkChuck discusses two hosting options: cloud hosting, which is the quickest method, and local hosting, which can be done on a personal laptop or Raspberry Pi. The guidance on VPS setup, Open WebUI installation, and hosting with Hostinger is clear and easy to follow.

Throughout the video, NetworkChuck also addresses security and data control, which give users more say over who has access to which AI resources, something that can be critically important from a parental perspective. He also describes how easy it is to add new models using APIs, which can lower expenses and simplifies access to the latest AI models.

As he walks through the Open WebUI setup, NetworkChuck highlights that you can manage the AI models by assigning them to users, while also setting budgets to prevent overspending. His presentation shows how AI can be used effectively in everyday life, whether for oneself, family, or work colleagues.

At the time of writing, NetworkChuck's video had 591,648 views and 17,911 likes, underscoring the solution's popularity. Open WebUI is undoubtedly a tool worth using for day-to-day tasks, and NetworkChuck offers accessible, practical information on implementing it.

Timeline summary

  • 00:00 Introduction to accessing various AI models for free through a self-hosted interface.
  • 00:09 Emphasizes control over usage for family members with restrictions.
  • 00:14 Discusses the importance of monitoring children's AI usage for safety.
  • 00:48 Introduction to Open WebUI as a flexible self-hosted AI interface.
  • 01:02 Overview of setting up Open WebUI in five minutes.
  • 01:13 Explains the self-hosted option and capabilities of Open WebUI.
  • 01:55 Outlines two options for hosting: cloud vs. on-prem.
  • 02:22 Instructions for setting up a cloud-based VPS with Hostinger.
  • 02:56 Details on the capabilities of a VPS and its economic advantages.
  • 03:27 Concerns about managing multiple AIs and hosting.
  • 05:00 Transitioning to the on-premise setup instructions.
  • 05:09 Managing and accessing Open WebUI after setup.
  • 05:32 Creating the first admin account for Open WebUI.
  • 06:12 Discussion on utilizing local models versus cloud-based AI.
  • 06:55 Explanation of API usage to access various AI models.
  • 07:35 Steps to create an OpenAI API key for accessing models.
  • 08:41 Integrating API keys into Open WebUI for enhanced functionality.
  • 09:14 Cautions about costs associated with using different AI models.
  • 13:05 Implementing budget controls for AI usage for family members.
  • 13:12 Introduction of LiteLLM as a proxy for accessing various AI models.
  • 20:55 Demonstrates testing and comparing AI models side-by-side.
  • 22:22 Final thoughts on monitoring children's interaction with AI.
  • 24:14 Plan to set up a user-friendly domain name for the hosted AI interface.

Transcription

I found a way to access every AI, I'm talking ChatGPT, Claude, Gemini, Grok, from one self-hosted interface. And no, I'm not paying for any of these plans. Get out of here. Yet I have unlimited usage, and I get access to the newest models as soon as they come out. No more waiting. And the best part is that all my people get to use it. I can create accounts for my employees, for my wife, for my kids, and they can access all the new stuff. But the best part is that I have control. For example, my kids. I don't want them accessing every AI model, so I can restrict that. I can also restrict what they can ask and what they get help with, so they're not cheating on their homework and letting out some NetworkChuck secrets. And I can see all their chats, which, really, you should be looking at your kids' AI chats if you're letting them have AI. And you should let them have AI. Hot take, but I think kids need to learn how to use it, because that's kind of the future right now. It's not going anywhere. But seriously, I love this solution. It's better security. My data is a bit more safe. And oh my gosh, the amount of features it has, I'm addicted. This might be the better way to use AI. This is Open WebUI. Now, you've probably heard of it. In fact, I've talked about it, but this video is going to come at it in a bit of a different way. I'm going to try something new. And if you've never heard of it, get your coffee ready. I'm going to have you set up in about five minutes. Let's go. Okay, Open WebUI. It's an open-source, self-hosted web interface for AI, and it allows you to use whatever LLM, or large language model, you want to use. And it's not just cloud stuff like ChatGPT and Claude, which, by the way, you're probably wondering how we're going to run those. You'll see. It's really awesome. But it's not just those. We can run self-hosted models with Ollama. I'm talking Llama 3, Mistral, DeepSeek. You can run them all.
And I actually, I often run them side by side. Two, three, sometimes four. One of my favorite features. Speaking of features, fair warning: there goes your weekend. There are so many to play with. It's addicting, but it's also simple enough for anyone to start using immediately, so don't worry. But I will say this, asterisk: this isn't for everyone. There's one asterisk, one thing that might scare you away. You'll see, but I'm still here. I'm still going to use it. We'll cover that later. Now, what do we need to get this set up? As I mentioned, this is self-hosted, which means you yourself are going to host this somewhere. You're going to set it up. You're going to install it. And for that, you really have two options: either the cloud, which is the easiest and fastest method, or you can go on-prem, hosted in your house. This could be on your laptop, on a NAS, on a Raspberry Pi. I'll show you both options. Whichever you choose is going to be quick and easy, and you're going to be like, how was that so fast? And how is this so amazing? Trust me, you will. We'll start with the cloud. Don't blink, it's going to be fast. And for this option, we'll be setting up what's called a VPS, or virtual private server, in the cloud. And we'll be setting it up on Hostinger, the sponsor of this video. So real quick, in the description, I have a link: hostinger.com/networkchuckvps. Go ahead and go there. Click on choose your plan, and KVM 2 is my favorite option, because you're essentially giving yourself a very healthy home lab. Hey, NetworkChuck from the future here. I know you're probably thinking, six bucks a month, why don't I just pay for ChatGPT? Hey, I get it, but here's why I still love this. First, it's cheaper than ChatGPT. Second, you're getting your own server that can host your own ChatGPT, which is just cool. And third, you can host more than just Open WebUI. The server's beefy enough to do a lot more things. It's a home lab.
I'm telling you, a healthy home lab. Just wanted to add that context. Anyways, back to me. Look at this thing: AMD EPYC CPU, eight gigs of RAM, NVMe storage, plenty of bandwidth. And my favorite feature for all you home labbers: backups and snapshots, because, you know, we break stuff, and you're going to need this. So just know, not only will this puppy run Open WebUI just fine, you'll be able to add more stuff to it. More projects, resume-building moments. I just started watching Home Improvement again, so I feel like I need to do this. Oh, sorry, I couldn't do it. That's embarrassing. While I deal with that real quick, you do it at home. See if you can do it. Tim "The Tool Man" Taylor. Love that show. That show still hits. Anyways, let's keep building this. So I'll choose the KVM 2. If you don't already have an account, it'll ask you to make an account. Choose your term. I'm going to do, not 24 months, 12 months sounds pretty good to me. And check this out: coupon code right over here on the right. Type in networkchuck10, apply that sucker. It's now cheaper. Now pick where you want it to be, somewhere close to you. Actually, yeah, Phoenix is good. I think it will automatically tell you based on the latency to you. And then we'll choose our OS. Now, for us, because we want to do Open WebUI, we're in luck. We'll actually click on Application right here, and we'll click on show more, because I don't see it right now. Where are you at, buddy? We're going to be looking for... Ollama. Ah, there it is. He's hiding from me. They probably have one of the cutest logos in the industry. We're going to go ahead and select this, because not only will it install Ollama, which is what we'll use to run local LLMs, it will also install Open WebUI, just like that. And it's going to be on Ubuntu 24.04, so you can add plenty of stuff on top of that. All right, let's go ahead and click on confirm. Continue. Actually, I lied. You're going to get logged in right here.
Enter all your info. Free malware scanner? Sure. Okay, click continue. Enter a root password. This will be the password that you'll use to log into your VPS. Click continue. And I think we're almost done. Yeah, finish setup right here. Go. And it's setting it up right now. You have a virtual private server being spun up in the cloud, and they're installing Open WebUI along with Ollama. And all you have to do is sip some coffee. It's pretty cool. For on-prem, go watch this video right here. I'll walk you through it. Just pause me, I'll still be here. Come back and see me. All right, it is done. We'll click on VPS management page. Go look at it. And here is mine. Go ahead and click on manage over here on the right. And right now, Open WebUI is just waiting for us. Click on the manage app button right there. What that will do is launch another tab. Go and click on that. It's essentially your public IP address on port 8080. And here we are. This is your Open WebUI. "Unlock mysteries wherever you are." Sounds like AI made that. All right, go and click on get started at the bottom there. And here we'll create our first account for Open WebUI. This first account will be your admin account, so you have godlike powers over everything. Click on create admin account, and celebration, we're here. Okay, let's go. Now, if you followed along with the Hostinger setup, you'll see by default we've got a nice little AI model to play with: Llama 3.2 1B. As opposed to an OpenAI model like ChatGPT, Llama 3.2 is a local model. It will use your server's resources instead of OpenAI's. Let's talk to him. Hey, how are you? And you know, it feels like ChatGPT, right? Same kind of familiar interface, except, as you might see, it's slower. Actually, that wasn't too bad. And it wasn't too bad because this is not a very smart model. It's very small, which means it's going to be a bit dumber than the other ones.
And you won't really be able to run bigger, smarter models unless you have some killer hardware. I'm talking GPUs, Terry. But we don't really care about that right now, because we're not done yet. We're about to add some big boy models from the cloud. Now, when you want to access AI models like ChatGPT or Claude, you usually have two options. Option one: normie mode. You go out to ChatGPT, you pay a monthly plan, pay a lot if you want to access all the new stuff, and that's it. You're done. It's easy. No shame, I do it. But then option two is where things get interesting: APIs. Application programming interfaces are what developers use to integrate AI like ChatGPT into their apps and programs. Okay, so what? We're not writing an app. Why do we care? Well, it comes down to how they pay for that access. Normies pay a set price per month. With APIs, you pay as you go, or you pay for what you use. Two reasons why that's amazing. First, providers normally give API access to all of their models, especially the ones they just released. So think GPT-4.5, which just came out. The people who have access to that on the normie plans are only the $200-a-month Pro people. The $20 users? Sorry, you're out of luck. But if you're using an API, you get access to that right now. The second cool thing is that you may end up saving money. Not guaranteed, massive asterisk, but if people on your team or in your house aren't really heavy users of AI, paying for a full plan for them doesn't make any sense if they're only going to be using 50 cents a month. Okay, so what does that look like? Well, let's get signed up for it right now. Let's go out to OpenAI. And instead of going to ChatGPT, we'll go to openai.com/api, and we'll get signed in or create an account, whatever you've got to do. Once you're in, you'll go to the top right and click on start building. And here, yeah, it's going to ask you for a credit card, but you're not going to be charged per month.
You're only going to be charged for what you use. Initially, you can add just five bucks. That's five bucks that will sit there until you use it. So I'll go ahead and add a credit card right now. I'll top it off with five bucks. And then I'll go and create what's called an API key. This will actually unlock all these ChatGPT models for us in the Open WebUI interface. To get that API key, we'll go to settings at the top right. Just click that little gear there. Once there, we'll go to the left and click on API keys, and we'll create a new secret key. Name it, put it in the default project, leave everything else as is, and click on create secret key. There's your key. Copy it. Let's go put it inside Open WebUI right now. Here in Open WebUI, we're going to go to the top right, click on our profile icon, and click on admin panel. From here, we'll click on settings and then connections. Connections are what give us additional functionality, additional LLMs, for Open WebUI. And right there, there's a blank space, baby, sitting right there for us. Sorry, Taylor Swift. We're going to paste our API key right there and click on save. Now, no fireworks, nothing crazy. What happened? Let's click on the little menu thing on the left to open that up, expand it, and then click on the pencil to start a new chat. And at the top there, we'll change our model from Llama to whatever we want. And, what? Look at all these GPT models. We have access to everything, including that new 4.5 model. Let's search for it real quick. Where's it at? There it is. Let's start chatting with it. Let's just have fun. And right now, if you followed along, you're using a $200-a-month model for nothing. Well, not for nothing, as we're about to see. Don't get crazy yet. Let me cover this part, because we've got to talk about how we pay for these AI interactions. And this is the asterisk, the little gotcha you've got to be careful about.
So when you're talking to an AI model, specifically an LLM, a large language model, that's going to be text-based. The way they charge us is by tokens. It's like Chuck E. Cheese, just without crappy pizza and a scary mouse. Now, what's a token? A token is a word, in some cases. So, for example, a small word like "you", that's probably going to be one token. Other, more complex words might be broken up. Actually, let me ask it: how many tokens was your last response? It's 15 tokens. Break that up so I can see which words were tokens and which were broken up. That's so sick. Okay, it's doing my job for me. Dude, punctuation? It's its own token. What a rip-off. If you want to save money with AI, don't use punctuation. Okay, words equal tokens. I still don't understand how much money we're being charged. Let's go to the chart. How much you're charged will depend on which model you're using. Certain models are smarter, and they require more resources to answer your questions. And that's on display right here. The o3-mini model, which is a solid model, is going to cost you a dollar and ten cents per 1 million tokens. So that's a healthy amount of interaction, right? On the other hand, the o1 reasoning model will cost you $15 per million tokens. That's not scary? You want to see what's scary? The model we were just using, GPT-4.5, is their most expensive model: $75 per 1 million tokens. And that's just input. Notice they do have an output section too. I wish I could do that for people, like for my kids. Charge them for talking to me, and then when I give out my wisdom, make it more expensive. It's genius. Now, I know it's kind of hard to break down. Like, what does a million tokens mean? How much money am I going to be spending? And am I going to be saving money? Here's your warning, right? So, a casual user, let's say they have 50 conversations a month, about a thousand tokens each. It could be as low as 50 cents, assuming they're using a model like GPT-4o.
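To make the per-token pricing concrete, here is a minimal sketch (my own illustration, not from the video) that turns the input rates quoted above into dollar amounts. Only input tokens are modeled; output tokens are billed at separate, higher rates.

```python
# Rough input-cost calculator using the per-million-token rates quoted above.
# Prices are USD per 1M *input* tokens; output pricing is separate and higher.
INPUT_PRICE_PER_MILLION = {
    "o3-mini": 1.10,
    "o1": 15.00,
    "gpt-4.5": 75.00,
}

def input_cost(model: str, tokens: int) -> float:
    """Dollars charged for `tokens` input tokens on `model`."""
    return tokens / 1_000_000 * INPUT_PRICE_PER_MILLION[model]

# The casual user above: 50 conversations a month at ~1,000 tokens each.
casual_monthly_tokens = 50 * 1_000
print(round(input_cost("o3-mini", casual_monthly_tokens), 4))   # pennies
print(round(input_cost("gpt-4.5", casual_monthly_tokens), 4))   # dollars
```

The same 50,000 tokens cost about six cents on o3-mini but several dollars on GPT-4.5, which is exactly why model choice dominates the bill.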
Now, if you use AI like me, that's very low usage, but some people are like that. A moderate user might have 200 conversations a month, and this could be anywhere from 5 to 10 bucks a month. Power users, and keep in mind these are all very rough estimates, this can be sky's the limit: 20 bucks to infinity. Is that how you draw the infinity symbol? I think I'm nailing it. Yep, got it. Like, I can tell you right now, me as a power user, it would not be 20 bucks a month. It'd be a lot more. What impacts that? Well, which models you choose. I talk to the best models a lot. GPT-4.5, oh yeah. o1, o3, talking all day. And my conversations are long, and that does impact how much it's going to cost. Context. When you're using Open WebUI, the context of our conversation, our messages, are being sent each time I say something to the API, so that it knows what I'm talking about. So the number of tokens I'm using grows rapidly with the length of my conversation. And sometimes I sit there and talk for a while with an AI to figure stuff out. Now, notice as part of OpenAI's pricing, and this is very specific to OpenAI, they do have cached input, which will help offset a lot of those costs. They will cache your responses, kind of keep them in memory over time. I think it's like 24 hours by default. They may change that, don't quote me on that. So I'll say all that as a warning: be careful. Can this save you money? Maybe, but I wouldn't do this with saving money as the primary goal. For me, it's more about, I want to give my family, myself, and my employees access to all the AI. And I don't want to pay for 15 million plans and have to manage all these different things. I want one interface, one place to go. And I want control. Now, if you're worried about this, I will show you ways we can put in budgets with a tool I'm about to show you. It's so cool. You can put a budget in per person so they don't go over. Like, you're stuck at 20 bucks a month.
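The point about context is worth quantifying. Because the full conversation history is resent with every message, the total input tokens you are billed for grow much faster than the conversation itself. A small sketch (my own illustration, assuming each turn adds a fixed number of tokens):

```python
def total_billed_input_tokens(turns: int, tokens_per_turn: int) -> int:
    """Total input tokens billed when the whole history is resent each turn.

    On turn k, the prompt contains all k turns so far, i.e. k * tokens_per_turn
    tokens, so the running total is tokens_per_turn * (1 + 2 + ... + turns).
    """
    return tokens_per_turn * turns * (turns + 1) // 2

# A 10-turn chat at 200 tokens per turn "feels like" 2,000 tokens...
print(10 * 200)                            # 2000 tokens actually typed
# ...but bills as 11,000 input tokens once the resent history is counted.
print(total_billed_input_tokens(10, 200))  # 11000
```

This quadratic-ish growth is what makes long conversations on expensive models costly, and it is also what provider-side prompt caching helps offset.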
If you use that 4.5 all day, you're done, buddy. You're talking to Ollama for the rest of the day. Why is Alex's work so crappy after three? I don't know. Let's break this down. Oh, this is some scribbly writing. Beautiful. Dude, I'm on a roll today. Let's keep going. Now we're jumping into a very fun part of this tutorial, and it's to solve kind of a big problem with Open WebUI. Check this out. If I go back to my settings, where I added the OpenAI API key, and my connections, I really only have options for two types of connections: OpenAI API and Ollama API, Ollama being the local option. What about Claude? What about Gemini? What about all these fun ones I want to try? The whole point of this was to try everything. Yeah, that's kind of a problem, because you can't just plug in Claude right here, or Anthropic. It won't happen. This is where a tool I fell in love with comes in. It's called LiteLLM. LiteLLM is a proxy for AI, or a gateway. If we go to the webpage real quick, they connect to so many AIs. I think they say a hundred plus, right? And that's exactly what we're going to do. So check this out. Open WebUI, all it's going to have to connect to is LiteLLM. And it does that just fine, because LiteLLM has an OpenAI-compatible API. It does great. And then with LiteLLM, we connect everything else: OpenAI, Anthropic, which is Claude, Gemini, Grok, DeepSeek. And no, not the one hosted in China. You can actually access an American-hosted DeepSeek on another service called Groq, with a Q. Very confusing, but very cool. Now, LiteLLM will be a proxy server that we'll install alongside Open WebUI. It's not scary, trust me. Like, it'll take like three seconds. You ready? Get your coffee. Let's install LiteLLM. So real quick, we're going to access the same server we installed Open WebUI on. If you followed along with me on the Hostinger side, setting up a VPS, right here in our portal where we're managing our VPS, we're going to access the terminal, which is super easy for us.
There's a button right here: browser terminal. Go ahead and click on that. For everyone else, just access the terminal of whatever server you want to deploy this on. We'll deploy it via Docker, very similar to how we set up Open WebUI in the other tutorial you watched earlier. I said "earlier" too much. All right, we're inside the terminal. I will have the commands below, but the first thing we'll do is use git to clone the LiteLLM proxy server. Git clone. So let me give you some room up here. There we go. Git clone, and then the address: LiteLLM. Ready, set, clone. This will clone that repo from GitHub and create a folder for us that we'll jump into here in a moment. A little coffee break. And it is done. Type in ls to see our new folder. There it is. Type in cd litellm to jump into that folder. We're in. Now we're only two commands away. First thing we'll do is use nano. Type in nano, the best text editor ever, and we'll edit the hidden file, .env, just like that. And we're going to add two lines of config. First, we'll type in LITELLM_MASTER_KEY, all caps. We'll have that equal, in double quotes, sk- something. Ideally, you want it to be a randomly generated key. Actually, I'll just use dash to do that for me right now. I'll just do digits and letters, we'll do 10 of them. And you'll want to copy this down somewhere. This will be your password to log into the server once we build it. I just clicked out of my browser terminal. Good thing I copied my password. I will close out with double quotes, hit enter, and we'll add one more line of config. We'll add the LITELLM_SALT_KEY, just like this. I'll have that equal the same kind of starting point, sk- and then a randomly generated string of characters. This will be used to encrypt and decrypt your LLM API key credentials. So I'll randomly generate some stuff real quick. So I'll copy all of this real quick, put that somewhere safe, then hit Ctrl+X, Y, Enter to save.
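For reference, the two lines of config described above would look something like this in the .env file (the sk- values are placeholders to replace with your own randomly generated strings, which you should save somewhere safe):

```
# .env inside the cloned litellm folder
# Master key: the admin password for the LiteLLM UI and API.
LITELLM_MASTER_KEY="sk-REPLACE-WITH-RANDOM-STRING"
# Salt key: used to encrypt/decrypt your stored LLM provider API keys.
LITELLM_SALT_KEY="sk-REPLACE-WITH-ANOTHER-RANDOM-STRING"
```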
And for most scenarios, all we have to do is type in docker-compose up -d. Ready, set, go. And this is literally building our server. We don't have to worry about anything else except making sure we sip some coffee while it's happening. Now, while that's installing, let's get our API keys ready. First, we need our OpenAI API key. Easy for me to say. I normally like to create a new key for every service, so I'll create a new one, call this LiteLLM, default project, create it, copy it, get it ready. And the same process you can repeat for Anthropic, for the Claude models, and Gemini, for the Google-based models. I'm just going to do Anthropic for now. And I'll grab Grok too, Grok being xAI, Elon Musk's AI, which is actually pretty amazing. Unfortunately, I don't think Grok 3 is available via the API just yet, but I'll go and create a key. And it's done. If you see something like this, you're solid. If you type in docker ps, because everything is running through Docker, we'll see all of our healthy containers running. Now, what we'll do is open up a new tab. Actually, I need to grab the IP address of my server here. Where'd it go? There it is. Grab that IP address. And in your address bar, go out to that IP address, port... I think it's 8000. What was it? Oh, it's 4000. Port 4000. There we go. And then we'll click on LiteLLM Admin Panel UI. Click on that. The username will be admin, and the password will be that master key you set up in the environment variable, the sk- one. And we're in. Now, lots of bells and whistles. All we care about right now is doing a few things. First, let's go to models on the left here. And then right here, in the top menu, you'll see the option to click on add model. And then we'll add our first model. Let's start with Claude, so I'm going to click on Anthropic. And we could either choose all models, like just go crazy, select them all, or be very specific. So maybe I only want the 3.7 latest, and 2.1 to compare how dumb it is.
Then I'll add my API key here and add the model, just like this, at the bottom right. Clicking on all models, you can see them sitting right there: 2.1, 3.7. And then here's the cool part. This is where the proxying comes in. We'll go to the top left and click on virtual keys. We're going to create our own virtual API keys that can control so many things. Check this out. We'll create a new key. For now, we'll say it's owned by us. We don't need a team or anything. We'll name the key, I don't know, "kids". So let's say we're making it for my kids. And we'll say the models they can access are 3.7 and 2.1. Checking out optional settings, you can add a budget: 20 bucks. And this would be a monthly budget. And you can do a lot. Like, you can expire the key. They have a thing called guardrails, which we're not going to cover right now. We'll go ahead and create the key. And there's our key. We'll copy it, and now we'll add it to Open WebUI. So here we're in Open WebUI, on our admin panel, on the connections, and I'm going to delete my OpenAI API key. Delete. You don't have to do that, by the way. Now I'm going to add my LiteLLM API key under the OpenAI API section. I feel like I've been saying "OpenAI API" so much. The base URL will be http://localhost:4000. So, colon 4000. And then we'll put our API key right in here, just like that. And click on... actually, no, we'll test it real quick. Verify connection. Verified. And that's because they're on the same server. Localhost is right there. Click on save. Now, if we go back and try to create a new... oh, there it is. New chat. Claude, sitting right there. Oh, that's so cool. How are you doing, Claude? Ah, love it. Check this out. I'm going to go, I'm just going to show it to you right now, I was going to wait, but click on add model. We can put Claude 2.1 there as well. Let's do a new chat. Actually, let's add them side by side and say, tell me a riddle, and they'll answer it side by side.
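Because LiteLLM exposes an OpenAI-compatible API, anything that can call OpenAI's chat completions endpoint can talk to the proxy just by swapping the base URL. Here is a minimal standard-library sketch of that idea; the host, port, key, and model name are placeholders matching the setup above, not values from the video.

```python
import json
import urllib.request

def build_chat_request(model: str, user_message: str) -> dict:
    """OpenAI-style chat completion payload, which LiteLLM accepts as-is."""
    return {"model": model, "messages": [{"role": "user", "content": user_message}]}

def chat(base_url: str, api_key: str, model: str, user_message: str) -> str:
    """POST to the proxy's OpenAI-compatible endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(model, user_message)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # a LiteLLM virtual key works here
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example usage (placeholder key; assumes the proxy from the video is running):
# print(chat("http://localhost:4000", "sk-your-virtual-key",
#            "claude-3-7-sonnet-latest", "Tell me a riddle"))
```

This is exactly why Open WebUI's "OpenAI API" connection slot works for Claude, Gemini, and the rest once LiteLLM sits in the middle.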
How cool is that? Now real quick, I'm not going to make you wait. I'm going to add OpenAI and Grok. Now I added these models, and now I have Grok and GPT-4o and o3-mini, but no one inside of Open WebUI will have access unless I give those virtual keys access. So I can edit my key, go to settings, edit settings, and add additional models. So add o3-mini, Grok, GPT-4o, save. And now, back in Open WebUI land, I'm going to refresh and see if they show up. I'm going to do a new chat. There it is. I Grok'd the party here. GPT-4o, o3-mini. So now I've got four different AIs, and we'll add a Llama in for fun too. How many Rs are in the word strawberry? And now they're all answering, except o3-mini doesn't like it. Claude got it right. Grok got it right. GPT-4o got it right. And Llama's dumb. How cool is this? And over here on the LiteLLM side, you can add as many virtual API keys as you want and add those into Open WebUI. And actually, check this out on the LiteLLM side. If I go to usage, it'll show me how much is being spent. It probably needs some time to catch up with the other ones, but this is now my AI hub, and this is where I'll control the budget. And then, back in Open WebUI land, just a few things I want to cover real quick. First, my kids. Let me add my kids to my team here. I go to settings, admin settings, and then users. I can create groups. Let's create a group, call it "kids". I'll go back to overview and create some users here: kid one and kid two. I can go to my groups and add them to the kids group. And here I can say what permissions they have access to. Can they access models? Can they access knowledge and prompts and tools? Which is a whole world of things I can't talk about right now, because this video would be way too long. I'll click on save. And we can also control who has access to what models. Let's say I only want them to have access to Claude 3.7, because it's the smartest. I can go in here to the model, click on groups, and say the kids have it.
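The virtual-key step can also be scripted against LiteLLM's REST API instead of the UI. The sketch below only builds the request body for LiteLLM's key-generation endpoint; the field names (models, max_budget, budget_duration, key_alias) are my recollection of LiteLLM's documented schema, so treat them as assumptions and check the current docs. Sending it would be an authenticated POST to /key/generate using the master key.

```python
def build_virtual_key_request(alias: str, models: list, max_budget_usd: float,
                              budget_duration: str = "30d") -> dict:
    """Request body for LiteLLM's /key/generate endpoint (field names assumed
    from LiteLLM's docs; verify against the version you deployed)."""
    return {
        "key_alias": alias,
        "models": models,                    # which proxied models this key may call
        "max_budget": max_budget_usd,        # hard spend cap in USD
        "budget_duration": budget_duration,  # cap resets on this interval
    }

# A "kids" key limited to two Claude models with a $20 monthly budget,
# mirroring the key created in the video (model names are placeholders):
kids_key = build_virtual_key_request(
    "kids", ["claude-3-7-sonnet-latest", "claude-2.1"], 20.0
)
```

The budget fields are what enforce the "you're stuck at 20 bucks a month" behavior described earlier: once the key's spend hits the cap, the proxy rejects further requests until the duration resets.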
Everyone else? Sorry, no. I can also do this: give it a system prompt. You are a school helper. Your job is to help kids with their school, but you cannot do their work for them. Never let them cheat. Never write an essay or solve a problem. You must guide them. And you can only talk about school-related subjects. Guardrails in place. Click on save. And I'll just grab this URL real quick, open it up in an incognito window, and log in as my kids. Kid1 at hotmail.com. All right, I've only got access to one model. Write a paper for me about George Washington. And there we go, it won't write it for me. What is two plus two? Oh, it gave me the answer. What is nine times seven divided by four? Okay, it'll help with math. Let's ask it something non-school related, like, what is the plot of the movie The Matrix? Oh, it's answering. Oh, film studies class. Okay, got it. This is something my daughter would ask, so it just relates it back to school. That's very cool. I like that. Now, the best part: getting back to the users, on kid one here, who was just having the conversation, I can click on chats, and there it is. I can jump right in there and see everything that was said. Which, you know, for my employees, I'm not going to monitor that, and I can turn that off. But for my kids? 100%. AI is nuts, and you guys should keep an eye on that kind of stuff. Now, this video is way too long. I'm like sitting here staring at the screen. Can't talk about that, no. Can't talk about that, no. It'd be too long. Let me know if you want me to make another video covering the ins and outs of Open WebUI, because it has tools, prompts, functions, pipelines, image generation. Oh, it's so addicting. And I would love to hear if you've done anything cool with this as well. Now, there's one last piece of this I haven't shown you yet, and that's this up here. Right now, it's just an IP address. You don't want to give your family and friends an IP address. Like, hey, go out to 1-8-5-2-8-2-2-4.
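The model-level system prompt works because it is prepended to every conversation as a system message before anything the user types. A small sketch of what the model actually receives (the prompt text is from the video; the message structure is the standard OpenAI chat format):

```python
# Guardrail system prompt from the video, attached to the kids' model.
SCHOOL_HELPER_PROMPT = (
    "You are a school helper. Your job is to help kids with their school, "
    "but you cannot do their work for them. Never let them cheat. Never write "
    "an essay or solve a problem. You must guide them. And you can only talk "
    "about school-related subjects."
)

def with_guardrail(user_message: str) -> list:
    """Messages as the model sees them: system prompt first, then the user."""
    return [
        {"role": "system", "content": SCHOOL_HELPER_PROMPT},
        {"role": "user", "content": user_message},
    ]

msgs = with_guardrail("Write a paper for me about George Washington")
```

Because the system message arrives before the request to write the paper, the model refuses and steers back to guidance, which is the behavior shown in the demo.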
Like, that's the new AI server? No, that's terrible. I'm going to walk you through how to set up a DNS name. We'll purchase it on Hostinger. I'm going to walk you through how to set up a friendly domain name for this, but we'll do that in a separate video, right here. That's all I got. Thanks again to Hostinger for sponsoring this video, and I'll catch you guys next time.