Artificial Intelligence for Preppers?! (video, 10 minutes)
Data Slayer presents a hand-crank generator that can power devices such as a phone completely independently of the grid. Its true potential is revealed when paired with a modern local LLM like Mistral 7B V2. Together they form a fully self-sufficient system that operates without the internet, making this one of the most compelling pieces of survival tech for preppers: it can answer medical questions, provide practical how-tos, or explain how to purify water. That transforms what could be just another piece of equipment into a crucial lifeline in critical situations.
The first step in building this rig is to secure a power source. Data Slayer bought a versatile generator from Amazon. Note that it outputs 5 volts at 4 amps (20 W), which is insufficient for the newer Raspberry Pi 5, whose official requirement is 5 volts at 5 amps. Using the generator to charge a battery is likely the more practical approach; however, Data Slayer points out that charging a large power bank takes around 6.2 hours, which can feel lengthy.
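For rough planning, the charge-time arithmetic is straightforward. The sketch below is illustrative rather than taken from the video: the pack capacity (20,000 mAh at a nominal 3.7 V) and the ~60% charging efficiency are assumed values, chosen to show how a figure like 6.2 hours can arise from a 20 W (5 V × 4 A) source.

```python
# Rough battery charge-time estimate. The capacity and efficiency
# figures are assumptions for illustration, not values from the video.

GENERATOR_WATTS = 5 * 4      # 5 V at 4 A from the hand-crank generator
CAPACITY_AH = 20.0           # assumed 20,000 mAh power bank
NOMINAL_VOLTS = 3.7          # typical Li-ion nominal cell voltage
CHARGE_EFFICIENCY = 0.60     # assumed conversion and charging losses

energy_wh = CAPACITY_AH * NOMINAL_VOLTS      # energy stored: 74 Wh
input_wh = energy_wh / CHARGE_EFFICIENCY     # energy the crank must supply
hours = input_wh / GENERATOR_WATTS

print(f"~{hours:.1f} hours of cranking")     # ~6.2 hours
```

Any real pack will differ, but the structure of the estimate (capacity × voltage ÷ efficiency ÷ input watts) carries over.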
He also explored alternative energy sources, such as water turbines, solar panels, and more ergonomic pedal-based generators. To download and run language models on a phone, Data Slayer recommends the LLM Farm app, which manages open-source LLMs. After acquiring the Mistral 7B model in GGUF format, the remaining work is configuring the app to use it, which is critical for ensuring access to vital information in crisis situations.
The video demonstrates how simple smartphone apps can be utilized to gain access to language models that provide knowledge on important survival topics. For example, when asking what to do if encountering a bear, the app generates appropriate and safe advice. Although Mistral 7B is a general-purpose model, there are more precise models fine-tuned for specific fields, like MedLlama, which can answer medical questions.
In conclusion, Data Slayer highlights that with modern local technologies and the development of alternative power sources, we are likely on the brink of a significant shift in how we interact with technology. At the time of writing, the video statistics indicate it has been viewed 18,436 times and has received 625 likes. This shows a strong interest in such technological innovations and their practical applications in everyday prepping.
Timeline summary
- Introduction to a hand-crank-powered generator that operates independently of the grid.
- Combining the generator with a local large language model (LLM) to create a self-sufficient system.
- The setup answers medical questions and teaches essential survival skills.
- Discussing the limitations of traditional survival gear versus accessing information through AI.
- Introduction to building the generator setup and its versatility.
- Encountering challenges while trying to power a Raspberry Pi with the generator.
- Considerations for charging rates and alternative power sources.
- Discussing the importance of downloading language models onto the phone.
- Explaining options for running language models on devices.
- Instructions to download an app called LLM Farm for using open-source LLMs.
- Searching for the Mistral 7B model to set it up with the installed app.
- Choosing a balanced-quality model for efficient use.
- Demonstrating a practical survival question about encountering a bear.
- Introducing specialized models for specific survival questions such as medical emergencies.
- Closing thoughts on the future of local AI and alternative power generation.
Transcription
This is a manual hand-crank-powered generator, and it can power devices like my phone, completely independent of the grid. But its true potential is unleashed when paired with a state-of-the-art local LLM like the Mistral 7B V2. This combo elevates it to one of the most innovative pieces of survival tech I've encountered, creating a fully self-sufficient, closed-circuit system that operates without grid power or internet, something every prepper can appreciate. It's capable of answering medical questions, providing practical how-tos, or even instructing you on how to purify water, saving you from having to resort to Bear Grylls's more extreme and rather unique hydration strategies, turning this hunk of metal scrap and silicon into a lifeline, in the literal sense of the word. Don't get me wrong, a bunker full of MREs and some LifeStraws is certainly a start, but that will only get you so far, and being able to Google for life-saving advice could mean the difference between living in the Stone Age, or ushering in a brave new future of local AI. Because when the water wars begin, there might be one resource that proves even more valuable: information. Let's build our rig and demonstrate its utility as a vital asset in the doomer's toolkit, in the face of the most dire survival scenarios. So first, we need a power source. This is essential. So I bought this off Amazon for 46 bucks, and it seems pretty versatile. It came in a nondescript package with instructions that were clearer than I expected. I could feel my sense of newfound independence already. So I set up my Raspberry Pi single-board computer and got cranking to see if I could turn it on by providing a direct current to my device. Kids, don't try this at home. And it worked. Well, kind of. So basically, this thing can generate 5 volts at 4 amps for its USB interface, which is not enough for my 5-volt, 5-amp Raspberry Pi 5. And look, there might be a way to get around that with intermediaries and batteries and such.
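The power mismatch described here is easy to verify on paper. A quick sanity check, using the generator's stated output and the Raspberry Pi 5's published 5 V / 5 A USB-C requirement:

```python
# Compare the hand-crank generator's USB output with the Raspberry Pi 5's
# official power requirement (5 V at 5 A over USB-C).

generator_watts = 5 * 4   # generator: 5 V at 4 A = 20 W
pi5_watts = 5 * 5         # Raspberry Pi 5 spec: 5 V at 5 A = 25 W

shortfall = pi5_watts - generator_watts
print(f"Shortfall: {shortfall} W")   # 5 W short of the Pi 5's rated draw
```

In practice the Pi 5 may boot on less than its rated 25 W, but it will throttle or limit USB peripheral power, which matches the "it worked, well, kind of" result in the video.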
But as I was building that, it dawned on me, even though the Raspberry Pi touchscreen looks super cool, and there are companies looking to build all-in-one AI-in-a-box-like products, it's probably a lot more practical to just charge your phone than run the LLMs there. Additionally, we would really want to use the power source to simply charge a battery; then the battery power can be doled out to charge your devices as needed. Probably obvious by now, but charging the battery is slow. I didn't have the patience to charge the whole thing, so I did the math on it instead, and it turns out it would take 6.2 hours to charge this battery pack. But this thing is pretty big. But as far as battery goes, the hand-crank generator is probably not even the best. You could definitely explore solar, wind, hydro, or even pedal-based generators that are more ergonomic and might spare you the inevitable carpal tunnel resulting from this one I have. I've had some fun playing around with this hydroelectric turbine that you can just place in the flow of any moving water. It'll also generate a 5-volt current, and it's pretty easy to use. It's called a WaterLily. Okay, so now that we're harnessing the infinite energy glitch, we need to be able to download and run the small language models on our phone. Intriguingly, alongside your language models, you have the capability to back up the entirety of the world's information. For instance, Wikipedia's comprehensive corpus occupies nearly 30 gigabytes, fitting effortlessly onto a microSD card like this. Okay, so there's different ways you could run these models. I've shown in prior videos how you could run them on Linux-based single-board computers like the Raspberry Pi or the Xenoblade, and there's some benefits to that. Like, you have a little bit more low-level access. So for instance, you could spin up Ollama on those devices, and you can kind of rotate through the different models that you want to use.
And you could use more sophisticated models like, for instance, LLaVA, which is a vision-based model which you can feed an image. So, you know, think you're in a kind of disaster scenario, and you come upon some sort of berry or mushroom. You could take a picture of it, provide it to the model, and the model's going to be able to tell you whether it's poisonous or not, which I do find some utility in. But it takes a lot of work just to get that kind of added medium by which to use the model. So, I think for most people, what makes the most sense is just using your phone and interacting with text-based models. And also, if you did have some sort of food that you weren't sure about, you could just describe it in text, and you'd probably get the same answer. So I don't think there's any big stepwise shift in using an image-based model. So the way we're going to make this happen is, if we go to the App Store, we are going to download a program that can run these kind of like open-source LLMs. So there's an app called LLM Farm, and it will help us do just that. So we're going to go ahead and download it, and then let's go ahead and open this up. Okay, so this looks good, but there's no models, right? So we're going to go to Chrome, and we're going to go to Hugging Face, a kind of repository for the weights for different models. And we are going to search for Mistral 7B. But the crucial element here is the model format. We want a GGUF-type model, and based on what I've seen, most models offer that. So let's do Mistral 7B GGUF, and we also want the v2 of Mistral. Why do we want Mistral? Mistral is a model that's designed to be very performant and accurate with minimal resources. So it's quantized, it's small, but it still rates very well. And they just came out with a new one, the v2, so we're going to use that. So I'm going to click this first one here. The author is TheBloke. And for each model, you still see different sort of variants of the model.
Like if we come down here, we're going to get this table here. On the left, we see the model. Then we see the quantization method. We see the size of the model. These are going to be several gigs each. We also see max RAM required, right? So the big models down towards the bottom, I mean, some of these require 10 gigs of RAM. They're going to give you better results, but they're going to be slower. You know, it depends on your degree of patience. These Q4 models, they strike a good balance, and those will respond to you, at least on my iPhone Pro Max. It responds to me at a pace comparable to what I can read at. So it's very much just like a chatbot. So let's go ahead and grab the guy that says medium, balanced quality, recommended. And it is a Q4. So I'm going to go ahead and click that. And then on this page here, I'm just going to click this download button. And again, it's going to be several gigs, so make sure you have some space on your phone. Okay, so the next thing we're going to do is come over to LLM Farm. We're going to click this plus button, and we're going to do select model, import from file. Okay, so I just searched for Mistral here, and I see this guy. I'm going to select it. I am going to give it a fun avatar. And let's go down to prediction form, or sorry, prompt format. So there is a string of text that we need to throw in here. There will be a full tutorial guide in the description below, so go ahead and click that if you want the exact text you need here. Otherwise, you can copy it off the screen. Okay, and then we're going to go down to prediction options, and we're going to enable Metal and enable mlock. And then we should be good to go. Okay, so now we have it configured. I'm going to click into it, see if we can't get a chat going. Okay, so sticking to the theme of survival tech, let's ask it a very practical question. What should I do if I encounter a black bear? If you encounter a black bear, the first thing you should do is remain calm.
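The quant-selection logic walked through above, picking the highest-quality variant that fits your device's RAM, can be sketched as a small helper. The sizes and RAM figures below are ballpark values typical of a 7B GGUF quantization table, not exact numbers from the video:

```python
# Pick the largest quantized GGUF variant that fits in available RAM.
# Sizes and RAM needs are approximate/assumed figures for a 7B model.

VARIANTS = [
    # (name, file size in GB, approx. max RAM in GB) - lowest to highest quality
    ("Q2_K", 3.1, 5.6),
    ("Q4_K_M", 4.4, 6.9),   # "medium, balanced quality - recommended"
    ("Q5_K_M", 5.1, 7.6),
    ("Q8_0", 7.7, 10.2),
]

def pick_variant(available_ram_gb: float) -> str:
    """Return the highest-quality variant whose RAM need fits the budget."""
    fitting = [name for name, _, ram in VARIANTS if ram <= available_ram_gb]
    if not fitting:
        raise ValueError("No variant fits; try a smaller base model.")
    return fitting[-1]  # VARIANTS is ordered from lowest to highest quality

print(pick_variant(8.0))   # a phone with ~8 GB of RAM -> "Q5_K_M"
```

The recommended Q4 pick in the video reflects the same trade-off: a bigger quant improves output quality but costs RAM and tokens-per-second, so you choose the largest one your device handles comfortably.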
Do not run or make sudden movements that could startle the bear. Two, make yourself known. Speak in a loud voice to let the bear know that you are human and not prey. You can also try banging pots or other objects together to make noise. Okay, so I gotta say this is a great answer, particularly for one of these small models. The intuition to want to run from a giant bear is going to be pretty overwhelming, and this sort of counterintuitive advice could actually save your life in those scenarios. And just as a note, Mistral is really good, but it is a general-purpose model. So if you wanted a specific model fine-tuned, you know, for a particular domain, for instance, like medical questions, you could use something like MedLlama or MedAlpaca. So let's see here, this guy. Yeah, this is a model that is going to be better suited to answer questions like, you know, how to treat a snake bite or bring down a fever and things like that. So this might be a good one to have handy as well. Could save your bacon. So while I may not be your typical prepper, my passion for tinkering has me convinced that we're standing on the brink of an incredible shift that might make de-clouding more of a reality. Whether it be training local GPTs on private documents, running smart sensors with Home Assistant, Plex media servers, facility surveillance with Coral AI edge acceleration, or even leveraging Meshtastic's algorithm for disaster comms. When you combine these amazing local technologies with the rise of alternative power generation through wind, solar, hydro, and yes, even hand crank, it feels like the tech advancements are finally ready to connect us with the earth and with each other in ways we're just beginning to understand. To dive deeper down the rabbit hole, click this next video. Thanks.