Artificial intelligence for preppers?! (video, 10 minutes)
Data Slayer presented a remarkably innovative hand-crank generator that can power devices such as a phone completely independently of the electrical grid. Its true potential emerges when it is paired with a modern local LLM such as Mistral 7B v2. Together they form one of the most advanced pieces of survival tech: a fully self-sufficient system that runs without any need for internet access. For people preparing for the worst, such a rig becomes an invaluable tool that can answer medical questions, give practical how-tos, or walk you through purifying water. That turns ordinary hardware into vital support in critical situations.
The first step in building this kit is the power source. Data Slayer bought the generator on Amazon, and it turned out to be quite versatile. Note, however, that it outputs 5 volts at 4 amps (20 W), which is not enough to power the newer Raspberry Pi 5, which expects 5 volts at 5 amps. Using the generator to charge a battery is the more practical approach, though Data Slayer points out that charging takes about 6.2 hours, which is fairly long.
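A quick back-of-the-envelope check makes the power gap and the charging time concrete. This is a sketch with assumed numbers: only the generator's 5 V / 4 A rating, the Pi 5's 5 V / 5 A requirement, and the ~6.2 h figure come from the video; the battery capacity and the charging efficiency below are hypothetical values chosen to illustrate the arithmetic.

```python
# Back-of-the-envelope estimate for the hand-crank charging setup.
# The battery capacity (99 Wh) and efficiency (80%) are illustrative
# assumptions, not measurements from the video.

def charge_time_hours(capacity_wh: float, input_watts: float,
                      efficiency: float = 0.8) -> float:
    """Hours to fill a battery of `capacity_wh` watt-hours from a source
    delivering `input_watts`, accounting for charging losses."""
    return capacity_wh / (input_watts * efficiency)

GENERATOR_W = 5 * 4   # 5 V x 4 A USB output = 20 W
PI5_W = 5 * 5         # Raspberry Pi 5 wants 5 V x 5 A = 25 W

# Why direct power fails: the crank simply delivers fewer watts
# than the Pi 5 demands.
assert GENERATOR_W < PI5_W

# A hypothetical ~99 Wh power bank at ~80% charge efficiency lands
# right around the ~6.2 h figure quoted in the video.
print(round(charge_time_hours(99, GENERATOR_W), 1))  # -> 6.2
```

The same function shows why solar or hydro sources with higher wattage shorten the wait roughly in proportion to their output.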
He also looked at alternative power sources, such as efficient water turbines and solar panels, which can be more ergonomic. As for downloading and running language models on a phone, Data Slayer recommends the LLM Farm app, which manages open LLMs. After obtaining the Mistral 7B GGUF model, the whole process comes down to configuring the app to work with that model, which is key to guaranteeing access to essential information in a crisis.
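LLM Farm is an iOS app, but the same workflow can be sketched on a laptop with the llama-cpp-python library, which loads the same GGUF files. This is a minimal sketch, not the video's method: the model filename is a placeholder for whichever Mistral 7B GGUF quant you downloaded from Hugging Face, and the loading function is defined but deliberately not executed here because it needs the multi-gigabyte model file.

```python
# Desktop analogue of the LLM Farm workflow, using llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder.

def mistral_prompt(user_msg: str) -> str:
    """Wrap a message in the Mistral-Instruct [INST] chat template,
    the 'prompt format' string the app asks for."""
    return f"<s>[INST] {user_msg} [/INST]"

def ask_local_llm(question: str,
                  model_path: str = "mistral-7b-instruct-v0.2.Q4_K_M.gguf") -> str:
    """Load the GGUF model and answer one question fully offline.
    Not run in this sketch -- it requires the downloaded model file."""
    from llama_cpp import Llama
    llm = Llama(model_path=model_path, n_ctx=2048)
    out = llm(mistral_prompt(question), max_tokens=200, stop=["</s>"])
    return out["choices"][0]["text"].strip()

print(mistral_prompt("How do I purify water in the wild?"))
```

The chat-template wrapper matters: GGUF instruct models give noticeably worse answers if the `[INST]` markers are omitted, which is why LLM Farm asks you to paste a prompt-format string during setup.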
The video shows how simple smartphone apps give you access to language models that can teach you important survival topics. For example, when asked what to do if you encounter a bear, the model generates appropriate, safe recommendations. Although Mistral 7B is a general-purpose model, there are more specialized models fine-tuned for particular domains, such as MedLlama, which can answer medical questions.
In closing, Data Slayer stresses that thanks to modern local technology and the growth of alternative power generation, we are probably close to a major shift in how we use technology. The video's statistics show 18,436 views and 625 likes, which speaks to the strong interest in this kind of innovation and its everyday use among people preparing for the worst.
Timeline summary
- Introduction to the hand-crank generator, which works independently of the grid.
- Pairing the generator with a local large language model (LLM) to create a self-sufficient system.
- The rig answers medical questions and teaches basic survival skills.
- Discussion of the limitations of traditional survival gear compared with AI-based access to information.
- Introduction to building the generator kit and its versatility.
- Running into challenges when trying to power a Raspberry Pi from the generator.
- Considerations on charging speed and alternative power sources.
- Discussion of the value of downloading language models onto a phone.
- Explanation of options for running language models on devices.
- Instructions for downloading an app called LLM Farm to use open LLMs.
- Searching for the Mistral 7B model to configure it with the installed app.
- Choosing a balanced-quality model for efficient use.
- Demonstrating a practical survival question about encountering a bear.
- Introduction to specialized models for specific survival questions, such as medical emergencies.
- Closing thoughts on the future of local AI and alternative power generation.
Transcription
This is a manual hand-crank-powered generator, and it can power devices like my phone, completely independent of the grid. But its true potential is unleashed when paired with a state-of-the-art local LLM like the Mistral 7B V2. This combo elevates it to one of the most innovative pieces of survival tech I've encountered, creating a fully self-sufficient, closed-circuit system that operates without power or internet, something every prepper can appreciate. It's capable of answering medical questions, providing practical how-tos, or even instructing you on how to purify water, saving you from having to resort to Bear Grylls' more extreme and rather unique hydration strategies, turning this hunk of metal scrap and silicon into a lifeline, in the literal sense of the word. Don't get me wrong, a bunker full of MREs and some LifeStraws is certainly a start, but that will only get you so far, and being able to google for life-saving advice could mean the difference between living in the stone age, or ushering in a brave new future of local AI. Because when the water wars begin, there might be one resource that proves even more valuable: information. Let's build our rig and demonstrate its utility as a vital asset in the Doomer's toolkit, in the face of the most dire survival scenarios. So first, we need a power source. This is essential. So I bought this off Amazon for 46 bucks, and it seems pretty versatile. It came in a nondescript package with instructions that were clearer than I expected. I could feel my sense of newfound independence already. So I set up my Raspberry Pi single board computer and got cranking to see if I could turn it on by providing a direct current to my device. Kids, don't try this at home. And it worked. Well, kind of. So basically, this thing can generate 5 volts at 4 amps for its USB interface, which is not enough for my 5 volt, 5 amp Raspberry Pi 5. And look, there might be a way to get around that with intermediaries and batteries and such. 
But as I was building that, it dawned on me, even though the Raspberry Pi touchscreen looks super cool, and there are companies looking to build all-in-one AI in a box like products, it's probably a lot more practical to just charge your phone than run the LLMs there. Additionally, we would really want to use the power source to simply charge a battery, then the battery power can be doled out to charge your devices as needed. Probably obvious by now, but charging the battery is slow. I didn't have the patience to charge the whole thing, so I did the math on it instead, and it turns out it would take 6.2 hours to charge this battery pack. But this thing is pretty big. But as far as battery goes, the crankshaft generator is probably not even the best. You could definitely explore solar, wind, hydro, or even pedal-based generators that are more ergonomic and might spare you the inevitable carpal tunnel resulting from this one I have. I've had some fun playing around with this hydroelectric turbine that you can just place in the flow of any moving water. It'll also generate a 5 volt current, and it's pretty easy to use. It's called a WaterLily. Okay, so now that we're harnessing the infinite energy glitch, we need to be able to download and run the small language models on our phone. Intriguingly, alongside your language models, you have the capability to back up the entirety of the world's information. For instance, Wikipedia's comprehensive corpus occupies nearly 30 gigabytes, fitting effortlessly onto a microSD card like this. Okay, so there's different ways you could run these models. I've shown in prior videos how you could run them on Linux-based single-board computers like the Raspberry Pi or the Xenoblade, and there's some benefits to that. Like, you have a little bit more low-level access. So for instance, you could spin up Ollama on those devices, and you can kind of rotate through the different models that you want to use. 
And you could use more sophisticated models like, for instance, Lava, which is a vision-based model which you can feed an image. So, you know, think you're in a kind of disaster scenario, and you come upon some sort of berry or mushroom. You could take a picture of it, provide it to the model, and the model's going to be able to tell you whether it's poisonous or not, which I do find some utility in. But it takes a lot of work just to get that kind of added medium by which to use the model. So, I think for most people, what makes the most sense is just using your phone and interacting with text-based models. And also, if you did have some sort of food that you weren't sure about, you could just describe it in text, and you'd probably get the same answer. So I don't think there's any big stepwise shift in using an image-based model. So the way we're going to make this happen is, if we go to the App Store, we are going to download a program that can run these kind of like open-source LLMs. So there's an app called LLM Farm, and it will help us do just that. So we're going to go ahead and download it, and then let's go ahead and open this up. Okay, so this looks good, but there's no models, right? So we're going to go to Chrome, and we're going to go to Hugging Face, a kind of repository for these weights for different models. And we are going to search for Mistral 7b. But the crucial element here is the model format. We want a gguf type model, and based on what I've seen, most models offer that. So let's do Mistral 7b gguf, and we also want the v2 of Mistral. Why do we want Mistral? Mistral is a model that's designed to be very performant and accurate with minimal resources. So it's quantized, it's small, but it still rates very well. And they just came out with a new one, the v2, so we're going to use that. So I'm going to click this first one here. The author is The Bloke. And for each model, you still see different sort of variants of the model. 
Like if we come down here, we're going to get this table here. On the left, we see the model. Then we see the quantization method. We see the size of the model. These are going to be several gigs each. We also see max RAM required, right? So the big models down towards the bottom, I mean, some of these require 10 gigs of RAM. They're going to give you better results, but they're going to be slower. You know, it depends on your degree of patience. These Q4 models, they strike a good balance, and those will respond to you, at least on my iPhone Pro Max. It responds to me at a pace comparable to what I can read at. So it's very much just like a chatbot. So let's go ahead and grab the guy that says medium, balanced quality, recommended. And it is a Q4. So I'm going to go ahead and click that. And then on this page here, I'm just going to click this download button. And again, it's going to be several gigs, so make sure you have some space on your phone. Okay, so the next thing we're going to do is come over to LLM Farm. We're going to click this plus button, and we're going to do select model, import from file. Okay, so I just searched for Mistral here, and I see this guy. I'm going to select it. I am going to give it a fun avatar. And let's go down to prediction form, or sorry, prompt format. So there is a string of text that we need to throw in here. There will be a full tutorial guide in the description below, so go ahead and click that if you want the exact text you need here. Otherwise, you can copy it off the screen. Okay, and then we're going to go down to prediction options, and we're going to enable Metal and enable mlock. And then we should be good to go. Okay, so now we have it configured. I'm going to click into it, see if we can't get a chat going. Okay, so sticking to the theme of survival tech, let's ask it a very practical question. What should I do if I encounter a black bear? If you encounter a black bear, the first thing you should do is remain calm. 
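The quantization table walked through above boils down to a size-versus-quality trade-off against your device's RAM. Here is a small sketch of that decision; the file sizes and RAM figures are ballpark values typical of a 7B GGUF model, not numbers copied from TheBloke's actual table.

```python
# Illustrative quantization trade-offs for a 7B GGUF model.
# Figures are approximate, for demonstration only.
QUANTS = {  # name: (file size in GB, approx. max RAM required in GB)
    "Q2_K":   (3.1, 5.6),
    "Q4_K_M": (4.4, 6.9),   # "medium, balanced quality - recommended"
    "Q5_K_M": (5.1, 7.6),
    "Q8_0":   (7.7, 10.2),  # best quality, but ~10 GB of RAM
}

def best_quant(ram_budget_gb: float) -> str:
    """Pick the highest-quality quant that still fits the RAM budget."""
    fitting = [(ram, name) for name, (_, ram) in QUANTS.items()
               if ram <= ram_budget_gb]
    if not fitting:
        raise ValueError("no quant level fits in this much RAM")
    return max(fitting)[1]  # most RAM-hungry (roughly best) that fits

print(best_quant(8))   # -> Q5_K_M
print(best_quant(6))   # -> Q2_K
```

On a phone with roughly 6-8 GB of RAM, this logic lands on a Q4-ish model, which matches the video's observation that Q4 responds at about reading speed.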
Do not run or make sudden movements that could startle the bear. Two, make yourself known. Speak in a loud voice to let the bear know that you are human and not prey. You can also try banging pots or other objects together to make noise. Okay, so I gotta say this is a great answer, particularly for one of these small models. The intuition to want to run from a giant bear is going to be pretty overwhelming, and this sort of counterintuitive advice could actually save your life in those scenarios. And just as a note, Mistral is really good, but it is a general purpose model. So if you wanted a specific model fine-tuned, you know, for a particular domain, for instance, like medical questions, you could use something like MedLlama or MedAlpaca. So let's see here, this guy. Yeah, this is a model that is going to be better suited to answer questions like, you know, how to treat a snake bite or bring down a fever and things like that. So this might be a good one to have handy as well. Could save your bacon. So while I may not be your typical prepper, my passion for tinkering has me convinced that we're standing on the brink of an incredible shift that might make de-clouding more of a reality. Whether it be training local GPTs on private documents, running smart sensors with Home Assistant, Plex media servers, facility surveillance with Coral AI edge acceleration, or even leveraging Meshtastic's algorithm for disaster comms. When you combine these amazing local technologies with the rise of alternative power generation through wind, solar, hydro, and yes, even hand crank, it feels like the tech advancements are finally ready to connect us with the earth and with each other in ways we're just beginning to understand. To dive deeper down the rabbit hole, click this next video. Thanks.