Msty - Local AI Assistant for Managing Notes and Files (Video, 21m)
In the latest video from LeanProductivity, Sascha D. Kasper demonstrates how to turn notes into a private AI assistant using the Msty tool. Unlike many other solutions, Msty lets users interact with their files conversationally and requires no technical skills. The tool runs locally on the user's computer, providing privacy and control over data. It is ideal for anyone who wants to streamline their note-taking workflows and gain quick access to information.
The video shows how easy it is to install Msty and the Ollama platform that hosts the language models. Only a few simple steps are needed to configure everything on one's system. Msty connects to local notes in Obsidian, making the process exceptionally convenient. Users can ask their AI assistant questions, and it will provide contextual answers based on their notes, significantly simplifying the search for information.
Msty acts like a virtual librarian; instead of scrolling through search results, it finds the right material and presents it in an understandable context. Msty's user interface lets users define knowledge stacks, collections of files or notes that can then be used during interactions with the AI. The platform also offers an easy way to add new information, making it a fully flexible tool.
A major advantage of Msty is that it does not require internet access during interactions; all data remains on the user's machine. As Sascha mentions, Msty is free for personal use without serious limitations, making it an ideal solution for people who value their privacy and do not want their data shared with big companies. It also offers a choice of language models, enabling users to test different options and tailor the setup to their personal needs.
At the time of writing, the video on LeanProductivity has 10,845 views and 416 likes. This suggests that the topic of local AI models and their use in everyday note-taking is gaining popularity, and Sascha's presentation of Msty's potential will likely encourage many to explore the technology. Msty is a tool with great potential that can make life easier for anyone involved in knowledge management and note-taking.
Timeline summary
- Introduction to the Obsidian vault's potential for conversation.
- Overview of using Msty to create a private AI assistant.
- Comparison of Msty's and ChatGPT's functionality.
- Introduction of the sponsor, Brilliant, and its interactive learning platform.
- Challenges of finding information in notes and the potential AI solution.
- Explanation of RAG (Retrieval-Augmented Generation) and its benefits.
- Demonstration of the Msty interface's capabilities.
- Overview of the requirements for setting up Msty.
- Instructions for downloading the Ollama platform.
- Characteristics of Msty, including its offline-first approach.
- Picking large language models to integrate with Msty.
- Introduction to knowledge stacks for teaching Msty about files.
- Creating knowledge stacks to enhance Msty's responses.
- Example use case of interacting with Msty based on knowledge stacks.
- Exploring topics in depth with Msty using the split-chat feature.
- Comparing responses from different language models.
- Encouragement to experiment with local AI models.
- Final thoughts and community support available on Msty's Discord.
- Closing remarks and next steps for viewers.
Transcription
What if your Obsidian vault could talk? Not just keyword searches, but real conversations. Without the internet, without sending data to big companies. In this video, I will show you how to turn your notes into a private AI assistant using Msty. And the best part of it? There are no technical skills needed, and it's free. Technically, we will not be talking to our files, but about them, via a tool called Msty. By the end of this video, you will be able to set it up and use it. Think of Msty as ChatGPT, but powered by your own files and running privately on your machine. It's your personal AI assistant trained on your knowledge. The best part? Msty is free and easy to set up. If we can read, install software, and are willing to experiment a bit, we will be totally fine.

By the way, if things like LLM or RAG don't mean much to you, or you just generally want to understand all these AI terms better, take a look at the learning app Brilliant, the sponsor of today's video. Brilliant makes it easy to learn anywhere right on your phone, with fun lessons you can do whenever you have time. Learning a little every day is one of the most important things you can do, both for personal and professional growth. Brilliant is where you learn by doing, with thousands of interactive lessons in math, data analysis, programming, and yes, AI. Brilliant helps you build real knowledge in minutes a day. Whether you want to brush up on fundamentals or challenge yourself with advanced concepts in, for example, computer science, turn your curiosity into comprehension and peek under the hood of large language models like ChatGPT to understand the concepts powering today's technology and get your mind ready for future challenges. To try everything Brilliant has to offer for free for a full 30 days, visit brilliant.org, scan the QR code on screen, or click on the link in the description. Doing that, you will also get 20% off an annual premium subscription. Happy learning!

We all take notes, but finding the right information when we need it, that's another story altogether. Tags, links, and folders help, of course, but what if we could just ask? Instead of scrolling through search results, imagine having an AI that not only finds the right note, but also puts it into context. All of this is possible with a local RAG. RAG stands for Retrieval-Augmented Generation. Think of it like having an AI-powered librarian. Instead of just pointing to books, it reads them and gives you a summary based on what you need. In this case, the books can be local files or notes in our Obsidian vault, and Msty makes that very easy.
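To make the idea concrete, here is a minimal sketch of the RAG loop in Python: retrieve the most relevant notes, stuff them into the prompt, and let a local model answer. It assumes Ollama is running on its default port with the llama3.2 model pulled, and it uses a toy word-overlap retriever where real tools like Msty use embeddings; the note names and contents are made up for illustration.

import json
import urllib.request

# Hypothetical mini "knowledge stack": two made-up notes.
NOTES = {
    "productivity.md": "Batch similar tasks and review your notes weekly.",
    "decision-making.md": "Write down all options and your reasons before deciding.",
}

def retrieve(question, notes, top_k=1):
    # Toy retriever: rank notes by word overlap with the question.
    q_words = set(question.lower().split())
    ranked = sorted(notes.items(),
                    key=lambda kv: len(q_words & set(kv[1].lower().split())),
                    reverse=True)
    return ranked[:top_k]

def ask(question):
    # Augment the prompt with the retrieved notes, then generate locally.
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(question, NOTES))
    payload = {"model": "llama3.2",
               "prompt": f"Answer using only these notes:\n{context}\n\nQuestion: {question}",
               "stream": False}
    req = urllib.request.Request("http://localhost:11434/api/generate",
                                 data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("What were my thoughts on productivity?"))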
Let me show you what it looks like, just to give you a quick impression of what is possible before setting it up together. This is what Msty looks like. Don't worry about the details. We will get to those in a bit. For now, you will notice that it looks fairly similar to ChatGPT or other large language model interfaces. It also works in pretty much the same way. I will ask, what were my thoughts on productivity? And after some thinking, I get this response. At the end, we can open the citations. If we do that, we can see that the response is indeed based on my Obsidian vault, which is in the directory D:/Lean Notes. However, I feel like I had more than just these few thoughts. Perhaps my current model is not the best for this kind of conversation. I am currently using the XR1 LLM. With Msty, it is very simple to try others. At the end of the response, we find several icons. For now, we are only interested in this one. Hovering over it gives us a tooltip saying that we can regenerate the response with a different model if we hold the control key while clicking on it. Let's do that and select Phi-4 as our model of choice. Msty branches the answer off, and we can then click on this icon to open a split chat window to compare the two different answers. Now, the Phi-4 answer actually looks better already. All this is based on my Obsidian vault. Obviously, we can do much more than this. So, let's see what we need and how to make the magic happen. And again, you do not need any technical expertise. Just follow this tutorial.

Let's start with the building blocks. Basically, we need two things: a platform on our machine that can host large language models, and a user interface to interact with these models. We start with the platform to run our local LLM on. The most common and, for me, the easiest option is to use Ollama. We can download Ollama for free from their website. I left the link in the description. We pick the right version for our operating system, download, and install it. Now, we could download LLMs directly from here too. But it's better to do that later from within Msty, because then we see how compatible each model is with our setup and our hardware.

Next, we need a way to interact with our local LLM. This is where Msty shines. Msty is a lightweight tool that lets us interact with our files and notes as if they were a chatbot. It connects to local AI models, so our data never leaves our computer, and it works directly with Obsidian, making it the perfect solution for note-takers who want more than just a text search. We can download Msty from their website. Again, the link is in the description. Make sure you pick the right version for your operating system and processing unit. Msty can run on a CPU, but as always with these things, a GPU is recommended. If you have a GPU, you can check Msty's list of compatible ones to see if yours is included.

I want to highlight a few more things here. First, Msty follows an offline-first approach, just like Obsidian. Second, it leaves all our personal information and data on our machine and does not share it with any online models. And third, Msty is free forever for private use, without any serious feature limitations. As with Obsidian, you can still buy a license to support the developers. And if you don't want to pay the $89 per year, they also offer a lifetime license for $179 at the moment. And these are our building blocks. Next, we are going to pick one or more large language models to use for chatting with our files and notes.

Okay, we downloaded and installed Ollama and Msty. We make sure that Ollama is running and start Msty for the first time.
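If you ever want to verify that Ollama is actually up before launching Msty, you can ask its local API directly. A small sketch, assuming the default install serving on port 11434:

import json
import urllib.request

# Ollama's default local endpoint; /api/tags lists installed models.
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = json.loads(resp.read())["models"]

if models:
    print("Ollama is running with these models installed:")
    for model in models:
        print(" -", model["name"])
else:
    print("Ollama is running, but no models have been pulled yet.")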
When starting Msty for the first time, we can choose to set up a local AI or work with remote providers. We choose the local AI option. But don't worry, you can always connect to online services later. Msty recognizes that Ollama is already installed and where our local large language models are stored. Since we did not download any LLMs manually before, we first go to the left-hand toolbar and click on Local AI Models. This gives us a list of LLMs that work with Msty. It suggests featured models, which are the most commonly used ones, but we always have the option to click on Browse and download models online. Here we can pick our preferred model provider. This can be Ollama or Hugging Face. Again, I will go with Ollama. The models we see here are the same ones we could have downloaded manually, but with additional information about each model's compatibility based on our machine. While we can install any model, I recommend using the ones that are 100% compatible. As downloading the models takes a bit of time, I did this already. And as you can see, I have multiple models installed. As there is no such thing as the ultimate best model for everyone and everything, I pick and choose the one that works best for me, depending on what I need to do.

Technically, this is all we need for using a local LLM. But until now, our local models know nothing about our files and notes. Before we teach them, let me take you through Msty's user interface. Not because it is overly complicated, but because it has some nifty features that you don't usually find in other chat interfaces. Don't worry, we are not going to look at each and every button in detail, but we will focus on the most important elements.

We start with the left-hand toolbar. There is a button for remote model providers, which lets us control which ones we want to use. We can select one from the rather long list and we'll see the models offered by the respective provider. Of course, we will need an API key for interacting with those. Next, we have Local AI Models, which we saw already during the setup. This is what we need to search for and install local large language models. The next one is interesting. It lets us define so-called knowledge stacks. Knowledge stacks can be based on various sources. We will see how to set those up for our files and notes in the next chapter. The next item here is really convenient. It is a prompt library that comes with a lot of predefined prompts for various use cases and also lets us save our own. We can then use those in our chats to easily and quickly set the context for our conversations. If we click on a specific one, we will not just see the full prompt, but also an example input and an example output for it.

In the user interface's main area, we can select the model we want to use for our conversation. The options in the list depend on which local models we installed or which online models we integrated with. Once we have chosen a model, we can refine the model options with this button. For a better understanding of the various parameters, kindly look at the detailed documentation on Msty's website. They explain them much better than I ever could. And of course, the link is in the description for you as well. The next icon indicates that real-time data is off by default. This means that any answers we get are based on whatever our chosen model already knows. Msty will not go online to search for the most current information. Of course, we can toggle this on if we want. Next, we have the quick prompt button, which lets us choose one of the prompts from the previously shown prompt library and insert it quickly into our chat. Finally, we arrive at the knowledge stacks button. Clicking on this lets us choose one or more of our previously created knowledge stacks. If we do that, Msty will use the information inside the chosen stack or stacks to answer our questions. I will show some use cases of this in a later chapter. And the last button lets us add individual files, notes, or YouTube links to a chat, so the model can use those too. Combining our personal knowledge stacks with the model's inherent knowledge, and having the option to easily get online information too, makes Msty very flexible.
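The model options mentioned above ultimately correspond to parameters the local backend exposes. For the curious, this is roughly what they look like when set per request against Ollama's API; a sketch, with llama3.2 and the parameter values chosen purely for illustration:

import json
import urllib.request

payload = {
    "model": "llama3.2",
    "prompt": "Summarize the idea of chunking in one paragraph.",
    "stream": False,
    # Sampling and context-window parameters, normally tuned in Msty's model options.
    "options": {"temperature": 0.2, "num_ctx": 4096},
}
req = urllib.request.Request("http://localhost:11434/api/generate",
                             data=json.dumps(payload).encode(),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])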
Now, let's create some knowledge stacks with our local files and our Obsidian vault. Now that we have Msty and at least one large language model installed, let's teach the model how to work with our local files and Obsidian notes. Msty does this with so-called knowledge stacks. We can think of those like AI-friendly folders. We are telling Msty which information it should learn from before answering our questions. For this, we go back to our knowledge stacks button. After clicking on it in the left-hand toolbar, we can define knowledge stacks based on various sources. We can drag individual files into Msty. We can also define our Obsidian vault as a knowledge stack, which automatically adds all our notes to it. Again, this information remains on our machine and is not shared with any online service. We can also define a folder and its subfolders as a source. This could be a folder with documentation or manuals, for example. We also have the option to add custom notes manually, and the last possible source for building a knowledge stack is to provide one or multiple links to YouTube videos.

We should take note of the supported file types here. As you can see, they do not yet include zip archives or common office formats such as PowerPoint or Excel. This is not necessarily a problem, but we should make sure to have the right expectations. The other comment here is that adding large folder structures takes a very long time. I recommend creating multiple smaller knowledge stacks instead of a single huge one. We can also accelerate the process of creating a knowledge stack by telling Msty to ignore certain folders or file types. To do so, we need to create a .mstyignore file in the root directory of our knowledge stack. For example, I want to create a knowledge stack based on the folder Lean Productivity. But I know that there are many images, archives, and other not yet supported file types in there. So I create a new file called .mstyignore, open it in a text editor, and add these rules. Note that this file must be in the root directory of the knowledge stack. In my case, that's in my OneDrive folder under Documents, Resources, Lean Productivity. As you can see, I first exclude whole folders from indexing and then add the rules for various image formats, office documents, and zip archives.
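The exact rules are only shown on screen in the video, but assuming the ignore file uses gitignore-style patterns, they would look something like this (the folder names here are hypothetical):

Archive/
Attachments/
*.png
*.jpg
*.gif
*.pptx
*.xlsx
*.zip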
This brings the number of files that Msty needs to check when composing the knowledge stack from over 2,000 down to approximately 200, which is, of course, much faster. As you can see, I created a few knowledge stacks already. They are all based on specific folders, except for this one. This one is based on my Obsidian vault. The process is the same. We browse for our vault directory and click on Compose. Depending on the number of notes in our vault, this can take a bit.

Now comes the fun part, actually using Msty to chat with our files and notes. Let's walk through a use case based on my knowledge stacks together. Okay, we are in Msty. I have chosen Llama 3.2 as my model. First, I tell Msty to use two of my knowledge stacks for our conversation. Cheat Sheets is a collection of PDF files on various topics, including productivity, leadership, decision making, etc. Lean Notes is my production Obsidian vault, where I have many notes related to these topics. We start by asking, what are the best practices for decision making? As you can see, we get an answer fairly quickly. The response itself looks reasonable. Let's take a look at the citations to see where this information is coming from. Here we can confirm that Msty did indeed combine data from various PDF files and some of my notes. Very nice.

Let's continue by asking, which of those are the three most easily implemented? This gives us three solid options, but they seem to be focused on teams or organizations. That's not what I wanted in this case. So let's clarify that by saying, okay, but what are the three easiest for an individual rather than an organization? Now this is better, and we could leave it right there. Or we benefit from some of Msty's advanced features. For example, I am quite familiar with most of these items here. But what exactly is Cognitive Load Management? Msty can help with that too. We just highlight the term, right-click on it, and tell it to delve into the topic. This opens a split chat window with a separate discussion about Cognitive Load Management. In its response, Msty also highlights some terms that it deems relevant. We can click on those to learn more about them. For example, about chunking. And if we find the explanation too long, we can ask it to summarize this in one paragraph. While we're doing all this, our original conversation remains intact, and we can easily switch back to it. This feature alone makes Msty super powerful. Instead of stopping at one answer, you can explore topics in depth, just like peeling back layers of an idea.

Clearly, it is very easy to jump from one topic to the next with Msty. But this can also get confusing. If we do get lost in the various split chats, then we can click on this little icon up here and open a visual map of our conversations. Here's an example where I took our initial discussion and then branched off in various directions. Here we find our initial discussion and the connection or connections to the next split chats. Clicking on an item lets us view the response. This allows us not just to have long and complex discussions, but also to navigate them very easily.

I mentioned earlier that you can install multiple LLMs and pick the one that's best for a specific purpose. But how do we know which one is best for what? Well, frankly, I don't have a clear answer for that. It's a bit of trial and error. Fortunately, Msty makes testing and comparing large language models super easy. Check it out. We are back in Msty. We have two chat windows open. Our conversation will be based on the knowledge stack of my Obsidian vault. One chat is based on Phi-4 and the other one on Llama 3.2. As you can see, the windows are synchronized. Whatever prompt we enter in one chat will also be used in the other one. Let's start by asking which CSR policies are relevant for TySex. That's a topic I have been working on recently. Not for fun, but it will work as an example. Llama might be a bit faster, but the answer seems very generic, while Phi-4 gives me a list of specific policies. Let's see if we can fix Llama's response. At the bottom of each response, we can find several icons. One is called Context Shield. If we click on it, then Msty will ignore any previous context in that specific conversation for future responses. We will also disable the synchronization for a moment and focus on Llama to ask which CSR policies are relevant for TySex based on my notes. Hmm, looks like this made it even worse. Perhaps Llama is not a great model for these kinds of questions. Let's click on Context Shield again and change the model to DeepSeek. We start again with which CSR policies are relevant for TySex. Okay, that's better. This response is similar to what Phi-4 gave us in the first place. And this is how we can compare different models. Of course, we are not limited to only two. We can easily open three or four split chats with different models and talk to all of them at the same time.
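If you'd rather script such a comparison outside Msty, the same idea is a few lines against Ollama's local API; a sketch, assuming both models have been pulled (the model tags and the prompt are just examples):

import json
import urllib.request

def generate(model, prompt):
    # One non-streaming completion from the local Ollama server.
    payload = {"model": model, "prompt": prompt, "stream": False}
    req = urllib.request.Request("http://localhost:11434/api/generate",
                                 data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Send the same prompt to two models, like Msty's synchronized split chats.
prompt = "What are the best practices for decision making?"
for model in ("phi4", "llama3.2"):
    print(f"--- {model} ---")
    print(generate(model, prompt))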
As I said in the beginning, we don't need deep technical knowledge to make this work for us. Just a bit of patience to experiment. So, what do you think about local AI models? Would you use this setup? Let me know in the comments. Again, I read all of them. Oh, speaking of comments, if you're interested in a detailed Msty tutorial, let me know there as well. Until then, try experimenting with different prompts and AI models to see what works best for you. And if you run into any issues, there's a great community to help on Msty's Discord server. I also put that link into the description. I really hope this helped you understand Msty, local large language models, and retrieval-augmented generation better. If you need help or have any questions, please don't hesitate to get in touch. And since you're still here, you may want to take a look at this video next. And that's it for today. Thanks for watching and see you next time.