
In his latest video, Dave Bennett explored the concept of an autonomous plant-watering bot driven by agentic AI. Interest in using AI agents as autonomous workers is growing, especially in the startup world. In the video, Dave discusses a hybrid workforce in which managers oversee both human and AI employees. AI agents, as he describes them, are software systems that incorporate a large language model for decision making, and that model has its own autonomy to orchestrate tasks toward a set high-level goal.

In the video, Dave also presents the hardware components of his project, which is built around a Raspberry Pi. He explains that an analog-to-digital converter, a soil sensor, and a water pump are connected to the Raspberry Pi. He also wrote two Python classes that handle the interfacing with this hardware: a soil sensor class and a motor control class, which work in conjunction with a plant class that manages the entire watering system. Dave emphasizes the crucial role that libraries such as AG2 (also known as Autogen) play in letting the AI act on the physical world through tool use.
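Dave does not share his full source code in the video, but a minimal sketch of the three classes he describes might look like the following. It assumes an MCP3008-style 10-bit ADC read through gpiozero and a relay-driven pump on a GPIO pin; the pin numbers, ADC channel, and moisture thresholds are placeholders, not values from the video.

```python
# Hypothetical sketch of the soil sensor, motor control, and plant classes.
# Pin numbers, ADC channel, and thresholds are placeholder assumptions.
import time

from gpiozero import MCP3008, OutputDevice


class SoilSensor:
    """Reads the analog soil probe through the ADC and maps it to a label."""

    def __init__(self, adc_channel: int = 0):
        self._adc = MCP3008(channel=adc_channel)

    def read_moisture(self) -> str:
        value = self._adc.value  # 0.0 .. 1.0; many probes read higher when dry
        if value > 0.7:
            return "dry"
        if value > 0.4:
            return "moist"
        return "wet"


class MotorControl:
    """Switches the water pump through a relay wired to a GPIO pin."""

    def __init__(self, relay_pin: int = 17):
        self._relay = OutputDevice(relay_pin, active_high=True, initial_value=False)

    def run_pump(self, seconds: float = 3.0) -> None:
        self._relay.on()
        time.sleep(seconds)
        self._relay.off()


class Plant:
    """Ties the sensor and pump together and remembers the last watering."""

    def __init__(self, sensor: SoilSensor, motor: MotorControl):
        self._sensor = sensor
        self._motor = motor
        self.last_watered = None  # timestamp of the most recent watering

    def soil_moisture(self) -> str:
        return self._sensor.read_moisture()

    def water(self) -> None:
        self._motor.run_pump()
        self.last_watered = time.time()
```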

He goes on to describe how to configure the project. A user must set a few values in the code, a placeholder API key and a localhost base URL, so that the AI runs against a locally hosted model. He notes that more complex systems can use multiple agents, but a single assistant agent is sufficient for his project. It is critical that the AI has clearly defined tasks and is restricted to the tools it is given. Dave adds that a key step is having the agent signal completion by responding with the exact word 'terminate', which is how the program knows when to stop.
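A minimal AG2/Autogen setup along the lines Dave describes could look like this. The model name, local endpoint URL, and agent names are assumptions (he only says the API key does not matter and the base URL points at localhost), so treat this as an illustration rather than his exact configuration.

```python
# Hypothetical AG2/Autogen setup: one assistant agent plus a user proxy.
# Model name, endpoint URL, and agent names are placeholder assumptions.
from autogen import AssistantAgent, UserProxyAgent

config_list = [{
    "model": "llama3",                        # whatever model the local server exposes
    "api_key": "not-needed",                  # ignored by a local endpoint
    "base_url": "http://localhost:11434/v1",  # local OpenAI-compatible server
}]

assistant = AssistantAgent(
    name="plant_assistant",
    llm_config={"config_list": config_list},
    system_message=(
        "You look after a house plant. You only have access to the tools "
        "provided to you; do not consider any outside tools. When you have "
        "completed your tasks, respond only with the exact word TERMINATE "
        "and nothing else."
    ),
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # never ask a human; accept whatever the LLM decides
    is_termination_msg=lambda msg: "terminate" in (msg.get("content") or "").lower(),
    code_execution_config={"use_docker": False},  # run tool calls locally, no Docker
)
```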

Dave then demonstrates how the AI can water the plant autonomously by calling the functions he exposes to it. The whole procedure, from measuring soil moisture, through activating the pump, to recording the watering date, runs without manual intervention. He also addresses the limitations of his experiment, noting that soil moisture alone is not a sufficient measure of plant health and that a complete picture would require other factors as well.
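The tool exposure Dave describes maps onto Autogen's two registration decorators. The sketch below continues the earlier snippets (it assumes the `Plant`, `SoilSensor`, `MotorControl`, `assistant`, and `user_proxy` objects defined above) and shows two of the four functions for brevity; the function names and descriptions are illustrative, not copied from the video.

```python
# Continuing the earlier sketches: wire up the hardware and register tools.
# Function names and descriptions are illustrative assumptions.
plant = Plant(SoilSensor(adc_channel=0), MotorControl(relay_pin=17))


@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Return the soil moisture: 'dry', 'moist' or 'wet'.")
def get_soil_moisture() -> str:
    return plant.soil_moisture()


@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Activate the pump to water the plant.")
def water_plant() -> str:
    plant.water()
    return "The plant has been watered."


# Kick off the conversation; cap it at 10 turns so it cannot loop forever.
user_proxy.initiate_chat(
    assistant,
    message="Does my plant need water?",
    max_turns=10,
)
```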

In conclusion, it is worth noting that Dave's video has, at the time of writing, garnered 3807 views and 112 likes. His autonomous plant bot project has clearly caught viewers' interest. He encourages engagement through likes and by following his future videos. Dave Bennett shows how AI technology can evolve and have a real impact on everyday life.

Timeline summary

  • 00:00 Discussion on soil moisture function.
  • 00:12 Introduction to AI agents and their role in the workforce.
  • 00:28 Explanation of AI agents as software systems for decision-making.
  • 00:52 Overview of tool use for AI to interact with physical hardware.
  • 01:13 Description of the hardware setup with Raspberry Pi and sensors.
  • 01:44 Introduction to programming classes for soil sensor and motor control.
  • 02:08 Using AG2 (Autogen) for AI functionality.
  • 02:45 Setting up the assistant agent and its purpose.
  • 03:06 Instructions for AI to limit tool access and respond only with 'terminate'.
  • 03:49 User proxy agent setup and its function.
  • 04:13 Clarification of how LLM decides on code execution.
  • 04:52 Exposing functions for AI use with specific tasks.
  • 05:22 Initiation of chat with a specific query regarding plant watering.
  • 05:41 Result of AI call to get soil moisture and subsequent watering.
  • 06:17 Acknowledgment of the experiment's limitations.
  • 06:33 Conclusion of the autonomous plant bot project.
  • 06:37 Call to action for viewers to engage with the video.

Transcription

It's calling the get soil moisture function. Oh, water, water's happening. Okay, I see water coming through. That's awesome.

Agentic AI is a rapidly growing area of interest in the AI startup world. A new hybrid workforce with AI agents working alongside people, a workforce where you're going to have managers that are going to manage both human employees and AI employees. AI agents are basically just software systems that incorporate a large language model for decision making. The large language model has its own autonomy to orchestrate tasks to accomplish some high level goal. Hal, I won't argue with you anymore. Open the doors. Dave, this conversation can serve no purpose anymore. Goodbye.

Tool use exposes APIs for the LLM to interact with, so that it can take action in the real world. In the case of my autonomous plant bot, the LLM has the ability to interact with real hardware: the soil sensor and the water pump. I just also want to make you aware that there are other agentic design patterns out there to use. We won't be covering these in this video, but just to make you aware that this stuff can get way more complex.

Okay, let's walk through this hardware setup. So we have a 16 gig Raspberry Pi. In the back there, we have a 10-bit analog-to-digital converter. That's because the Raspberry Pi does not have an ADC on board. And then connected to the analog-to-digital converter is the soil sensor, which is nicely buried right in the soil right there. And then finally, connected to this relay we have back here is the motor, this pump, which is sitting in the water and connected to the relay through the tubing. And then I have my 3D printed part also in there.

Okay, now let's get to the exciting part of this, which is writing the code. So to start off with, I created two classes, a soil sensor and a motor control class. These classes are just responsible for interfacing with the hardware on a Raspberry Pi. Then I created a plant class. This class takes in an object of type soil sensor and another object of type motor control.

But now let's get to the agentic AI stuff. So over here, we're using a library called AG2, also known as Autogen. There was some drama with Autogen, and some of the developers left and formed AG2. But for the most part, at the time of making this video, these libraries are mostly the same, so you can use either one of these. I set up the Python classes, which I talked about before. The first thing I do is set up a config. The API key doesn't matter because we're running this locally, and the base URL is going to be our localhost.

Now, the first thing we set up is an assistant agent. This is our LLM. If we were using a multi-agent design pattern, we might have multiple of these assistant agents and they each might be talking to each other. But in this case, we only need one assistant agent. So for this assistant agent, we give a system message. The important thing about this message is that we tell it that you only have access to the following tools, which are provided to you, and do not consider any outside tools. So it's restricted to using only what we give it. In addition, the other important thing is that we say respond only with the exact word terminate and nothing else when it has completed its tasks. That way we know when to terminate this whole entire program, or else it will just keep going on and on and on.

Now, the next thing we set up is a user proxy agent. This is essentially the proxy, or what communicates on our behalf to the LLM. Now, this case is also very simple.
This user proxy agent is pretty much only listening for the terminate in its system message. We're saying not to get any human input. So in this case, we're just going to say, you know what? Whatever the LLM wants, we're going to say, sure, we're fine with that and just keep going. And then finally, for code execution, we're pretty much just saying local and not to use Docker.

So before we get to functions, just one thing on code execution. The LLM is not executing the code itself. LLMs don't execute code. It's simply deciding what function to call. So the LLM is almost getting like a list of functions and saying, based off the description of the function, I think this one should be called. And then essentially, there's sort of a middleman that intercepts that and executes the code and then provides a response back to the LLM.

So now we're exposing functions for the LLM to use. So this is where we get to the tool use. So in the case of Autogen, to expose a function, there's two decorators that we have to apply. First is register for execution and then register for LLM. Now what's important is that this description that you provide needs to be descriptive, right? Because this is what the LLM has to work with. You know, the LLM is going to understand that, okay, this function does whatever it does based off the following description. So make sure you're very descriptive with your descriptions.

So we provide it with essentially four functions. We provide it with a function to get the plant type and a function to get the soil moisture, and we tell it that it returns possible values of dry, moist, wet. We provide it with a function to activate the motor to water the plant, and a function to get the days since last watered.

So once we have these four functions defined, the final thing is that we initiate the chat. We always have to initiate an initial chat, and we initiate it with the message, does my plant need water? In this case, we don't need to catch any of the responses, and we set max turns to 10. So after 10 go-arounds in this whole conversation, we're just going to terminate regardless of what the result is. The final thing to do is to set up a cron so that the Python script runs daily at 10am.

Okay, so far so good. The LLM said we need to check the moisture level of the soil. It's calling the get soil moisture function. Oh, water, water's happening. Okay, I see water coming through. That's awesome. Okay, so the plant was watered today, so for now, no further action is needed. This is almost fully autonomous. It just watered my plant. We saved the watering date. We activated the motor. This is awesome. So at this point, we just wait and see. I'm curious to see whether in two months my plant will still be alive.

Now, there are obvious limitations to this experiment. For starters, any plant expert will be quick to point out that simply measuring the soil moisture is not the sole metric of plant health. There are also so many other factors which we aren't taking into account.

So guys, this is my autonomous plant bot using agentic AI and the Raspberry Pi. If you thought this was an interesting experiment, hit that thumbs up button so that the algorithm knows. And as always, thanks for watching and stay tuned for another galvanizing video. Thanks.
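For the daily 10am run Dave mentions near the end, a crontab entry along these lines would do it; the script and log paths below are placeholder assumptions, not paths from the video.

```
# Hypothetical crontab entry (added via `crontab -e`); paths are placeholders.
0 10 * * * /usr/bin/python3 /home/pi/plant_bot.py >> /home/pi/plant_bot.log 2>&1
```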