
Fireship introduces the Model Context Protocol (MCP), the hot new way to build APIs that is gaining popularity among developers worldwide. MCP is a new standard for AI applications created by the team at Anthropic, the makers of Claude. With MCP, interacting with large language models has become significantly easier, and this video shows how MCP is changing the way developers build applications. In the past, developers had to be familiar with various API architectures; now, with MCP, we are in the era of 'vibe coders', who focus on results rather than technical minutiae.

In today's video, Fireship walks through building an MCP server by connecting several components: a storage bucket, a Postgres database, and a standard REST API. This integration lets the language model access data and perform operations on the server. For instance, Claude can write to a database or upload files, opening the door to automating numerous processes, from stonk trading to managing cloud infrastructure.

The video is dated March 31, 2025, giving viewers a glimpse of where programming is headed. Fireship explains the differences between traditional APIs and MCP, describing MCP as essentially an API for APIs. While traditional APIs expose numerous HTTP methods across many endpoints, MCP focuses on two key concepts, resources and tools, which makes it more approachable for new developers.

While discussing the MCP server, Fireship emphasizes the importance of data validation using the Zod schema-validation library, which prevents malformed data and keeps the application reliable. It lets developers declare specific data types, so language models can better understand the shape of the arguments they need to supply. The excitement around MCP throughout the developer community is understandable, especially in an era of AI development in which the CEO of Anthropic predicts that 90% of code will be written by AI by the end of the year.

At the time of writing, Fireship's video has reached 1,159,499 views and 41,836 likes, indicating massive interest in the topic. It also speaks to the impact MCP is having on how applications are built and how programming is evolving in the AI era. Fireship not only provides valuable insights but also inspires a new generation of developers to explore what MCP makes possible.

Timeline summary

  • 00:00 Introduction to MCP and its rising popularity among developers.
  • 00:11 Example of Claude designing 3D art through MCP.
  • 00:20 Discussion of traditional APIs like REST and the evolution to MCP.
  • 00:40 The transition to vibe coding and elimination of software gatekeepers.
  • 00:53 Explanation of Model Context Protocol as a new API standard.
  • 01:16 Building an MCP server and its potential impact on white-collar jobs.
  • 01:32 Connection of different data sources using MCP.
  • 01:46 Examples of innovative uses for MCP by internet users.
  • 02:00 Overview of Savala as a cloud infrastructure for MCP.
  • 02:15 Comparative ease-of-use of Savala versus AWS.
  • 02:38 Differences between REST API conventions and MCP's approach.
  • 03:09 Introduction of the speaker's project, Horstender, and pivot to AI.
  • 03:47 Implementation of a REST API for accessing user data.
  • 04:20 Utilization of Zod for schema validation in the MCP server.
  • 05:46 Running the MCP server locally and overview of deployment.
  • 06:05 Connecting a client to the MCP server.
  • 06:48 Fetching resources from the server for context in prompts.
  • 07:11 Creating new entries in the database based on context.
  • 07:29 Concerns about the rapid rise of AI in coding and its potential risks.
  • 07:57 Encouragement to explore new tools developed with MCP.

Transcription

It seems like every developer in the world is getting down with MCP right now. Model Context Protocol is the hot new way to build APIs, and if you don't know what that is, you're NGMI. People are doing crazy things with it, like this guy got Claude to design 3D art in Blender, powered entirely on vibes. And just a few days ago, it became an official standard in the OpenAI Agents SDK. If you're an OG subscriber to this channel, you probably know what a REST API is. You might even know about GraphQL or RPC, or maybe many years ago you used SOAP. When I was a kid, the software engineering gatekeepers told me I couldn't be a web developer unless I could explain the difference between these architectures and protocols. Well, now the tables have turned and these gatekeepers have been utterly demolished, because we're all just vibe coders now, embracing the exponentials, pretending code doesn't even exist, and just chilling with LLMs until we get the end result we're looking for. That being said, you can't call yourself a true vibe coder unless you know about Model Context Protocol, which is basically a new standard for building APIs that you can think of like a USB-C port for AI applications. It was designed by Anthropic, the team behind Claude, and provides a standard way to give large language models context. And they're so bullish on this technology that the CEO of Anthropic said he expects virtually all code to be written by AI by the end of the year. In today's video, we'll actually build an MCP server and find out if it can truly make the world a better place by eliminating all white-collar jobs. It is March 31st, 2025, and you're watching The Code Report. And contrary to popular belief, Fireship is still a tutorial channel. In today's video, we'll take a storage bucket, a Postgres database, and a regular REST API, and then connect them all together with the Model Context Protocol.
Not only will this allow Claude to have access to data it didn't have before, but it can also execute code on our server, like write to the database or upload files. And people of the internet are already using it to do crazy stuff, like automated stonk and shitcoin trading, industrial-scale web scraping, and as a tool to manage cloud infrastructure like your Kubernetes cluster. Speaking of which, to build our own MCP server, we'll need some cloud infrastructure. And one of the best places to do that is Savala, which itself is powered by Google Kubernetes Engine and Cloudflare under the hood. They were nice enough to sponsor this video, but the reason I like their platform so much is that it's far easier to use than something like AWS, but provides linear, predictable pricing, unlike most of the application and database hosting startups out there. And it's free to get started, which makes it perfect for this project. Now, like other API architectures, MCP has a client and a server. The client, in our case, will be Claude Desktop. Then we'll develop a server that maintains a connection with that client, so the client and server can pass information back and forth via the transport layer. Now, in a REST API, you have a bunch of different HTTP verbs that you can send requests to via different URLs. But in the Model Context Protocol, we're really only concerned with two main things, resources and tools. A resource might be a file, a database query, or some other information the model can use for context. Conceptually, you can think of it like a GET request in REST. Meanwhile, a tool is an action that can be performed, like writing something to a database, so that'd be more like a POST request in REST. What we do as developers is define tools and resources on the server so the LLM can automatically identify and use them when they have a prompt that needs them.
In my life, I've been working on an app I consider my magnum opus called Horstender, but as it turns out, swiping left and right was a bad feature idea because horses don't have fingers. So like every other failing startup in Silicon Valley right now, we're gonna pivot to artificial intelligence. Luckily, we can leverage our existing data and servers. Like here in Savala, I have a storage bucket, and it contains all of the photos that my users uploaded. In addition, for user data, we have a Postgres database. It has all the profile data for each one of our horses, as well as the relationships they form together. And then finally, I have a traditional REST API written in TypeScript that fetches this data for my web, iOS, and Android apps. And what's especially cool about my code is that it's in a Git repo hooked up to a CI/CD pipeline. That means after we write our Model Context Protocol server, we can push our code to the dev or staging branches to test it before it actually goes into production, while Savala automatically handles all the deployments and cache busting for us. And now we're ready to jump into some code. Here I have a Deno project, and the first thing you'll notice is that I'm importing a class called McpServer. This comes from the official SDK, but if you're not using TypeScript, they have a bunch of other languages like Python, Java, and so on. We'll also be using Zod here, which is a tool used for schema validation, which allows us to provide a specific data shape to the LLM, so it doesn't just hallucinate a bunch of random crap. Now after we create a server, we can start adding resources to it. The resource will first have a name, like horses looking for love, and then the second argument is a URI for the resource. Then finally, the third argument is a callback function that we can use to fetch the data.
In this example, I'm writing a query to our Postgres database, which is hosted in the cloud on Savala, then accessed on the server with the Postgres.js library. But you could access any data here. When something is a resource, though, it should only be used for fetching data where there's no side effects or computations. If you do have a side effect or computation, you should instead use a tool. Like for Horstender, we might want the AI to automatically create matches and set up dates between horses. We already have a RESTful API endpoint that can handle that, and we could actually leverage that code here, essentially creating an API for our API. In fact, many of these MCP servers are actually just APIs for APIs. And that sounds like dumb over-engineering, but having a protocol like this makes it a lot easier to plug and play between different models, and just makes LLM apps more reliable in general. Case in point, notice how I'm using Zod here to validate the shape of the data going into this function. That prevents the LLM from hallucinating random stuff here. Basically, when you prompt Claude, it's going to need to figure out the proper arguments to this function. So providing data types along with a description will make your MCP server far more reliable. And then the final step is to run the server. In this case, I'm going to use standard I/O as the transport layer to use it locally, but if deploying to the cloud, you can also use server-sent events or HTTP. Congratulations, you just built an MCP server. But now the question is, how do we actually use it? To use it, you'll now need a client that supports the Model Context Protocol, like Claude Desktop. There are many other MCP clients out there if you don't want to use Claude Desktop, like Cursor and Windsurf, for example, and you could even develop your own client, but that's an entirely separate topic altogether.
Once installed, you can go to the developer settings, which will bring you to a config file where you can add multiple MCP servers. In the config file, all you have to do is provide a command to run the actual server, which in our case would be the deno command for the main.ts file where we defined our server code. You'll need to restart Claude, but then it should show your MCP server is running. In this case, my horse is running, which means I should probably go and catch it. Then you can go back to the prompt screen to attach it. That's going to fetch the resource from the server so Claude can use it as context in the next prompt. And because Claude is multimodal, you can also add PDFs, images, or anything else to the context, really, like all the horse images in our Savala storage bucket. And now magically, you can prompt Claude about things specific to your application. Like if we want to find out which horses are single and ready to mingle, we can make a prompt like this, and it will use the context that we just fetched from our database. Then, if we want Claude to write to the database, we could make a prompt like this, where it'll connect two horses from the context on a date. You'll need to grant it permission to do this, and then Claude will automatically figure out which data to send it based on the schema we validated with Zod, and it'll use our server tool to mutate data in the actual application. I can't imagine anything ever possibly going wrong here, and Anthropic is extremely bullish on this being the future. Like, their CEO just said that 90% of coding will be done entirely by AI within the next six months, and nearly all code will be AI-generated within a year. I'm gonna go ahead and press X to doubt there, and I think it's only a matter of time before some AI agent accidentally wipes out billions of dollars in customer data, or becomes self-aware and just deletes the data for fun.
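The config file entry described here looks roughly like the following; the server name and file path are assumptions for this sketch (the actual path depends on where your project lives).

```json
{
  "mcpServers": {
    "horstender": {
      "command": "deno",
      "args": ["run", "--allow-all", "main.ts"]
    }
  }
}
```

Each key under `mcpServers` names one server, and the client launches it with the given command, speaking MCP to it over stdio.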
That being said, though, there's all kinds of amazing tools being built with MCP right now, and you can check those out on the awesome MCP repo. Just please make sure to vibe code responsibly. Huge thanks to Savala for making this video possible, and enjoy this $50 stimulus check to try out their awesome platform. This has been The Code Report, thanks for watching, and I will see you in the next one.