
Fireship presents the Model Context Protocol (MCP) as the hot new way to build APIs, one that is gaining popularity among developers worldwide. MCP can be seen as a new standard for AI applications, created by the Anthropic team, the makers of Claude. With MCP, interacting with large language models has become much simpler, and the video explores how MCP is changing the way developers build applications. In the past, developers had to know various API architectures, but now, thanks to MCP, we are in the era of 'vibe coders', where we focus on results rather than technical details.

In today's video, Fireship walks through building an MCP server, connecting several components — bucket storage, a Postgres database, and a standard REST API. It works by giving the language model access to data and letting it perform operations on our server. For example, Claude can write to the database or upload files, which opens the door to automating many processes, including stonk trading and cloud infrastructure management.

The video is dated March 31, 2025, and offers viewers a glimpse of the future of programming. Fireship explains the differences between traditional APIs and MCP, suggesting that MCP is essentially an API for APIs. While traditional APIs rely on a range of HTTP methods, MCP focuses on two key concepts — resources and tools — which makes it more approachable for new developers.

While discussing the MCP server, Fireship emphasizes the importance of validating data with Zod, which prevents invalid input and keeps the application reliable. It lets developers supply concrete data types, so language models can better understand the context of a request. The excitement across the developer community around MCP is understandable, especially in this era of rapid AI progress, when Anthropic's CEO predicts that 90% of code will be written by AI by the end of the year.

At the time of writing, Fireship's video already has 1,159,499 views and 41,836 likes, which shows enormous interest in the topic. It also speaks to MCP's impact on how we build applications, and to how programming is evolving in the age of artificial intelligence. Fireship not only delivers valuable information but also inspires a new generation of developers to keep exploring what MCP has to offer.

Timeline summary

  • 00:00 Introduction to MCP and its growing popularity among developers.
  • 00:11 Example of Claude designing 3D art using MCP.
  • 00:20 Discussion of traditional APIs such as REST and the evolution toward MCP.
  • 00:40 The shift to vibe coding and the demise of software gatekeepers.
  • 00:53 Explanation of the Model Context Protocol as a new API standard.
  • 01:16 Building an MCP server and its potential impact on white-collar jobs.
  • 01:32 Connecting different data sources with MCP.
  • 01:46 Examples of innovative MCP use by people on the internet.
  • 02:00 Overview of Savala as cloud infrastructure for MCP.
  • 02:15 Savala's comparative ease of use versus AWS.
  • 02:38 Differences between REST API conventions and the MCP approach.
  • 03:09 Introduction of the creator's project, Horstender, and the pivot to AI.
  • 03:47 Implementing a REST API for accessing user data.
  • 04:20 Using Zod for schema validation on the MCP server.
  • 05:46 Running the MCP server locally and an overview of the deployment process.
  • 06:05 Connecting a client to the MCP server.
  • 06:48 Fetching resources from the server into the context for prompts.
  • 07:11 Creating new database entries based on the context.
  • 07:29 Concerns about AI's rapid growth in programming and its potential risks.
  • 07:57 Encouragement to explore new tools built with MCP.

Transcription

It seems like every developer in the world is getting down with MCP right now. Model Context Protocol is the hot new way to build APIs, and if you don't know what that is, you're NGMI. People are doing crazy things with it, like this guy got Claude to design 3D art in Blender, powered entirely on Vibes. And just a few days ago, it became an official standard in the OpenAI Agents SDK. If you're an OG subscriber to this channel, you probably know what a REST API is. You might even know about GraphQL or RPC, or maybe many years ago you used SOAP. When I was a kid, the software engineering gatekeepers told me I couldn't be a web developer unless I could explain the difference between these architectures and protocols. Well, now the tables have turned and these gatekeepers have been utterly demolished, because we're all just vibe coders now, embracing the exponentials, pretending code doesn't even exist, and just chilling with LLMs until we get the end result we're looking for. That being said, you can't call yourself a true vibe coder unless you know about Model Context Protocol, which is basically a new standard for building APIs that you can think of like a USB-C port for AI applications. It was designed by Anthropic, the team behind Claude, and provides a standard way to give large language models context. And they're so bullish on this technology that the CEO of Anthropic said he expects virtually all code to be written by AI by the end of the year. In today's video, we'll actually build an MCP server and find out if it can truly make the world a better place by eliminating all white-collar jobs. It is March 31st, 2025, and you're watching The Code Report. And contrary to popular belief, Fireship is still a tutorial channel. In today's video, we'll take a storage bucket, a Postgres database, and a regular REST API, and then connect them all together with the Model Context Protocol.
Not only will this allow Claude to have access to data it didn't have before, but it can also execute code on our server, like write to the database or upload files. And people of the internet are already using it to do crazy stuff, like automated stonk and shitcoin trading, industrial-scale web scraping, and as a tool to manage cloud infrastructure like your Kubernetes cluster. Speaking of which, to build our own MCP server, we'll need some cloud infrastructure. And one of the best places to do that is Savala, which itself is powered by Google Kubernetes Engine and Cloudflare under the hood. They were nice enough to sponsor this video, but the reason I like their platform so much is that it's far easier to use than something like AWS, but provides linear, predictable pricing, unlike most of the application and database hosting startups out there. And it's free to get started, which makes it perfect for this project. Now, like other API architectures, MCP has a client and a server. The client, in our case, will be Claude Desktop. Then we'll develop a server that maintains a connection with that client, so the client and server can pass information back and forth via the transport layer. Now, in a REST API, you have a bunch of different HTTP verbs that you can send requests to via different URLs. But in the Model Context Protocol, we're really only concerned with two main things: resources and tools. A resource might be a file, a database query, or some other information the model can use for context. Conceptually, you can think of it like a GET request in REST. Meanwhile, a tool is an action that can be performed, like writing something to a database, so that'd be more like a POST request in REST. What we do as developers is define tools and resources on the server so the LLM can automatically identify and use them when a prompt needs them.
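The resource/tool split described above can be sketched in plain TypeScript. This is a hypothetical conceptual model, not the official MCP SDK — the registry, helper names, and horse data here are all made up for illustration — but it captures the distinction the transcript draws: resources are read-only fetches, tools have side effects.

```typescript
// Hypothetical, minimal model of the two MCP primitives (not the real SDK):
// a resource is a read-only data fetch (like a GET in REST),
// a tool is an action with side effects (like a POST in REST).
type Resource = { uri: string; fetch: () => Promise<string> };
type Tool = { run: (args: Record<string, unknown>) => Promise<string> };

const resources = new Map<string, Resource>();
const tools = new Map<string, Tool>();

function addResource(name: string, uri: string, fetch: () => Promise<string>): void {
  resources.set(name, { uri, fetch });
}

function addTool(name: string, run: (args: Record<string, unknown>) => Promise<string>): void {
  tools.set(name, { run });
}

// A resource: read-only context the model can pull in.
addResource("horses-looking-for-love", "db://horses/single", async () =>
  JSON.stringify([{ name: "Roach", single: true }]),
);

// A tool: an action the model can invoke, which mutates state.
const dates: string[] = [];
addTool("create-date", async (args) => {
  dates.push(`${args.horseA} + ${args.horseB}`);
  return "date created";
});
```

The client lists both registries and the model decides which entry a given prompt needs — read from a resource for context, call a tool to act.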
In my life, I've been working on an app I consider my magnum opus called Horstender, but as it turns out, swiping left and right was a bad feature idea because horses don't have fingers. So like every other failing startup in Silicon Valley right now, we're gonna pivot to artificial intelligence. Luckily, we can leverage our existing data and servers. Like here in Savala, I have a storage bucket, and it contains all of the photos that my users uploaded. In addition, for user data, we have a Postgres database. It has all the profile data for each one of our horses, as well as the relationships they form together. And then finally, I have a traditional REST API written in TypeScript that fetches this data for my web, iOS, and Android apps. And what's especially cool about my code is that it's in a Git repo hooked up to a CI/CD pipeline. That means after we write our Model Context Protocol server, we can push our code to the dev or staging branches to test it before it actually goes into production, while Savala automatically handles all the deployments and cache busting for us. And now we're ready to jump into some code. Here I have a Deno project, and the first thing you'll notice is that I'm importing a class called McpServer. This comes from the official SDK, but if you're not using TypeScript, they have a bunch of other languages like Python, Java, and so on. We'll also be using Zod here, which is a tool used for schema validation, which allows us to provide a specific data shape to the LLM, so it doesn't just hallucinate a bunch of random crap. Now after we create a server, we can start adding resources to it. The resource will first have a name, like horses looking for love, and then the second argument is a URI for the resource. Then finally, the third argument is a callback function that we can use to fetch the data.
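The three-argument resource registration described above — a name, a URI, and a data-fetching callback — can be sketched like this. The `resource` helper and the database query are stubs I've made up so the sketch runs standalone; the real code would use the official `@modelcontextprotocol/sdk` and query Postgres with a client such as postgres.js.

```typescript
// Sketch of the three-argument resource registration: name, URI, callback.
// The helper and the database query are stubs, standing in for the official
// SDK's registration call and a postgres.js query respectively.
type ResourceResult = { contents: { uri: string; text: string }[] };

// Stub standing in for a SQL query against a hypothetical horses table.
async function queryHorses(): Promise<{ name: string; single: boolean }[]> {
  return [
    { name: "Roach", single: true },
    { name: "Epona", single: false },
  ];
}

// Hypothetical helper mirroring the (name, uri, callback) shape.
function resource(
  name: string,
  uri: string,
  callback: (uri: string) => Promise<ResourceResult>,
) {
  return { name, uri, callback };
}

const horsesForLove = resource(
  "horses-looking-for-love",
  "postgres://horses/single",
  async (uri) => {
    // The callback is where the real server would run its SQL query.
    const rows = await queryHorses();
    const single = rows.filter((h) => h.single);
    return { contents: [{ uri, text: JSON.stringify(single) }] };
  },
);
```

When the client attaches this resource, the callback runs and its text payload becomes context for the next prompt.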
In this example, I'm writing a query to our Postgres database, which is hosted in the cloud on Savala, then accessed on the server with the postgres.js library. But you could access any data here. When something is a resource, though, it should only be used for fetching data where there's no side effects or computations. If you do have a side effect or computation, you should instead use a tool. Like for Horstender, we might want the AI to automatically create matches and set up dates between horses. We already have a RESTful API endpoint that can handle that, and we could actually leverage that code here, essentially creating an API for our API. In fact, many of these MCP servers are actually just APIs for APIs. And that sounds like dumb over-engineering, but having a protocol like this makes it a lot easier to plug and play between different models, and just makes LLM apps more reliable in general. Case in point, notice how I'm using Zod here to validate the shape of the data going into this function. That prevents the LLM from hallucinating random stuff here. Basically, when you prompt Claude, it's going to need to figure out the proper arguments to this function. So providing data types along with a description will make your MCP server far more reliable. And then the final step is to run the server. In this case, I'm going to use standard I/O as the transport layer to use it locally, but if deploying to the cloud, you can also use server-sent events or HTTP. Congratulations, you just built an MCP server. But now the question is, how do we actually use it? To use it, you'll need a client that supports the Model Context Protocol, like Claude Desktop. There are many other MCP clients out there if you don't want to use Claude Desktop, like Cursor and Windsurf, for example, and you could even develop your own client, but that's an entirely separate topic altogether.
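The validation step described above can be illustrated with a tiny hand-rolled schema check. The real code uses Zod (`z.object({...})` with per-field types and descriptions); this stand-in is hypothetical and only shows the idea: arguments produced by the model are checked against a declared shape before the tool runs, so hallucinated arguments never reach the database.

```typescript
// Minimal stand-in for Zod (hypothetical, not the real library): declare the
// expected argument shape, and reject anything that doesn't match before the
// tool executes.
type Schema = Record<string, "string" | "number">;

function validate(schema: Schema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [key, expected] of Object.entries(schema)) {
    if (typeof args[key] !== expected) {
      errors.push(`expected ${key} to be a ${expected}`);
    }
  }
  return errors;
}

// The tool's declared input: the two horses the model wants to set up.
const createDateSchema: Schema = { horseA: "string", horseB: "string" };

function createDate(args: Record<string, unknown>): string {
  const errors = validate(createDateSchema, args);
  if (errors.length > 0) throw new Error(errors.join("; "));
  // A real tool would write to the database here.
  return `date: ${args.horseA} + ${args.horseB}`;
}
```

With Zod the same effect comes for free, and the field descriptions double as documentation the model reads when choosing arguments.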
Once installed, you can go to the developer settings, which will bring you to a config file where you can add multiple MCP servers. In the config file, all you have to do is provide a command to run the actual server, which in our case would be the deno command for the main.ts file where we defined our server code. You'll need to restart Claude, but then it should show your MCP server is running. In this case, my horse is running, which means I should probably go and catch it. Then you can go back to the prompt screen to attach it. That's going to fetch the resource from the server so Claude can use it as context in the next prompt. And because Claude is multimodal, you can also add PDFs, images, or anything else to the context, really, like all the horse images in our Savala storage bucket. And now magically, you can prompt Claude about things specific to your application. Like if we want to find out which horses are single and ready to mingle, we can make a prompt like this, and it will use the context that we just fetched from our database. Then, if we want Claude to write to the database, we could make a prompt like this, where it'll connect two horses from the context on a date. You'll need to grant it permission to do this, and then Claude will automatically figure out which data to send based on the schema we validated with Zod, and it'll use our server tool to mutate data in the actual application. I can't imagine anything ever possibly going wrong here, and Anthropic is extremely bullish on this being the future. Like, their CEO just said that 90% of coding will be done entirely by AI within the next six months, and nearly all code will be AI-generated within a year. I'm gonna go ahead and press X to doubt there, and I think it's only a matter of time before some AI agent accidentally wipes out billions of dollars in customer data, or becomes self-aware and just deletes the data for fun.
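The config file described above is a JSON document mapping server names to launch commands. A sketch of what it might look like for this project — the server name, the `deno` flags, and the file path are illustrative assumptions, not taken from the video:

```json
{
  "mcpServers": {
    "horstender": {
      "command": "deno",
      "args": ["run", "--allow-all", "/absolute/path/to/main.ts"]
    }
  }
}
```

Each entry tells the client how to spawn one MCP server process; communication then happens over that process's standard I/O.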
That being said, though, there's all kinds of amazing tools being built with MCP right now, and you can check those out on the awesome MCP repo. Just please make sure to vibecode responsibly. Huge thanks to Savala for making this video possible, and enjoy this $50 stimulus check to try out their awesome platform. This has been The Code Report, thanks for watching, and I will see you in the next one.