LLMs vs the rest of the world
Today something happened.
A large company released an interesting new project: Anthropic announced MCP, the Model Context Protocol.
This is interesting because of an idea that has been going around my head for years.
When LLMs were introduced, I kept thinking: okay, we humans click buttons, but LLMs can’t - so how do we make LLMs click buttons?
What’s gonna happen to 40+ years of UI design at every level of the stack?
From Vim users to 4-year-olds tapping buttons the size of my face on a tablet, everyone’s clicking and tapping something: buttons and icons, hardware and software.
We quickly realized that chat alone wasn’t enough - so people came up with the idea of AI agents.
AI agents let an LLM carry out specific tasks quickly and reliably by leveraging something called an AGENTIC FRAMEWORK.
Due to the “move fast, break things” nature of the field, more and more people started building agentic frameworks - each their own way.
Because everyone likes to bake their own cake and eat it too, the promise of the Model Context Protocol (MCP) is to offer a universal protocol for efficient LLM interaction that supports both local and remote services.
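To make the “universal protocol” idea concrete, here is a minimal sketch of the shape of the thing - a server that advertises tools and lets a model invoke them through one uniform request envelope. This is an illustration of the concept, not the actual MCP specification: the method names (`list_tools`, `call_tool`) and the registry are hypothetical.

```python
import json

# Hypothetical tool registry: in a real integration these would be
# actual capabilities (file access, search, APIs). The stub below
# just echoes the path so the example is self-contained.
TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",
}

def handle(request: str) -> str:
    """Dispatch one JSON-RPC-style request against the tool registry."""
    req = json.loads(request)
    if req["method"] == "list_tools":
        result = sorted(TOOLS)
    elif req["method"] == "call_tool":
        tool = TOOLS[req["params"]["name"]]
        result = tool(*req["params"]["args"])
    else:
        return json.dumps({"id": req["id"], "error": "unknown method"})
    return json.dumps({"id": req["id"], "result": result})

# The same envelope works whether TOOLS lives in-process, in another
# local process, or behind a remote endpoint - that is the "universal" part.
print(handle('{"id": 1, "method": "list_tools"}'))
```

The point of the sketch: once discovery and invocation share one envelope, every framework author stops inventing their own glue, which is exactly the fragmentation MCP is trying to end.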
The real answer, though, is a model reliable and powerful enough to use your computer WITHOUT any protocol at all. You just install the tools a human would use, and the LLM controls your computer directly. No need for abracadabra integrations. If a human can use an app, the LLM should be able to use it too.
Now that’s what I think the ultimate version of the Model Context Protocol should be.
Let’s see who can build it.