Nope. None of it is. The experimental nature is part of the fun and part of the flex. I think there's little incentive to keep it very stable or safe.
In general, experimental technology tends to take 3 or so years to solidify. But AI is different... models from 6 months ago are already deprecated. Not a lot has ever solidified in this space.
I second that. MCP is more of a scaffolding concept than a solution in itself. There is no safety in consulting an LLM on anything. What you're building is a RAG app that must coordinate LLM calls and tool calls. Any safety built into that is going to be built into your procedural code (in a programming language), not coming from an LLM, which by itself cannot be "corralled."
MCP is just a way of specifying which user prompts go with which LLM calls and tool calls and provides no safety (or even functionality) of its own.
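To make the "safety lives in your procedural code" point concrete, here's a minimal sketch in Python. The tool names, schemas, and dispatcher are all invented for illustration, not any real MCP API: the idea is just that the model can only *propose* tool calls, and deterministic code you wrote decides whether to execute them.

```python
# Hypothetical sketch: the safety check is procedural code, not the LLM.
# ALLOWED_TOOLS, its schemas, and dispatch_tool are made up for illustration.
ALLOWED_TOOLS = {
    "get_weather": {"city": str},
    "search_docs": {"query": str},
}

def dispatch_tool(name, args):
    """Validate an LLM-proposed tool call before executing anything."""
    if name not in ALLOWED_TOOLS:
        raise ValueError(f"tool {name!r} not in allow-list")
    schema = ALLOWED_TOOLS[name]
    if set(args) != set(schema):
        raise ValueError(f"unexpected arguments for {name!r}: {sorted(args)}")
    for key, typ in schema.items():
        if not isinstance(args[key], typ):
            raise TypeError(f"{name}.{key} must be {typ.__name__}")
    # Only here would the tool actually run; the LLM never executes anything.
    return {"tool": name, "args": args}

# A well-formed call passes; a call the model hallucinated is rejected
# by our code, regardless of how convincing the model's output was.
ok = dispatch_tool("get_weather", {"city": "Oslo"})
try:
    dispatch_tool("delete_database", {})
except ValueError as e:
    rejected = str(e)
```

Whatever the LLM emits, the worst it can do is trigger one of the calls you explicitly allow-listed, with arguments you explicitly validated.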
IMO, it's wiser to focus on RAG (Retrieval-Augmented Generation) and on using vector DBs, as MCP is essentially an abstraction on top of that idea.
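For anyone unfamiliar with what the vector DB actually does under the hood, here's a toy sketch in pure Python. The documents and "embeddings" are invented; in practice an embedding model produces the vectors and a real vector DB does the indexing, but the core operation is just nearest-neighbor lookup by cosine similarity.

```python
# Toy retrieval sketch: what a vector DB does at its core.
# The docs and vectors below are fabricated for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend these vectors came from an embedding model.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
    "api rate limits": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the top-k documents most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query embedding near "refund policy" retrieves that doc; its text is
# then stuffed into the LLM prompt -- that's the whole RAG loop.
top = retrieve([0.8, 0.2, 0.1])
```

Everything MCP adds sits above this loop: it standardizes how the retrieval and tool-calling pieces are wired up, not what they do.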
In essence, MCP is the next esoteric acronym that can be hyped/used to get attention.