> FastMCP is the standard framework for building MCP applications
Standardized by whom? In an era where technology can lend the appearance of legitimacy to just about anyone, that kind of statement needs to be qualified.
UPDATE: I was wrong about this, see comment reply. The python-sdk in https://github.com/modelcontextprotocol is a fork of FastMCP.
Is there some sort of tool that can be expressed as an MCP but not as an API or CLI command? Obviously we shouldn't map existing APIs to MCP tools, but why would I use an MCP over just writing a new "agentic-ready" API route?
- If your agent doesn't have a full Bash-style code execution environment it can't run skills. MCP is a solid option for wiring in tools there.
- MCP can help solve authentication, keeping credentials for things in a place where the agent can't steal those credentials if it gets compromised. MCPs can also better handle access control and audit logging in a single place.
With MCP you can at least set things up such that the agent can't access the raw credentials directly.
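A stdlib-only sketch of that credential-isolation pattern (the tool names, dispatch shape, and `SERVICE_API_KEY` variable are illustrative, not a real MCP SDK): the secret lives in the tool server's process, and the agent only ever sees tool names and results.

```python
import os

# Held server-side; never serialized into the model's context.
API_KEY = os.environ.get("SERVICE_API_KEY", "sk-demo-not-real")

def call_service(endpoint: str) -> dict:
    # A real server would make an authenticated HTTP request here.
    # The key is attached inside the server, never in the prompt.
    return {"endpoint": endpoint, "authorized": bool(API_KEY)}

# The only thing exposed to the agent: a tool schema with no secrets in it.
TOOLS = {
    "get_invoice": {
        "description": "Fetch an invoice by id",
        "handler": lambda invoice_id: call_service(f"/invoices/{invoice_id}"),
    },
}

def handle_tool_call(name: str, **kwargs):
    # Agent-facing dispatch: credentials never cross this boundary,
    # so a prompt-injected agent has nothing to exfiltrate.
    return TOOLS[name]["handler"](**kwargs)

result = handle_tool_call("get_invoice", invoice_id="42")
print(result)
```

Even if the agent is fully compromised, the worst it can do is call the tools it was given; it can't `cat` the key out of a config file the way it could in a skills-plus-raw-API setup.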
(Moved from wrong sub)
Also, I run programs on my machine with a different privilege level than myself all the time. Why can’t an agent do that?
The LLM can look at the OpenAPI spec and construct queries - I often do this pretty easily.
Skills with an API exposed by the service usually means your coding agent can access the credentials for that service. This means that if you are hit by a prompt injection the attacker can steal those credentials.
As the article states, LLMs are fantastic at writing code, and not so good at issuing tool calls.
Also, even with the above, there is more opportunity for the bot to go off piste and run `cat` this and `awk` that. Meanwhile the "operator", i.e. the Grandpa who has an iPhone but has never used a computer, has no chance of getting the bot back on track as he tries to renew his car insurance.
"Just going to try using sed to get the output of curl https://.."
"I don't understand I just want to know the excess for not at fault incident when the other guy is uninsured".
Everyone has gone claw-brained. But it really is OK to write code, save that code to disk, and execute that code later.
You can use MCP, or even just a hard-coded API call from your back end to the service you want to use, like it's 2022.
- MCPs are trivial to write and maintain - at least in my experience and language of choice - and bash scripts are cursed. But I guess you can use a different scripting language.
- Agents can pollute their context by reading the script. I want to expose a black box that just works.
A skill is, at the end of the day, just a prompt.
A skill can also act as an abstraction layer over many tools (implemented as an mcp server) to save context tokens.
Skills offer a short description of their use and thus occupy only a few hundred tokens in the context, compared to the thousands of tokens needed if all the tools were in the context.
When the LLM decides that the skill is useful, we can dynamically load the skill's tools into the context (using a `load_skill` meta-tool).
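A plain-Python sketch of that progressive-disclosure pattern (the skill names and the `load_skill` meta-tool shape are illustrative): only one-line descriptions sit in the context up front, and full tool listings load on demand.

```python
SKILLS = {
    "pdf": {
        "description": "Work with PDF files",  # a few tokens, always in context
        "tools": {
            "extract_text": "Extract text from a PDF",
            "merge_pdfs": "Merge several PDFs into one",
        },
    },
    "jira": {
        "description": "Query and update JIRA tickets",
        "tools": {"get_ticket": "Fetch a ticket by key"},
    },
}

def initial_context() -> list[str]:
    # What the model sees up front: one line per skill, not per tool.
    return [f"{name}: {s['description']}" for name, s in SKILLS.items()]

def load_skill(name: str) -> dict:
    # Meta-tool the model calls once it decides a skill is relevant;
    # only then do the full tool definitions enter the context.
    return SKILLS[name]["tools"]

print(initial_context())
print(load_skill("pdf"))
```

The token savings scale with the number of skills: ten skills cost ten description lines until one is actually loaded.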
Better sandboxing. Accessing an MCP server doesn't require you to give an agent permissions on your local machine.
MCP servers can expose tools, resources, and prompts. If you're using a skill, you can "install" it from a remote source by exposing it on the MCP server as a "prompt". That helps solve the "keep it updated" problem for skills - it gets updated by interrogating the MCP server again.
Or if your agentic workflow needs some data file to run, you can tell the agent to grab that from the MCP server as a resource. And since it's not a static file, the content can update dynamically -- you could read stocks or the latest state of a JIRA ticket or etc. It's like an AI-first, dynamic content filesystem.
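A sketch of both ideas, using in-process stand-ins for real MCP requests (the `prompts/get` and `resources/read` method names follow the MCP spec; the server contents here are made up): the skill text is re-fetched from the server, so it stays current, and the resource is generated at read time rather than stored as a static file.

```python
import datetime

# Stand-in for a remote MCP server's prompt store.
SERVER_PROMPTS = {"renew-insurance": "v2: Ask for the policy number first, then ..."}

def mcp_request(method: str, name: str) -> str:
    if method == "prompts/get":
        # Re-fetching each session picks up the latest version automatically:
        # this is the "keep the skill updated" property.
        return SERVER_PROMPTS[name]
    if method == "resources/read":
        # Not a static file: content is produced when the agent reads it.
        return f"generated at {datetime.date.today().isoformat()}"
    raise ValueError(f"unknown method: {method}")

skill_text = mcp_request("prompts/get", "renew-insurance")
print(skill_text)
print(mcp_request("resources/read", "ticket-state"))
```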
There'd be a little extra friction compared to MCP – the agent would presumably have to find and download and read the OpenAPI/Swagger spec, and the auth story might be a little clunkier – but you could definitely do it, and I'm sure many people do.
Beyond that, there are a few concrete things MCP provides that I'm a fan of:
- first-class integration with LLM vendors/portals (Claude, ChatGPT, etc), where actual customers are frequently spending their time and attention
- UX support via the MCP Apps protocol extension (this hasn't really entered the zeitgeist yet, but I'm quite bullish on it)
- code mode (if using FastMCP)
- lots of flexibility on tool listings – it's trivial to completely show/hide tools based on access controls, versus having an AI repeatedly stumble into an API endpoint that its credentials aren't valid for
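That last point can be sketched in a few lines (scope and tool names are illustrative): tools the caller's credentials can't use are simply absent from the listing, instead of failing at call time.

```python
# Each tool declares the scopes it requires.
TOOL_SCOPES = {
    "read_ticket": {"tickets:read"},
    "close_ticket": {"tickets:write"},
    "delete_project": {"admin"},
}

def list_tools(caller_scopes: set[str]) -> list[str]:
    # Only advertise tools the caller is actually allowed to invoke,
    # so the model never wastes turns on endpoints it can't use.
    return sorted(t for t, need in TOOL_SCOPES.items() if need <= caller_scopes)

print(list_tools({"tickets:read"}))
print(list_tools({"tickets:read", "tickets:write"}))
```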
I could keep going, but the point is that while it's possible to use another tool for the job and get _something_ up and running, MCP (and FastMCP, as a great implementation) is purpose built for it, with a lot of little considerations to help out.
Then you’d need a way of passing all that info on to a model, so something top level.
It’d be useful to do things in the same way as others (so if everyone is adding Openapi/swagger you’d do the same if you didn’t have a reason not to).
And then you’ve just reinvented something like MCP.
It’s just a standardised format.
I think MCP is fine in an env where you have no access to tools, but you cannot ripgrep your way through an MCP (unless you make an MCP that calls ripgrep on e.g. a repo, in which case what are you even doing).
For client side MCP it's a different story.
Seems like unnecessarily constraining it.
On the other hand, something like context7 is just `npx ctx7 resolve <lib>` then `npx ctx7 docs <id>` — two stateless shell calls, done. No server to maintain, no protocol overhead. CLI is the right tool there.