
MCP vs. APIs: Why You Need Both for AI Applications 

I’m a huge fan of Model Context Protocol (MCP) and have been fortunate to talk about it at several conferences and to customers lately. In those talks, a few questions often come up: What’s the difference between MCP and APIs? Is an MCP server just a wrapper around my API? If so, why do we even need one?

For me, the tl;dr is that APIs are for services, while MCP is for LLMs. They serve different consumers and solve different problems. But this isn’t an either/or choice. Today we need both. In this article, I break down the differences between APIs and MCP, explain when to use each, and show how API gateways can help you manage both.

APIs in the age of AI

If you’re a developer, chances are you already know what an API is, so I’ll keep this short. An API defines a contract between services. You specify endpoints, request and response formats, error handling, and authentication requirements. When a client (be it mobile, web, or another service) needs to fetch user data or integrate with a third-party system like a payment processor, it calls an API.

The key takeaway is that APIs are designed for applications. As a developer, you write code to call endpoints, parse responses, handle errors, and decide what to do with the data. You follow a procedural workflow where the application is in control. APIs remain essential in agentic applications. When you begin building MCP servers, you’re probably calling APIs, databases, and other services under the hood. 

What is MCP?

Model Context Protocol (MCP) is an open standard that’s barely a year old and has taken the world by storm. It gives AI models a structured, consistent way to connect to data and take actions. It was recently donated to the Linux Foundation, a signal of its importance to the broader community.

MCP was created to solve a real problem. Large language models (LLMs) have limitations, like stale data, a tendency to hallucinate (make things up), and a lack of access to your private or proprietary data. MCP helps alleviate some of these problems by providing a standardized way to augment context for an LLM. Instead of building provider-specific integrations, you build one MCP server that any client supporting the protocol can use: ChatGPT, Claude, Cursor, IntelliJ, and a list that continues to grow.

An MCP server exposes three primitives to interact with an LLM:

  • Tools – Executable functions that the model decides to call. Think querying a database, searching the web, or accessing customer records.
  • Resources – Data sources provided directly to the AI as context, such as file contents, logs, or documentation.
  • Prompts – Reusable templates that help users accomplish specific tasks without writing complex prompts from scratch.
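To make the three primitives concrete, here is a schematic sketch in plain Python. This is not the official MCP SDK (which handles the actual protocol wiring); every class, name, and field below is illustrative, modeling only how tools, resources, and prompts relate to a server.

```python
# Schematic model of an MCP server's three primitives.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class McpServer:
    name: str
    tools: dict[str, Callable] = field(default_factory=dict)      # name -> function
    resources: dict[str, str] = field(default_factory=dict)        # uri -> content
    prompts: dict[str, str] = field(default_factory=dict)          # name -> template

    def tool(self, fn: Callable) -> Callable:
        """Register an executable function the model may decide to call."""
        self.tools[fn.__name__] = fn
        return fn

server = McpServer("customer-support")

# Tool: an executable function the model chooses to invoke.
@server.tool
def lookup_customer(customer_id: str) -> dict:
    # A real server would query a database or call an API here.
    return {"id": customer_id, "plan": "pro", "open_tickets": 2}

# Resource: data handed to the model directly as context.
server.resources["file:///docs/returns-policy.md"] = "Returns accepted within 30 days."

# Prompt: a reusable template so users don't write complex prompts from scratch.
server.prompts["summarize_ticket"] = "Summarize ticket {ticket_id} in two sentences."
```

A real implementation would use an MCP SDK for your language, but the shape is the same: you register tools, resources, and prompts, and the client-side model decides when to use them.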

The key differentiator between APIs and MCP is who’s in control. In the case of an API, the application decides what to call and when. With MCP, the model determines what tools to invoke based on the user’s request.

MCP is not just a wrapper around your API

When developers first come across MCP, their instinct is often to wrap their existing API in an MCP server. This isn’t the right approach, and I recommend thinking carefully about what MCP is actually for and who’s consuming it.

Yes, an MCP server may call an API under the hood, but that’s an implementation detail. You need to design MCP servers with LLMs in mind from the start. Tokens are the currency of LLMs. Every tool definition, every response, every piece of context consumes tokens from the model’s context window. If you simply wrap your existing APIs that return 50 fields when your model only needs 3, you’re wasting tokens and money. Worse, you risk context rot. The model gets confused by too much information, which can lead to performance degradation.

My recommendation is to start fresh with this mindset. Your goal should not be to throw as much data and functionality at an LLM as possible. Think about what the model needs to accomplish, then design your tools around that. Instead of returning 1,000 users subscribed to product X and letting the model calculate the number of users and current revenue on the fly, return that data directly.
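A minimal sketch of that idea, with hypothetical data and function names: rather than dumping 1,000 raw records into the context window, the tool returns the pre-computed answer.

```python
# Hypothetical example: the underlying API returns many rows with many
# fields, but the MCP tool returns only the numbers the model needs.
RAW_API_RESPONSE = [
    {"id": i, "name": f"user-{i}", "product": "X", "monthly_fee": 20.0,
     "address": "...", "signup_channel": "web"}  # ...imagine dozens more fields
    for i in range(1000)
]

def subscription_summary(product: str) -> dict:
    """Token-friendly tool result: the answer, not the raw data."""
    subs = [u for u in RAW_API_RESPONSE if u["product"] == product]
    return {
        "product": product,
        "subscriber_count": len(subs),
        "monthly_revenue": sum(u["monthly_fee"] for u in subs),
    }

# A handful of tokens instead of 1,000 records in the context window.
print(subscription_summary("X"))
```

The model gets exactly what it needs to answer the user, and you avoid both wasted tokens and the arithmetic mistakes an LLM might make summing values itself.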

The obvious question is: when do you use each? APIs remain the right choice for service-to-service communication. When your mobile app or web front end needs data, call an API. MCP is the right choice when an LLM is the consumer. In many cases you’ll need both. Your web app calls the API directly, while the AI assistant embedded in the same application uses the MCP server. Same underlying data, different interfaces.
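One way to structure this (a sketch; every name here is hypothetical) is a shared service layer feeding two thin interfaces: a full-fidelity API handler for applications, and a trimmed MCP tool for the model.

```python
def get_user(user_id: str) -> dict:
    """Shared service layer: the single source of truth."""
    return {"id": user_id, "name": "Ada", "plan": "pro",
            "created_at": "2024-01-15", "last_login": "2025-06-01"}

def api_handler(user_id: str) -> dict:
    """API consumer: the web app gets the full record."""
    return {"status": 200, "body": get_user(user_id)}

def mcp_tool_get_user(user_id: str) -> dict:
    """LLM consumer: trimmed to the fields the model actually needs."""
    user = get_user(user_id)
    return {"name": user["name"], "plan": user["plan"]}
```

Both interfaces stay consistent because they read from the same function; only the shape of the response changes to fit the consumer.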

The role of API gateways 

If you’re running APIs in production, you’re probably already using a gateway. If you’re in the Spring ecosystem, you might be using Spring Cloud Gateway. If you don’t want to manage a gateway, you might go up the abstraction ladder and reach for a platform like Tanzu Platform. Gateways handle cross-cutting concerns like authentication, rate limiting, and observability. You don’t want to reimplement these in every service.

The same is true for MCP servers. At the end of the day, an MCP server is just another app. As your MCP server traffic grows, you need the same level of governance. Gateways and platforms can manage both API and MCP traffic. This gives you unified security and observability for application-driven and AI-driven requests in one place.

There is another use case worth considering. Coding agents, like Claude Code and Cursor, need access to MCP servers to be useful. But should you let developers install any MCP server they find? Probably not. In the enterprise, users can’t freely install any applications on their machine. There’s usually a preapproved list or process for doing so. The same should be true for MCP servers. A gateway gives you a predefined list of approved MCP servers that developers can choose from. Your team gets the productivity benefits of AI tooling, while the organization maintains control over what data and systems those tools can access.

MCP and APIs: It’s an ‘and,’ not an ‘or’

APIs and MCP are not competing technologies. In fact, they are complementary. They solve different problems and serve different consumers. APIs remain the right choice for service-to-service communication and traditional application development. MCP gives LLMs a standardized way to access your data and take action on your behalf.

The key takeaway here is to make sure you design each for its intended consumer. Don’t take the easy way out and wrap your API in an MCP server and call it a day. Think about the model’s current limitations and what it needs to be successful. Be intentional about token usage, and build your MCP servers from the ground up. Use a gateway to manage cross-cutting concerns for both, and you’ll have a foundation that serves your applications and your AI agents.

In the end, it’s not a choice between APIs and MCP. We need both.

Want to learn more? Check out these resources: 

[Demo] Production-Worthy AI with Spring AI, MCP, and Spring Security
[Webinar] Extend Your Existing APIs for Agentic Workflows with Spring HATEOAS
[Blog] Building an Enterprise MCP Server Marketplace with Tanzu Platform
[Case Study] How Broadcom’s IT Leverages Tanzu Platform to Achieve Enterprise-Scale Agentic Business Transformation
[Blog] Secure and Scale Your Digital Transformation with Spring Cloud Gateway Extensions
[Blog] The Security Gap in AI Applications: Rethinking API Protection for a New Era