Sunday, December 7, 2025

Model Context Protocol and MCP's Three Core Interaction Types

LLMs can chat like humans, write blogs, and even book meetings—but scaling them is a nightmare. Every new tool traditionally needs its own custom setup and code, creating a web of fragile bridges. The world of artificial intelligence is moving fast. Every week, it seems like there’s a new tool, framework, or model that promises to make AI better. But as developers build more AI applications, one big problem keeps showing up: the lack of context. Each tool works on its own. Each model has its own memory, its own data, and its own way of understanding the world. This makes it hard for different parts of an AI system to talk to each other. MCP solves this by acting as a central hub: one protocol, many tools. It is a new standard for how AI tools share context and communicate. It allows large language models and AI agents to connect with external data sources, apps, and tools in a structured way.

The Problem with Disconnected AI Tools

Imagine you’re building a customer support chatbot using a large language model like GPT. It can generate great responses, but it knows nothing about your actual customers. To make it useful, you connect it to your CRM (Customer Relationship Management) system, e.g., Salesforce, Microsoft Dynamics 365, HubSpot, or Zoho CRM, for customer records. Then you link it to your ticketing system for open cases. Maybe you add a knowledge base for reference.

Each integration is a separate job: writing custom API calls, formatting data, managing authentication, and handling errors. Every new data source means more glue code. The LLM doesn’t naturally know how to talk to these systems. Now scale that up. You have five or ten tools—an AI assistant, a search engine, a summarization tool, and automation scripts. Each stores information differently. None of them share context. If one model learns something about a user’s intent, the others can’t use it. You end up with isolated pockets of intelligence instead of a connected ecosystem. This is exactly the problem MCP was designed to solve.

Model Context Protocol is an open standard from Anthropic that lets AI applications connect to external tools and data through a single, consistent client‑server protocol. MCP exposes capabilities as tools, resources, and prompts, discovered and invoked over a standardized interface so models can act on real systems without custom one‑off connectors. Under the hood, MCP uses JSON‑RPC 2.0 for message exchange, with transports like stdio for local servers and HTTP or SSE for remote ones. Enterprise‑grade authorization for HTTP transports follows OAuth 2.1 style flows, including authorization code and client credentials, so agents can act with least‑privilege tokens.
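To make that concrete, here is a minimal sketch of what those JSON‑RPC 2.0 messages look like, written out as Python dicts. The tools/call method is part of the MCP spec, but the get_weather tool name and its arguments are hypothetical examples, not anything defined by the protocol.

```python
# A tools/call request and its response, shown as Python dicts.
# "get_weather" and its arguments are hypothetical; only the framing is MCP's.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",                 # standard MCP method for invoking a tool
    "params": {
        "name": "get_weather",              # hypothetical tool exposed by some server
        "arguments": {"city": "Bengaluru"},
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 1,                                # matches the request id
    "result": {
        "content": [
            {"type": "text", "text": "28°C, partly cloudy"}   # tool output as content blocks
        ]
    },
}
```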

Instead of building a separate connector for every service, MCP gives the model a universal way to interact. For example, if you say, “Book a meeting with Ram,” the model knows it needs a calendar tool. It sends the request to the MCP client, which forwards it to the right server. The server books the meeting and returns the result, which the client passes back to the model. 

With MCP, it’s not just easier to build—it’s easier to trust, scale, and grow. One standard means faster integrations, stronger security, and a future-proof foundation for enterprise AI.

If the tool runs on your computer, MCP connects directly (like plugging in a cable). If the tool is online, it uses standard web methods like HTTP or Server-Sent Events. To keep things secure, MCP follows OAuth 2.1, the same system trusted by big tech platforms, so the AI only gets the exact permissions it needs—nothing more. This means safer, controlled access without exposing sensitive data.


AI ↔ MCP Client ↔ MCP Server → Tools



  • LLM at the top
  • MCP Client as the orchestrator
  • Three MCP Servers each linked to different icons (flight booking, calendar, and database)
  • Local tools: If the tool runs on your computer, MCP connects directly using stdio (like plugging in a cable).
  • Remote tools: If the tool is online, MCP uses standard web methods like HTTP or Server-Sent Events (SSE).
How it works
HTTP: The client sends a request, the server sends back a response, and the connection usually ends. Good for one-time actions like “search flights” or “get pricing.”

SSE: The client opens a single HTTP connection, and the server can keep sending updates over time without the client asking again. Perfect for real-time updates like “flight price changes,” “order status,” or “chat messages.”
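A rough sketch of the difference using the httpx library; the endpoint URLs and event payloads are hypothetical, and a real MCP server will define its own paths and headers.

```python
import httpx
import json

BASE = "https://example.com/mcp"   # hypothetical remote MCP endpoint

# One-shot HTTP: send one request, read one response, connection ends.
resp = httpx.post(BASE, json={"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
print(resp.json())

# SSE: keep one connection open and react as the server pushes events.
with httpx.stream("GET", BASE + "/events") as stream:
    for line in stream.iter_lines():
        if line.startswith("data:"):
            event = json.loads(line[len("data:"):])
            print("server pushed:", event)
```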

This diagram breaks down the Model Context Protocol (MCP) workflow, showing how the Client, MCP Server, LLM, and External Data Source interact.
1. The Client (e.g., Claude Desktop) requests available tools from the MCP Server.
2. The Client sends the user’s query and tool info to the LLM (e.g., Claude). If external data is needed, the LLM suggests a tool.
3. The Client asks the MCP Server to run the tool, which retrieves data from an External Data Source (e.g., GitHub API).
4. The retrieved data is sent back to the Client, which forwards it to the LLM for the final response.

This structured flow ensures the LLM has the right context to generate accurate answers.
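A host-side sketch of that four-step loop. Everything here is a hypothetical stand-in: mcp_client for a real MCP client session, llm for whichever LLM SDK the host uses.

```python
# Minimal sketch of the tool-use loop described above; all helpers are hypothetical.

def answer(user_query: str, mcp_client, llm) -> str:
    # 1. Discover what the connected MCP server can do.
    tools = mcp_client.list_tools()

    # 2. Give the model the user's query plus the tool catalog.
    reply = llm.chat(user_query, tools=tools)

    # 3. If the model asked for a tool, run it through the MCP server.
    while reply.wants_tool:
        result = mcp_client.call_tool(reply.tool_name, reply.tool_args)
        # 4. Feed the tool result back so the model can finish its answer.
        reply = llm.chat_with_result(result)

    return reply.text
```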

How MCP Organizes Capabilities

The Model Context Protocol (MCP) changes the way AI applications connect to external systems. Instead of hardcoding integrations, MCP provides a standard interface that exposes three types of capabilities:

  • Tools – Actions the AI can perform, like search flights, book a ticket, or send a message.
  • Resources – Data the AI can access, such as a calendar file, a PDF document, or a database entry.
  • Prompts – Predefined templates or instructions, for example summarize a document or generate an email.



These three primitives work together to create richer, more reliable experiences. Tools handle actions, resources provide information, and prompts guide the AI’s behavior. By understanding when to use each, developers gain more control and flexibility when building AI-powered applications.


The diagram above shows MCP acting as a central hub where three types of capabilities connect:

  • Prompts (user-driven),
  • Resources (application-driven), and
  • Tools (model-driven).

All of these flow through the MCP Server, which then interacts with external systems like APIs, databases, and services.

------

Solving the M×N Integration Challenge



Let’s understand the core integration problem that MCP addresses.

When building AI applications that need access to multiple organizational data sources, you encounter the M×N problem:
For M different AI applications connecting to N different data sources, you must create and maintain M×N custom integrations.

This results in a complex integration matrix that becomes unmanageable as your organization scales. Every new application or data source multiplies the integration effort, leading to duplicated work across teams.

MCP simplifies this equation. Instead of M×N integrations, MCP reduces the complexity to M+N:

  • Build M clients for your AI applications
  • Build N servers for your data sources

With MCP, you only need M+N implementations, dramatically reducing development overhead and enabling scalable, standardized integration.
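A quick back-of-the-envelope illustration of the difference, with the team sizes below picked arbitrarily:

```python
apps, sources = 6, 10                 # example counts, chosen for illustration

point_to_point = apps * sources       # custom connectors without MCP
with_mcp = apps + sources             # MCP clients + MCP servers

print(point_to_point)                 # 60 integrations to build and maintain
print(with_mcp)                       # 16 implementations
```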


How Prompts Work in MCP

  1. User invokes a prompt
    The user asks the system to run a predefined prompt, for example:
    “Analyze project.”

  2. Client sends a prompt request to MCP Server
    The client calls the MCP Server using prompts/get to retrieve the prompt definition and any dynamic content.

  3. MCP Server fetches live data from external systems
    If the prompt requires context (like logs, code, or metrics), the MCP Server queries an external API or data source.

  4. External API returns current data
    The external system sends back the requested information to the MCP Server.

  5. MCP Server generates a dynamic prompt
    Using the fetched data, the MCP Server builds a formatted prompt message that includes real-time context.

  6. Client adds the prompt to the AI model’s context
    The client injects this dynamic prompt into the model’s input so the AI can reason with updated information.

  7. AI model produces the final response
    The client displays the AI’s answer to the user.
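A sketch of the prompts/get exchange from steps 2 and 5, again as JSON-RPC messages written as Python dicts. The "analyze-project" prompt name, its argument, and the embedded log text are hypothetical.

```python
# Hypothetical prompts/get request: the client asks the server for a named
# prompt, passing an argument the server can use to fetch live context.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "prompts/get",
    "params": {
        "name": "analyze-project",
        "arguments": {"project": "payments-service"},
    },
}

# The server responds with ready-to-use messages that already embed the
# real-time data (logs, metrics, etc.) it pulled from external systems.
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "description": "Analyze the current state of a project",
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Analyze payments-service. Recent errors: ...",
                },
            }
        ],
    },
}
```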

How Tools Work in MCP

  • User asks: “Calculate the sum of 100 and 50.”
  • Client sends the request to the MCP Server.
  • AI Model decides which tool to use (e.g., calculator_tool).
  • MCP Server invokes the tool and interacts with the External System if needed (when the tool is not available locally).
  • Tool performs the calculation and returns the result.
  • AI Model generates the final response: “The sum is 150.”
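A minimal server exposing such a calculator tool, sketched with the FastMCP helper from the official MCP Python SDK. This assumes the SDK is installed (pip install mcp); the tool name and docstring are illustrative.

```python
# Sketch of an MCP server that exposes one tool over stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calculator")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Return the sum of two numbers."""
    return a + b

if __name__ == "__main__":
    # Runs over stdio by default, so a local client can connect to it directly.
    mcp.run()
```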

How Resources Work in MCP

Step 1: MCP Server exposes resources

The MCP Server acts as the central hub. It makes different types of resources available to the AI application. These resources are structured pieces of data or services that the model can use.

Step 2: Resource types and their roles

The diagram shows four common resource categories:

  1. RAG System (Build embeddings)

    • The server provides access to a Retrieval-Augmented Generation (RAG) system.
    • This resource helps the AI build embeddings and retrieve relevant context from large datasets.
  2. Cache Layer (Store frequently used data)

    • A resource that stores commonly accessed data for quick retrieval.
    • This improves performance and reduces repeated calls to external systems.
  3. Analytics (Transform & analyze)

    • A resource that processes raw data into insights.
    • For example, analyzing logs or metrics before sending them to the model.
  4. Integration (Combine multiple sources)

    • A resource that aggregates data from different APIs or databases.
    • This gives the AI a unified view of information from multiple systems.

Step 3: How the AI uses these resources

  • When the AI needs context (e.g., logs, historical data, or combined insights), the MCP Client requests these resources from the MCP Server.
  • The server fetches or generates the resource and returns it in a structured format.
  • The AI then uses this resource to improve its reasoning and generate accurate responses.
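A sketch of how a client reads one of these resources over MCP, shown as a resources/read JSON-RPC exchange in Python dicts. The URI scheme and the returned log text are hypothetical.

```python
# Hypothetical resources/read request for a log resource exposed by a server.
request = {
    "jsonrpc": "2.0",
    "id": 12,
    "method": "resources/read",
    "params": {"uri": "logs://payments-service/today"},   # hypothetical resource URI
}

# The server returns the resource contents in a structured form the client can
# hand to the model as context.
response = {
    "jsonrpc": "2.0",
    "id": 12,
    "result": {
        "contents": [
            {
                "uri": "logs://payments-service/today",
                "mimeType": "text/plain",
                "text": "2025-12-07 10:41:02 ERROR timeout calling card gateway ...",
            }
        ]
    },
}
```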

----------------------------------------

How MCP works

  1. User request arrives at the LLM. The host application passes the user’s intent to the model and supplies the catalog of available MCP tools and resources from connected servers.
  2. MCP client maintains connections. The host spins up one MCP client per server, performs initialization, and negotiates capabilities.
  3. Tool selection and invocation. The LLM chooses a tool based on descriptions and schemas, then asks the client to call it with structured parameters. 
  4. Server executes and returns results. The MCP server performs the action or fetches data and returns structured output via JSON‑RPC. 
  5. LLM composes the final answer. The model uses results to respond or to continue a multi‑step workflow, optionally calling more tools until the task is complete. 
  6. Optional authorization. If the server requires auth, the client follows the specified OAuth flow and receives scoped tokens before making tool calls.
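For step 6, here is a rough sketch of an OAuth 2.1 client-credentials exchange over an HTTP transport. The token and MCP endpoints, client identifiers, and scopes are hypothetical; a real deployment should follow the MCP authorization spec and its identity provider's documentation.

```python
import httpx

# Hypothetical endpoints; in practice these come from the server's OAuth metadata.
TOKEN_URL = "https://auth.example.com/oauth/token"
MCP_URL = "https://mcp.example.com/"

# Client-credentials grant: exchange client credentials for a scoped access token.
token_resp = httpx.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "agent-client",
        "client_secret": "<kept in a secret store, never hardcoded>",
        "scope": "calendar.read calendar.write",   # least-privilege scopes
    },
)
access_token = token_resp.json()["access_token"]

# The MCP client then attaches the token to its requests to the server.
headers = {"Authorization": f"Bearer {access_token}"}
resp = httpx.post(
    MCP_URL,
    json={"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
    headers=headers,
)
```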

Example: How MCP connects an LLM to flight booking tools

High level

  • LLM receives the user request.
  • MCP Client brokers tool calls.
  • MCP Servers expose tools and data over the Model Context Protocol.
  • Results flow back to the LLM, which composes the final answer for the user.

Typical servers for air booking

  • Flight Search Server - route availability, schedules, fares
  • Pricing Server - fare rules, taxes, ancillaries
  • Booking Server - PNR creation, seat selection
  • Payment Server - tokenize card, 3DS, capture
  • Loyalty Server - miles accrual, status rules
  • Notifications Server - email or SMS itinerary
  • Calendar Server - add travel to calendar
  • Data Store Server - cache, user profile, past trips

Step by step booking flow

  1. User: “Book BLR to SFO next Friday, return Tuesday, aisle seat, use miles if cheaper.”
  2. LLM interprets intent and constraints.
  3. MCP Client orchestrates calls:
    • Flight Search Server: search BLR ↔ SFO, date constraints
    • Pricing Server: evaluate fares, fare families, baggage, refundability
    • Loyalty Server: compare miles redemption vs cash
  4. LLM ranks options and asks user to confirm.
  5. On confirmation:
    • Booking Server: create PNR, select seats
    • Payment Server: charge or redeem miles
    • Notifications Server: send ticket and receipt
    • Calendar Server: add flights to calendar
  6. MCP Client returns structured results to LLM.
  7. LLM produces the final answer with itinerary details.
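In code, the client-side orchestration might look roughly like the sketch below. Every server name, tool name, and argument is a hypothetical illustration of the flow above, not a real API.

```python
# Rough sketch of the booking orchestration. Each entry in `sessions` is assumed
# to be an MCP client connected to one server; tool names and args are illustrative.

def book_trip(sessions: dict, confirm) -> dict:
    # Steps 2-3: search, price, and compare miles redemption vs cash.
    flights = sessions["flight_search"].call_tool(
        "search_flights",
        {"origin": "BLR", "destination": "SFO", "depart": "next Friday", "return": "Tuesday"},
    )
    priced = sessions["pricing"].call_tool("price_options", {"flights": flights})
    miles = sessions["loyalty"].call_tool("compare_redemption", {"options": priced})

    # Step 4: the LLM ranks options; here a confirm callback stands in for the user.
    choice = confirm(priced, miles)

    # Step 5: book, pay, notify, and add the trip to the calendar.
    pnr = sessions["booking"].call_tool("create_pnr", {"option": choice, "seat": "aisle"})
    sessions["payment"].call_tool("charge", {"pnr": pnr})
    sessions["notifications"].call_tool("send_itinerary", {"pnr": pnr})
    sessions["calendar"].call_tool("add_trip", {"pnr": pnr})
    return pnr
```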

Why MCP works well in this scenario

  • Standard protocol for tool discovery and capabilities
  • Secure, isolated tool execution with clear inputs and outputs
  • Composable servers so you can add or swap providers without changing the LLM logic

Why MCP is Required for Scaling LLMs

When LLMs grow in size and capability, they need to interact with more tools, data sources, and systems. Without a standard protocol, every integration becomes a custom job, which is hard to maintain and slows down scaling. MCP solves this by:

  • Standardizing communication: Instead of building one-off connectors for each tool, MCP provides a universal protocol (JSON-RPC) for all tools.
  • Dynamic capability discovery: LLMs can automatically learn what tools are available and what they can do, without hardcoding.
  • Secure and controlled access: OAuth-based authorization ensures least-privilege access, which is critical when scaling across enterprise environments.
  • Local and remote flexibility: MCP supports both local tools (via stdio) and remote services (via HTTP/SSE), making it easy to scale from desktop to cloud.

How MCP Enables Scaling

  • Plug-and-play architecture: Add new servers without changing the LLM logic.
  • Reduced context overhead: Instead of dumping thousands of tool definitions into the model’s prompt, MCP lets the client manage them efficiently.
  • Ecosystem growth: As more MCP servers are built, LLMs can instantly leverage them—accelerating feature expansion.

Why MCP is better than ad‑hoc tool wrappers

  • One protocol. Fewer integrations. MCP reduces the M×N mess of per‑service connectors to a single standard that any client can speak and any server can implement.
  • Clear capability discovery. Clients list server tools and resources using uniform schema so the LLM can reason about what to call and with which parameters. 
  • Vendor‑neutral ecosystem. MCP is open, with SDKs and many reference servers, and is used by multiple apps, which avoids lock‑in and speeds reuse.
  • Secure by design. Standardized auth and transport guidance, plus emerging enterprise controls that monitor MCP traffic and enforce least‑privilege access.
  • Operational efficiency. New techniques like MCP code‑execution patterns reduce token overhead compared to dumping thousands of tool definitions into context.
  • Usable locally. Desktop extensions package local MCP servers for one‑click install, which makes private data integrations accessible without complex setup.

How MCP Integrates with AWS and Amazon Bedrock

MCP enhances AWS services by enabling secure, context-aware AI applications that can access organizational data and tools. A key integration point is Amazon Bedrock, which provides a robust foundation for enterprise AI.

Amazon Bedrock and Language Models

Amazon Bedrock is AWS’s fully managed service for foundation models (FMs), offering a unified API to leading models such as:

  • Anthropic Claude
  • Meta Llama
  • Amazon Titan and Amazon Nova

Bedrock stands out for enterprise use because it leverages AWS’s security and compliance ecosystem, including IAM for access control and CloudWatch for monitoring.

The Converse API and Tool Use

At the core of Bedrock’s flexibility is the Converse API, which supports multi-turn conversations and introduces “tool use.” This allows models to:

  • Detect when external data is needed
  • Request that data via structured function calls
  • Incorporate the retrieved information into responses

MCP with Bedrock: Seamless Integration

MCP’s standardized protocol for accessing external systems aligns perfectly with Bedrock’s tool-use capability. Together, they create an architecture where Bedrock models can request data through the Converse API, and MCP fulfills those requests securely and efficiently.
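A rough sketch of that bridge using boto3's Converse API. The model ID is one example of a Bedrock model identifier, while the tool schema, the customer-lookup tool, and the call_mcp_tool helper are assumptions for illustration; in a real application the helper would delegate to an MCP client session.

```python
import boto3

def call_mcp_tool(name: str, arguments: dict) -> dict:
    """Placeholder: a real app would route this through an MCP client session."""
    raise NotImplementedError

bedrock = boto3.client("bedrock-runtime")

# Advertise an MCP-backed tool to the model via the Converse API's toolConfig.
tool_config = {
    "tools": [{
        "toolSpec": {
            "name": "get_customer_record",            # hypothetical MCP-backed tool
            "description": "Look up a customer record in the CRM",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"customer_id": {"type": "string"}},
                "required": ["customer_id"],
            }},
        }
    }]
}

messages = [{"role": "user", "content": [{"text": "What is the status of customer 42?"}]}]
resp = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",   # example model ID
    messages=messages,
    toolConfig=tool_config,
)

# If the model requested the tool, satisfy the request through MCP and return
# the result to Bedrock as a toolResult message on the next converse call.
if resp["stopReason"] == "tool_use":
    for block in resp["output"]["message"]["content"]:
        if "toolUse" in block:
            tool_use = block["toolUse"]
            result = call_mcp_tool(tool_use["name"], tool_use["input"])
```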

NOTE: Amazon Titan and Amazon Nova are AWS-developed foundation model (FM) families available through Amazon Bedrock, each designed for specific AI use cases and enterprise needs.
  • Titan is ideal for traditional text and embedding uses, focusing on enterprise readiness.
  • Nova is designed for advanced, multimodal reasoning with high performance and customization flexibility.

Conclusion

MCP is still new, but the idea behind it is powerful. By creating a shared protocol for context, it lowers the barrier for innovation. Developers can focus on what their AI does, not how it connects. Companies can build products that play well with others instead of locking users into closed systems. The goal is not just smarter AI, but simpler AI: AI that understands what’s happening around it, reacts in real time, and works naturally with the tools you already use. Model Context Protocol is a big step toward that future. It’s the bridge between intelligence and context, and it’s what will make tomorrow’s AI systems faster, more reliable, and far more human in how they understand the world.

If your roadmap includes broader agent capabilities, more systems, and stronger guardrails, MCP is the foundation that turns one‑off integrations into an extensible platform. It gives LLMs a clear, secure, and efficient way to access the real world, which is exactly what you need to scale from prototypes to production.
