Context Engineering
This use case focuses on delivering Model Context Protocol (MCP) servers on top of common private, public/first-party, and third-party APIs, as well as local SQL databases. It takes a domain-driven, declarative, and governed approach to right-sizing context windows via MCP while providing integrations across AI copilots and agents.
Teams need a reliable way to deliver MCP servers from internal and third-party APIs without having to discover each API and learn the technical details of its integration. This use case provides the fundamentals for safely integrating existing data and systems into AI copilots and agents.
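To make the API-to-MCP relationship concrete, here is a minimal sketch of mapping a single OpenAPI operation onto an MCP-style tool descriptor. The function name and the sample CRM operation are hypothetical illustrations, not part of any official MCP SDK; the descriptor shape follows the common tool pattern of a name, description, and JSON Schema input.

```python
# Hypothetical sketch: derive an MCP-style tool definition from one
# OpenAPI operation. `openapi_operation_to_tool` is an illustrative
# name, not an official MCP SDK function.

def openapi_operation_to_tool(path: str, method: str, operation: dict) -> dict:
    """Map a single OpenAPI operation to an MCP-style tool descriptor."""
    properties = {}
    required = []
    for param in operation.get("parameters", []):
        properties[param["name"]] = {
            "type": param.get("schema", {}).get("type", "string"),
            "description": param.get("description", ""),
        }
        if param.get("required"):
            required.append(param["name"])
    return {
        # Tool names are commonly derived from the operationId.
        "name": operation.get("operationId", f"{method}_{path}"),
        "description": operation.get("summary", ""),
        "inputSchema": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }

# Example: one operation from a hypothetical CRM API.
operation = {
    "operationId": "listContacts",
    "summary": "List CRM contacts",
    "parameters": [
        {"name": "limit", "in": "query", "required": True,
         "schema": {"type": "integer"}, "description": "Max results"},
    ],
}
tool = openapi_operation_to_tool("/contacts", "get", operation)
print(tool["name"])  # listContacts
```

Because the tool descriptor is derived mechanically from the spec, teams can expose an API to copilots without hand-writing integration code for each operation.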
Pain Points
- Copilot and MCP leadership mandates without clear implementation paths
- Unmanaged encryption across MCP server deployments
- Unmanaged discovery of available MCP servers
- Unmanaged authentication and authorization for AI integrations
- Unmanaged usage tracking and cost across AI services
- Nothing has made it to production—only a handful of low-risk, non-customer-facing projects
- Teams need more context from 3rd-party services and local SQL databases to make AI integrations work
Expected Outcomes
- 3rd-party data accessible within copilots
- 3rd-party MCP servers available and discoverable
- Budget managed across AI integrations
- Risk managed and visible
- Optimized SaaS usage through AI integration visibility
- Increased visibility, discovery, and reusability across MCP servers
Narrative
Organizations that have been successful with API-first transformation—with hundreds of distributed engineers producing OpenAPI-defined HTTP APIs alongside Webhooks, GraphQL, gRPC, and Kafka APIs—face a new mandate: go all-in on AI with a copilot for partners and eventually customers.
Teams spend months rallying to produce compelling prototypes, leveraging centralized API governance efforts already in motion. The challenge grows as they must govern not only third-party AI APIs (Claude, ChatGPT, Gemini, Hugging Face, Ollama, local SLMs/LLMs) but also the third-party APIs exposed to those LLMs.
Teams publish many different solutions that use existing OpenAPI and AsyncAPI specifications to generate MCP servers in various programming languages, but with no notion of discovery, consistency, or an organized approach to MCP deployment. Results are mixed across the copilots and agents deployed on top of those MCP servers. As with federated API development and API platform investment, more standardization of MCP deployment is needed, alongside SDKs, Jupyter Notebooks, and other clients.
Exploration makes it clear that more context is needed for AI integrations to work: real-world context from the third-party services used daily, local SQL databases, and legacy data access. The immediate priority is giving teams guidance on consistently and dynamically generating MCP servers from existing OpenAPI and AsyncAPI artifacts through a guided, declarative, and composite set of capabilities, mapped to the various sources that consumers define.
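A declarative, composite approach to generation might look like the following sketch. The configuration schema (`sources`, `context_budget_tokens`, and so on) is an assumed in-house convention for illustration, not a published standard; generation itself is out of scope, so the code only flattens the declared sources into the tools a generated server would expose.

```python
# Hypothetical declarative config for generating one MCP server from
# multiple sources (an OpenAPI artifact plus a local SQL database).
# Field names here are illustrative assumptions, not a standard.

mcp_config = {
    "name": "crm-context",
    "context_budget_tokens": 4000,  # right-size the context window
    "sources": [
        {"type": "openapi", "artifact": "crm-api.openapi.yaml",
         "operations": ["listContacts", "getContact"]},
        {"type": "sql", "connection": "sqlite:///local.db",
         "tables": ["orders"], "access": "read-only"},
    ],
}

def plan_tools(config: dict) -> list:
    """Flatten the declared sources into the tool names the generated
    MCP server would expose (code generation is out of scope here)."""
    tools = []
    for source in config["sources"]:
        if source["type"] == "openapi":
            tools.extend(source["operations"])
        elif source["type"] == "sql":
            tools.extend(f"query_{table}" for table in source["tables"])
    return tools

planned = plan_tools(mcp_config)
print(planned)  # ['listContacts', 'getContact', 'query_orders']
```

The design choice worth noting is that consumers declare only which operations and tables they need, which is how the context window stays right-sized rather than exposing every operation in the upstream spec.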
Once MCP server delivery is standardized, the focus shifts to making servers discoverable alongside other API resources, encouraging reuse and interoperability as well as faster discovery and onboarding.
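Listing MCP servers alongside existing API resources can be sketched as a shared catalog with tag-based lookup. The catalog entry schema and the `discover` helper are assumptions for illustration, not part of the MCP specification.

```python
# Hypothetical sketch of a discovery catalog where MCP servers are
# registered alongside existing API resources. The entry schema is an
# illustrative assumption, not from the MCP specification.

catalog = [
    {"kind": "api", "name": "crm-api", "spec": "crm-api.openapi.yaml"},
    {"kind": "mcp-server", "name": "crm-context",
     "upstream": "crm-api", "tags": ["crm", "contacts"]},
    {"kind": "mcp-server", "name": "orders-context",
     "upstream": "orders-db", "tags": ["orders", "sql"]},
]

def discover(catalog: list, tag: str) -> list:
    """Return the names of registered MCP servers matching a tag,
    so teams can reuse a server instead of regenerating their own."""
    return [entry["name"] for entry in catalog
            if entry["kind"] == "mcp-server" and tag in entry.get("tags", [])]

print(discover(catalog, "orders"))  # ['orders-context']
```

Keeping MCP servers in the same catalog as APIs means the discovery and onboarding workflows teams already use for API resources carry over unchanged.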