MCP-First Design
This use case covers designing capabilities for MCP clients first, then mapping them to underlying APIs—a clean AI-native integration model. Rather than retrofitting existing API designs for AI consumption, MCP-first design starts with the tools, resources, and prompts that AI agents need and works backward to the implementation.
Teams need to define MCP exposure with server-level descriptions and tool-level contracts—name, description, input and output parameters—before wiring to backend systems. MCP transports (HTTP streaming and Standard IO) allow the same capability to run in remote server mode or local IDE and agent mode. Mocking mode enables design and testing before any upstream API exists.
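A tool-level contract of this kind can be sketched as plain data before any backend exists. The sketch below is illustrative only—the field names, tool name, and manifest shape are assumptions for this example, not a specific MCP SDK's API:

```python
# Illustrative MCP-first tool contract, defined before any backend wiring.
# All names here (get_order_status, order-tools, field keys) are hypothetical.

get_order_status = {
    "name": "get_order_status",
    "description": "Return the current fulfillment status for a customer order.",
    "input_schema": {  # JSON Schema describing typed input parameters
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
    "output_schema": {  # JSON Schema describing typed output parameters
        "type": "object",
        "properties": {
            "status": {"type": "string"},
            "eta_days": {"type": "integer"},
        },
    },
}

# Server-level description groups the tool contracts into one capability.
server_manifest = {
    "name": "order-tools",
    "description": "Task-oriented order tools for AI agents.",
    "tools": [get_order_status],
}
```

Because the contract is pure data, it can be reviewed, versioned, and tested against agent tasks before a single backend call is wired.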
Pain Points
- API-first designs do not translate cleanly to AI-native tool contracts
- Teams retrofit existing API endpoints as MCP tools with poor results
- No way to design and test MCP tools before backend implementation exists
- MCP servers are tightly coupled to specific API implementations
- Same capability cannot run in both remote and local modes
- No separation between the AI-facing interface and the backend wiring
Expected Outcomes
- AI-native tool contracts designed before backend implementation
- Mocking mode enables testing without upstream APIs
- Same capability runs in remote server and local IDE/agent modes
- Clean separation between MCP interface and API implementation
- Tools wired to upstream APIs without changing those APIs
Narrative
An organization has been generating MCP servers from existing OpenAPI specifications, but the results are mediocre. Tools mirror API endpoints one-to-one—agents must understand REST conventions, pagination patterns, and resource relationships that make sense for HTTP clients but not for AI tasks.
The shift begins when teams design MCP exposure first. Server-level descriptions and tool-level contracts define what AI agents need—task-oriented tools with clear names, descriptions, and typed input and output parameters. Mocking mode with static output parameter values enables testing before any upstream API is wired.
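Mocking mode of this sort can be sketched as a dispatcher that returns static output parameter values until a real backend exists. The tool name, values, and `call_tool` helper below are hypothetical, shown only to illustrate the idea:

```python
# Sketch of a mocking mode: each tool carries static output values that are
# returned until an upstream API is wired. Names and values are illustrative.

MOCK_OUTPUTS = {
    "get_order_status": {"status": "shipped", "eta_days": 2},
}

def call_tool(name: str, arguments: dict, *, mock: bool = True) -> dict:
    """Dispatch a tool call; in mock mode, return the static output values."""
    if mock:
        # Agents can exercise the contract end to end against these values.
        return MOCK_OUTPUTS[name]
    raise NotImplementedError("no upstream API wired yet")

result = call_tool("get_order_status", {"order_id": "A-1001"})
# result == {"status": "shipped", "eta_days": 2}
```

This lets teams validate tool names, descriptions, and output shapes with real agent sessions before committing to any backend design.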
Once the AI-native interface is validated, teams wire tools to consumed operations via a direct call or orchestrated steps. The same capability runs in remote server mode (HTTP streaming) for production and local mode (Standard IO) for IDE and agent development. No changes to upstream APIs are required.
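Wiring and deployment-mode selection can be sketched as two small adapters. Everything below—the upstream URL, field names, and the `serve` helper—is a hypothetical illustration of the pattern, not a real endpoint or SDK call:

```python
# Sketch of wiring a validated tool contract to an upstream API, and of
# serving the same capability over either transport. All names are hypothetical.

import json
import urllib.request

def get_order_status_impl(order_id: str) -> dict:
    """Adapt a hypothetical upstream REST endpoint to the AI-facing contract."""
    url = f"https://orders.example.internal/api/v2/orders/{order_id}"
    with urllib.request.urlopen(url) as resp:
        upstream = json.load(resp)
    # Map upstream field names onto the tool's output parameters; the
    # upstream API itself is left unchanged.
    return {"status": upstream["state"], "eta_days": upstream["delivery_estimate"]}

def serve(transport: str) -> str:
    """Select a deployment mode for the same capability definition."""
    if transport == "http":    # remote server mode (HTTP streaming)
        return "listening on HTTP stream"
    if transport == "stdio":   # local IDE/agent mode (Standard IO)
        return "reading frames from stdin"
    raise ValueError(f"unknown transport: {transport}")
```

The contract stays constant; only the adapter behind it and the transport in front of it change between environments.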
The result is a clean separation between what AI agents see and how backend systems work. MCP-first design produces tools that match how agents think about tasks, not how APIs happen to be structured. Teams iterate on the AI interface independently of backend implementation, and the same capability definition serves every deployment mode.