Need to Separate MCP Discovery Registry from Package Distribution
The MCP discovery registry (which servers are approved) and the package registry (where the binary actually lives) are two different systems with different governance requirements.
Persona Story:
Maya, the developer experience & AI engineering lead, has learned the hard way that the MCP registry is not the package registry. The MCP registry is an allow-list — metadata about which servers are approved. The package registry is where the actual binary gets pulled from. Conflating the two leads to governance that works for neither discovery nor distribution.
Problem Context
- Discovery and distribution are answering different questions — “should I use this server” vs “where does the binary come from”
- Enterprises already have internal package registries (e.g., JFrog Artifactory, Sonatype Nexus) with existing vulnerability scanning, signing, and auth
- Pushing MCP package distribution into the discovery registry duplicates existing package-registry capability
- Pushing discovery metadata into the package registry loses the allow-list semantics that make IDE surfaces safe
- Developers need both pieces to work together: a filtered list in the IDE and a trusted source for the bits
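The two-question split above can be sketched as two separate lookups: the discovery registry answers the allow-list question, and only then does the install resolve against the enterprise package registry. This is an illustrative sketch, not any real registry API; all names and URLs are hypothetical.

```python
# Hypothetical sketch of the discovery/distribution split.
# The discovery registry answers "should I use this server?";
# the package registry answers "where do the bits come from?".

DISCOVERY_REGISTRY = {
    # MCP discovery registry: allow-list metadata only, no binaries.
    "github-mcp": {"approved": True, "package": "mcp/github-mcp"},
    "random-tool": {"approved": False, "package": "mcp/random-tool"},
}

# Internal package registry (e.g., an Artifactory/Nexus base URL) -- illustrative.
PACKAGE_REGISTRY_BASE = "https://artifacts.internal.example.com"

def resolve_install(server_name: str) -> str:
    """Check the allow-list first, then build the pull URL from the
    enterprise package registry -- never the public internet."""
    entry = DISCOVERY_REGISTRY.get(server_name)
    if entry is None or not entry["approved"]:
        raise PermissionError(f"{server_name} is not on the MCP allow-list")
    return f"{PACKAGE_REGISTRY_BASE}/{entry['package']}"

print(resolve_install("github-mcp"))
```

Note that neither lookup duplicates the other: the allow-list holds only approval metadata and a package pointer, while scanning, signing, and auth stay with the package registry that already does them.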
Problem Impact
- Enterprises reinvent package-registry features inside their MCP registry, or skip package governance entirely
- Vulnerability scanning and signing from existing package infrastructure don’t flow through to MCP installs
- Confusing architectures where a single system tries to be both catalog and CDN
- IDE installs pull from the public internet by default because no one separated the two concerns
- Security teams cannot cleanly assign ownership — is the MCP team responsible, or the package-registry team?
Naftiko Today
- Executable YAML capability specs cleanly separate the capability definition (what the registry holds) from the runtime bits (what the engine executes) — the architectural split this problem calls for
- Docker-native deployment means the Naftiko Engine itself lives in the enterprise’s existing container registry, reusing established distribution governance
- External bindings for secrets and tokens keep credential distribution outside the capability spec, so the registry stays a definition store and nothing more
- HTTP API consumption with auth (Bearer, API key, header injection) lets capabilities point at upstream APIs without needing to bundle packages, reducing overlap between discovery and distribution
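Under this split, a capability spec holds only the definition the registry needs to index: a minimal sketch, with field names that are purely illustrative since the actual Naftiko spec schema is not shown here.

```yaml
# Hypothetical Naftiko capability spec -- field names are illustrative,
# not the actual schema. The registry stores only this definition:
# no bundled packages, no credentials.
capability: github-issues
consumes:
  http:
    base_url: https://api.github.com
    auth:
      type: bearer
      token: ${GITHUB_TOKEN}   # external binding, resolved at runtime, never stored
```

Because the spec points at an upstream API and binds secrets externally, the registry entry stays pure metadata and distribution remains wherever the enterprise already governs it.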
Naftiko Tomorrow
- Naftiko Shipyard MVP (Fleet Second Alpha) would provide a discovery-only registry that integrates with — rather than replaces — existing enterprise package registries
- OpenAPI-to-Naftiko import (Second Alpha) would let enterprises convert existing cataloged APIs into Naftiko capabilities without touching how the underlying services are distributed
- JSON Schema Store publication (GA) would establish Naftiko specs as a format that discovery registries can index without assuming anything about distribution
- Fabric capability discovery (v1.1) would federate across multiple discovery registries while leaving package distribution to whatever each enterprise already uses