APIs power modern software, but AI tools can't use them natively. Your OpenAPI spec describes endpoints and data, yet it lacks the execution context that LLMs need to reason and act. Teams end up building plugins, prompt chains, or brittle wrappers just to connect APIs with AI, and every new model or agent means more glue code.
The solution: generate an MCP server from your OpenAPI spec. The Model Context Protocol (MCP) turns your API definition into a structured, executable interface that any AI agent (Claude, ChatGPT, Copilot, Cursor, or LangChain) can discover and call directly, without custom integration.
This guide walks through the full process: from choosing the right generator, to configuring authentication, to deploying your MCP server in production. Whether you want a one-click conversion or full manual control, you'll find the approach that fits your stack.
What is MCP (Model Context Protocol)?
MCP (Model Context Protocol) is an open standard that lets AI models call external tools and APIs through a structured, discoverable interface. Think of it as a universal adapter between LLMs and your backend services.
Without MCP, connecting an AI agent to an API requires custom glue code, prompt engineering, function-calling wrappers, and brittle integrations that break when the API changes. MCP replaces all of that with a single protocol that handles:
- Tool discovery: Agents automatically learn what operations are available
- Schema validation: Inputs and outputs are type-checked before execution
- Transport flexibility: Works over stdio (local), HTTP, or server-sent events (remote)
- Authentication forwarding: Passes OAuth tokens, API keys, or JWTs securely
MCP servers are now supported by Claude, ChatGPT, GitHub Copilot, Cursor, VS Code, and most major AI agent frameworks. The protocol was open-sourced by Anthropic in late 2024 and has since become the de facto standard for agentic tool use.
Why convert OpenAPI to MCP? Your OpenAPI spec already describes your API's endpoints, parameters, and schemas. An MCP server wraps that same information in a format AI agents can execute, turning documentation into action.
What is OpenAPI Specification (OAS)?
The OpenAPI Specification (OAS) is a language-independent standard for defining HTTP APIs in a format that is both machine-readable and human-friendly. It defines the endpoints, request parameters, responses, authentication methods, and data formats in a structured format (JSON or YAML).
Overview of the MCP server and its role in AI/LLM integrations
An MCP server implements the Model Context Protocol that allows AI systems to call backend functions, access data, and perform actions via a standardized interface. It handles discovery of tools, execution of functions, and transmission of structured results over transports such as HTTP or stdio.
Key roles in AI and LLM integrations:
- Unified interface: Solves the “N×M problem” by replacing custom model–tool integrations with a single standardized layer.
- Transport flexibility: Uses JSON-RPC over stdio, HTTP, or server-sent events depending on local or remote setups.
- Tool discovery: Allows agents to discover functions and expose them as callable tools.
- Structured execution: Ensures predictable input/output handling, error reporting, and schema validation.
- Scalability: Reduces context overhead, avoids duplicated work, and enables multi-step, cross-tool workflows.
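To make tool discovery concrete, here is a minimal sketch of the JSON-RPC exchange behind it. The `tools/list` method name and response shape follow the MCP specification; the `createInvoice` tool is an illustrative placeholder, not part of any real server.

```python
import json

def build_tools_list_request(request_id: int) -> str:
    """Serialize a JSON-RPC 2.0 request asking an MCP server for its tools."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
    })

def parse_tools_list_response(raw: str) -> list:
    """Extract the advertised tool names from a tools/list response."""
    response = json.loads(raw)
    return [tool["name"] for tool in response["result"]["tools"]]

# Example of a response an MCP server might return:
raw_response = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "createInvoice", "inputSchema": {"type": "object"}},
        ]
    },
})

print(parse_tools_list_response(raw_response))  # ['createInvoice']
```

An agent runs this exchange once at connection time, then uses the returned schemas to decide which tool to call and how to shape its arguments.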
Prerequisites: What You Need Before Converting OpenAPI to MCP
Before converting an OpenAPI spec into an MCP server, you need a few technical foundations in place. These include schema compatibility, environment setup, and a working understanding of how authentication maps across tools. The following are key prerequisites to get started:
Tools and libraries required
Setting up the right environment is the first step in preparing OpenAPI specs for MCP integration. These tools ensure the spec is readable, compatible with supported transports, and ready to serve structured responses in a machine-friendly format.
Core MCP server generators
- openapi-mcp-generator (TypeScript/Node.js): Converts OpenAPI 3.x specs into MCP servers with typed Zod validation. Supports stdio and HTTP transports, plus a CLI for proxying and testing.
- openapi-to-mcpserver (Higress, Go): Generates MCP server configuration files from OpenAPI specs written in JSON or YAML. Supports flags like --validate, --server-name, and --output, and is designed for Go environments.
- openapi-mcp (jedisct1, Go): Parses OpenAPI definitions and exposes them over MCP-compatible transports such as HTTP and stdio. Works well for CLI applications or embedded runtimes.
- mcpgen (lyeslabs, Go): Scaffolds MCP servers by generating handler templates, tool definitions, and registration logic based on OpenAPI input.
Validation and schema tools
- openapi-spec-validator (Python): Checks OpenAPI 3.x files for structural issues, including missing operationIds, broken references, and unsupported schema features.
- Swagger CLI / Redocly CLI: Help lint and bundle OpenAPI specs before conversion. Ensure spec readiness before passing it into a generator.
Runtime and language environments
- Node.js (v18+): Required for tools like openapi-mcp-generator, especially those using Zod schemas and TypeScript runtimes.
- Go (v1.20+): Required to run tools such as openapi-to-mcpserver, openapi-mcp, and mcpgen. Most repositories provide prebuilt binaries or support go install for direct CLI access.
Knowledge of OpenAPI, MCP, and authentication basics
A successful conversion depends on more than just tooling. You’ll need a clear understanding of how OpenAPI defines API structure, how MCP executes functions, and how authentication maps across both layers.
Here’s what you need to know to approach this correctly:
OpenAPI
Each endpoint should include a unique operationId, typed parameters, and response schemas tied to status codes. OpenAPI 3.1 supports newer JSON Schema features, but not all generators handle them. Check generator compatibility if your spec uses oneOf, anyOf, or const.
Validate components to avoid broken references, circular schemas, or vague parameter types. Adding summaries, descriptions, and examples improves conversion outcomes and tool usability.
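For reference, a minimal well-formed operation might look like the following sketch; the `/invoices` endpoint, its parameters, and the `InvoiceList` schema are hypothetical placeholders:

```yaml
paths:
  /invoices:
    get:
      operationId: listInvoices
      summary: List invoices for a customer
      description: Returns invoices for the given customer ID, paginated by offset.
      parameters:
        - name: customerId
          in: query
          required: true
          schema:
            type: string
        - name: offset
          in: query
          required: false
          schema:
            type: integer
            default: 0
      responses:
        "200":
          description: A page of invoices
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/InvoiceList"
```

Each of these fields carries over into the generated MCP tool: the operationId becomes the tool name, the summary and description guide the LLM, and the schemas drive input validation.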
MCP
MCP servers expose a manifest (typically JSON) that lists available tools, their input/output schemas, and runtime behavior. Metadata such as chainable or side_effects helps LLMs plan calls safely and efficiently.
Agents use this manifest to reason over available operations, evaluate compatibility, and sequence multi-step actions. Schema validation uses JSON Schema; Zod-based validation is specific to TypeScript implementations.
{
  "name": "createInvoice",
  "input_schema": { "$ref": "#/components/schemas/InvoiceRequest" },
  "output_schema": { "$ref": "#/components/schemas/InvoiceResponse" },
  "chainable": true,
  "side_effects": false
}
Authentication
Define securitySchemes (API keys, OAuth2, JWT) within your OpenAPI spec. Many tools pass tokens through headers such as Authorization: Bearer <token>. MCP servers must validate tokens or forward them downstream, and credentials should be injected securely (via environment variables, secret managers, or vaults) rather than hardcoded.
How to Generate an MCP Server from an OpenAPI Spec (Step-by-Step)
Converting an OpenAPI spec into a functional MCP server requires a structured setup, tool configuration, and runtime validation. Each stage builds toward generating a tool-aware, protocol-compliant server that integrates with real-world LLM workflows.
Here’s how to convert OpenAPI specs into an MCP server:
Step 1: Project setup and folder structure
Start by setting up your project directory. If you're using a Node.js-based generator like openapi-mcp-generator, most of the structure will be scaffolded automatically. Go-based generators like mcpgen follow Go's conventions, and Python frameworks such as fastmcp use a FastAPI-style layout.
A typical Node.js MCP project includes:
- mcp-server/: Routing logic, scopes, and resource handlers
- cli/: CLI tools for mocking, schema bundling, and testing
- types/, models/: TypeScript definitions and schema bindings
- shared.ts, tools.ts: Utility functions used across transports
Step 2: Install and configure dependencies
Install the generator CLI for your chosen runtime.
For Node.js/TypeScript (openapi-mcp-generator):
npm install --save-dev openapi-mcp-generator
Add a script in package.json for repeatable builds:
"scripts": { "generate:mcp": "openapi-mcp-generator --input openapi.yaml --output src/" }
For Go (mcpgen):
go install github.com/lyeslabs/mcpgen@latest
Use .env files to store base URLs and tokens during local runs. Most generators also support config files (mcp.config.ts, .mcpgenrc) to keep options consistent across environments. Pin versions in package.json or go.mod to avoid unexpected changes as MCP tooling evolves.
Step 3: Define authentication and environment variables
MCP generators support API keys, bearer tokens, and OAuth2 flows. These are defined in your OpenAPI spec through securitySchemes and linked to each operation’s security requirements.
Example (API key + JWT bearer) (Speakeasy):
components:
  # The definition of the used security schemes
  securitySchemes:
    APIKey:
      type: apiKey
      in: header
      name: X-API-Key
    Bearer:
      type: http
      scheme: bearer
      bearerFormat: JWT
security:
  - APIKey: []
  - Bearer: []
Generators map these schemes to environment variables (e.g., X_API_KEY, BEARER_TOKEN). At runtime, you inject values via .env, shell exports, or secret managers such as Docker Secrets.
For example, openapi-mcp-generator supports token injection using .env files, but token handling may differ across generators (not all map schemes automatically).
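As a rough sketch of that mapping, a generator might normalize a header name like `X-API-Key` to an environment variable like `X_API_KEY` and inject it at request time. The exact naming convention varies between generators; the functions below are illustrative, not any tool's real API.

```python
import os
import re

def scheme_to_env_var(header_name: str) -> str:
    """Normalize a header name like 'X-API-Key' to an env var like 'X_API_KEY'."""
    return re.sub(r"[^A-Za-z0-9]", "_", header_name).upper()

def build_auth_headers(schemes: dict) -> dict:
    """schemes maps header names to env var names; unset credentials are skipped."""
    headers = {}
    for header, env_var in schemes.items():
        value = os.environ.get(env_var)
        if value:
            headers[header] = value
    return headers

os.environ["X_API_KEY"] = "demo-key"  # normally set via .env or a secret manager
print(scheme_to_env_var("X-API-Key"))                  # X_API_KEY
print(build_auth_headers({"X-API-Key": "X_API_KEY"}))  # {'X-API-Key': 'demo-key'}
```

Skipping unset variables rather than sending empty headers makes misconfiguration fail loudly at the upstream API instead of silently sending blank credentials.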
Step 4: Generate MCP Server from OpenAPI Spec
Before generation, validate your OpenAPI spec with Redocly CLI or Swagger CLI to avoid missing operationId values or broken references. A clean spec prevents runtime errors.
Node.js / TypeScript
For example, using Speakeasy’s openapi-mcp-generator:
npx openapi-mcp-generator generate \
--input openapi.yaml \
--output ./mcp-server \
--name "custom-mcp-server"
It scaffolds typed Zod schemas, transport bindings (HTTP, stdio, streaming), and request handlers.
For Go-based tools like mcpgen, the command might look like:
go run github.com/lyeslabs/mcpgen \
--spec openapi.yaml \
--out ./generated-server
openapi-to-mcpserver (Higress): Generates config-driven MCP servers for cloud-native runtimes.
Each generator produces a manifest, schemas, and handlers mapped to your API operations. Use flags like --proxy (Node.js) or --validate (Go/Higress) to simulate request routing before moving on to testing.
Step 5: Test and validate the MCP server
Once generated, your MCP server must be tested to confirm that the spec was converted correctly. Start with CLI tools.
When using openapi-mcp, run commands like:
openapi-mcp validate ./openapi.yaml
openapi-mcp lint ./openapi.yaml --rules=minimal-metadata
These checks catch missing operationId values, broken $ref, and unsupported schema features.
Next, run regression tests by calling each generated tool with valid and invalid inputs. Confirm that error responses are structured and match your OpenAPI definitions.
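A regression test for one tool can be as small as the following sketch. The `create_invoice` handler and its validation rules are placeholders standing in for whatever your generator produced; the point is the assertion shape: valid input yields a result, invalid input yields a structured error rather than a stack trace.

```python
def create_invoice(payload: dict) -> dict:
    """Stand-in for a generated MCP tool handler with minimal validation."""
    if not isinstance(payload.get("customerId"), str):
        return {"error": {"code": "invalid_input",
                          "message": "customerId must be a string"}}
    return {"result": {"invoiceId": "inv_001",
                       "customerId": payload["customerId"]}}

valid = create_invoice({"customerId": "cust_42"})
invalid = create_invoice({"customerId": 42})

assert "result" in valid
assert invalid["error"]["code"] == "invalid_input"  # structured, not a raw traceback
print("regression checks passed")
```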
For deeper validation, register the server in an agent environment such as Claude Desktop or Cursor. This verifies tool discovery and runtime execution in real-world workflows.
In production, log metrics such as response time, error rates, and schema mismatches. Tracking these trends helps detect drift early and ensures MCP servers remain reliable as your API evolves.
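Aggregating those metrics from request logs can be as simple as this sketch; the log field names here are assumptions, not a standard format:

```python
import statistics

# Hypothetical per-request log entries emitted by an MCP server
logs = [
    {"tool": "createInvoice", "latency_ms": 120, "ok": True},
    {"tool": "createInvoice", "latency_ms": 340, "ok": True},
    {"tool": "createInvoice", "latency_ms": 95,  "ok": False},  # schema mismatch
]

error_rate = sum(1 for entry in logs if not entry["ok"]) / len(logs)
median_latency = statistics.median(entry["latency_ms"] for entry in logs)

print(f"error rate: {error_rate:.0%}, median latency: {median_latency} ms")
```

In a real deployment you would feed the same fields into Prometheus or OpenTelemetry rather than computing them inline, but the signals to watch are the same.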
Step 6: Deploying the MCP server (local & cloud options)
Once your server is generated and tested, you can deploy it locally for development or scale it into production on cloud runtimes.
Local deployment
- FastMCP + FastAPI (Python): Wraps your FastAPI application and exposes your OpenAPI-defined tools on /mcp, ready for agents like Claude or Cursor.
Cloud deployment
- Go / Higress: openapi-to-mcpserver generates configs deployable to cloud-native gateways, making MCP available behind Kubernetes or service meshes.
- Node.js: Run MCP servers in Docker or on serverless platforms like Cloudflare Workers.
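For the Docker route, a minimal image for a Node.js server might look like this sketch; the paths and the entrypoint file name are assumptions about your generated output, not a fixed convention:

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY mcp-server/ ./mcp-server/
# Credentials should be injected at runtime, never baked into the image
ENV NODE_ENV=production
CMD ["node", "mcp-server/index.js"]
```

Keeping secrets out of the image and passing them via `docker run -e` or an orchestrator's secret store lines up with the credential-injection advice earlier in this guide.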
Advanced MCP Server Configuration: Filters, Auth, and Transport Modes
Beyond basic setup, MCP servers support advanced features that improve control, performance, and integration flexibility. You can inject custom logic, apply strict authentication, and switch transport modes depending on runtime needs, tool design, or deployment environment.
The following features expand your server’s configuration scope:
1. Filtering and customizing endpoints
Tools like openapi-mcp-generator support an x-mcp vendor extension to include or exclude endpoints per operation, path, or root level. FastMCP allows tag-based filtering and custom route maps to rename, disable, or proxy endpoints before runtime.
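As an illustration, an x-mcp exclusion might look like the following; the exact extension shape varies between generators, and the paths shown are placeholders:

```yaml
paths:
  /admin/users:
    get:
      operationId: listAdminUsers
      x-mcp:
        enabled: false   # keep internal/admin routes out of the MCP tool list
  /invoices:
    get:
      operationId: listInvoices
      x-mcp:
        enabled: true
```

Check your generator's documentation for the exact keys it honors before relying on this shape.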
2. Adding authentication layers (OAuth, API Keys, JWT)
Use securitySchemes in your OpenAPI spec to define OAuth 2.0, JWT bearer tokens, or API keys. Generators like openapi-mcp-generator map these to MCP middleware hooks for runtime checks.
3. Transport modes: stdio, web server, streamable HTTP
Each transport mode defines how the MCP server communicates with clients. Your choice impacts local testing, browser integrations, and cloud compatibility.
- Stdio (Default): Uses standard input/output streams. Ideal for local development or integrating MCP with agents and CLI tools.
- Web Server with SSE: Starts an HTTP server using Hono. Enables REST input, SSE output, multiple connections, and a built-in browser test UI.
- StreamableHTTP: Implements stateful JSON-RPC over HTTP POST. Supports session headers, error handling, and correct status codes by default.
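The stdio transport reduces to a simple loop: one JSON-RPC message per line on stdin, one response per line on stdout. The sketch below shows only the dispatch shape, with a placeholder tool registry; a real server implements the full MCP handshake and method set.

```python
import json

def handle_message(raw: str, tools: dict) -> str:
    """Dispatch a single JSON-RPC message and return the serialized response."""
    msg = json.loads(raw)
    if msg.get("method") == "tools/list":
        result = {"tools": [{"name": name} for name in tools]}
    else:
        result = {"error": "unknown method"}
    return json.dumps({"jsonrpc": "2.0", "id": msg.get("id"), "result": result})

# A server would loop over sys.stdin; one message is shown for illustration:
reply = handle_message('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}',
                       {"createInvoice": None})
print(reply)
```

Because the transport is just framed JSON, the same dispatch logic can sit behind stdio locally and HTTP or SSE in production.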
4. Debugging and monitoring MCP servers
Enable --verbose to trace server activity. Log request-response payloads for schema validation. Use the built-in test UI for live debugging. For advanced monitoring, integrate tools like Prometheus or OpenTelemetry to capture metrics, logs, and traces across production MCP deployments.
Integrating Your MCP Server with AI Agents and Developer Tools
MCP servers act as bridges between APIs and intelligent development environments. They let code editors, copilots, and enterprise tools securely execute functions, improving automation, productivity, and control across both local and large-scale deployments.
Here are the main integrations with editors, agents, and industries:
Using MCP servers with AI code editors (Cursor, VS Code, Copilot)
MCP servers extend code editors by exposing APIs as callable tools. In Cursor or VS Code, developers can run tasks like file operations or schema validation directly through MCP. GitHub Copilot (via MCP support) integrates these servers to provide context-aware completions and automate routine coding workflows.
AI agent and LLM integrations (ChatGPT, Claude, Perplexity)
MCP servers provide structured interfaces that AI agents can call directly. ChatGPT, Claude, and Perplexity use these servers to run queries, fetch data, and perform actions. This ensures reliability, enforces schema validation, and reduces integration errors compared to prompt-based methods.
Example use cases for regulated industries (finance, healthcare)
- MCP frameworks such as HMCP (Healthcare MCP) connect AI agents to FHIR APIs for claims and patient data. They enforce HIPAA compliance through encryption, logging, and secure audit trails.
- MCP enables agents to integrate with compliance platforms for transaction checks and reporting. Financial institutions use it to meet AML and PCI DSS requirements reliably.
Best Practices for OpenAPI-to-MCP Conversion
MCP servers need structured practices to stay secure, performant, and maintainable. Addressing security risks, tuning execution speed, and managing versions keeps servers reliable in production and aligned with enterprise compliance requirements.
The following practices help optimize MCP server deployments:
Security considerations
MCP servers must handle sensitive operations safely. Use OAuth2 or API keys for access control, encrypt all traffic, and configure detailed audit logs. Regular security reviews and compliance checks prevent breaches and maintain trust in regulated environments.
Performance optimization
Efficient MCP servers reduce latency and resource consumption. Cache frequent responses, streamline schema validation, and use lightweight transports. Monitor response times and optimize concurrency to ensure predictable performance across both local developer setups and high-volume enterprise deployments.
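For the caching advice above, memoizing idempotent, read-only tool calls is a quick win; the example below uses Python's built-in `functools.lru_cache` as a sketch, with a placeholder for an expensive upstream call. Never cache operations with side effects.

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def get_exchange_rate(currency: str) -> float:
    # Placeholder for an expensive upstream API call
    rates = {"EUR": 1.08, "GBP": 1.27}
    return rates.get(currency, 1.0)

get_exchange_rate("EUR")
get_exchange_rate("EUR")  # served from cache on the repeat call
print(get_exchange_rate.cache_info().hits)  # 1
```

In a real server you would also bound cache entries by time (TTL), since upstream data can change between calls.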
Versioning and maintaining MCP servers
Consistent versioning avoids compatibility issues between clients and tools. Use semantic versioning for API changes, document revisions clearly, and maintain backward compatibility when possible. Regular updates and automated tests ensure MCP servers stay stable as specs and integrations evolve.
Common Mistakes When Converting OpenAPI to MCP
Even with good tooling, these pitfalls trip up teams regularly:
1. Missing or duplicate operationId values
Every OpenAPI operation needs a unique operationId. MCP generators use this as the tool name. If it's missing, the generator either fails silently or auto-generates names like get_api_v1_users_id that confuse LLMs. Before converting, run a linter:
redocly lint openapi.yaml --rule operation-operationId-unique
2. Exposing too many endpoints
APIs with 100+ operations create an overwhelming tool list. LLMs perform worse when they have to choose from dozens of similar tools. Curate your endpoints: expose only the operations agents actually need. Use x-mcp vendor extensions or tag-based filtering to exclude admin, internal, or deprecated routes.
3. Vague or missing descriptions
LLMs rely on summary and description fields to understand what a tool does. An endpoint described as "Get data" tells the model nothing. Write descriptions that explain what the operation does, when to use it, and what it returns:
summary: List active invoices for a customer
description: >
  Returns all invoices with status "active" for the given customer ID.
  Use this when the user asks about outstanding or unpaid invoices.
  Results are paginated; use the `offset` parameter for additional pages.
4. Ignoring authentication mapping
Your OpenAPI spec may define securitySchemes, but not all generators map them automatically. Verify that tokens are actually forwarded at runtime, not just declared in the spec. Test with an expired token to confirm the MCP server returns a proper 401, not a silent failure.
5. Not testing with a real AI agent
Unit tests validate schemas. But MCP servers are consumed by AI agents, not humans. Always test with an actual client (Claude Desktop, Cursor, or a LangChain agent) to verify that:
- Tool discovery returns clean, descriptive names
- Input schemas are parseable by the model
- Error responses are structured (not raw stack traces)
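For instance, registering a local stdio server in Claude Desktop's claude_desktop_config.json looks roughly like this; the server name, entry path, and environment variable are placeholders for your own setup:

```json
{
  "mcpServers": {
    "my-mcp-server": {
      "command": "node",
      "args": ["./mcp-server/index.js"],
      "env": { "X_API_KEY": "your-key-here" }
    }
  }
}
```

Once registered, restart the client and confirm your tools appear in its tool list with the names and descriptions you expect.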
6. Skipping spec validation before generation
A spec that renders fine in Swagger UI can still have broken $ref pointers, circular schemas, or unsupported constructs. Always validate first:
npx @redocly/cli lint openapi.yaml
# or
swagger-cli validate openapi.yaml
Best Tools to Generate MCP from OpenAPI (Comparison)
Tools and libraries help convert OpenAPI specifications into MCP servers efficiently. They automate scaffolding, validation, and configuration, giving developers faster paths to stable deployments without manually coding every handler or schema binding. Here are the leading tools supporting MCP conversion:
Stainless
Stainless converts OpenAPI specifications into MCP servers built in TypeScript. It offers endpoint filtering and schema validation to keep implementations consistent. Teams already using Stainless SDKs often adopt it for reliable type safety and streamlined workflows.
Speakeasy
Speakeasy maintains the openapi-mcp-generator for Node.js developers. It generates servers with typed Zod validation and multiple transport options for flexibility. A built-in CLI supports testing, schema proxying, and quick customization of the generated server.
Higress
Higress provides the openapi-to-mcpserver utility for Go-based setups. It translates OpenAPI YAML or JSON into MCP configuration with validation and naming flags. This makes it useful for cloud-native deployments where strong consistency is required.
FastMCP
FastMCP extends FastAPI applications into MCP-compatible servers. It supports stdio, web server, and StreamableHTTP transports depending on runtime requirements. Python teams use it to integrate AI agents quickly without modifying core application logic.
GitHub generators
Community projects like mcpgen and openapi-mcp offer flexible scaffolding. They create handlers, schema bindings, and transport logic directly from OpenAPI specs. Developers often expand these outputs to meet enterprise or industry-specific needs.
Turn Any API into an MCP Server in One Click with DigitalAPI
When you have thousands of APIs, converting each of them to MCP can be time-consuming and expensive. But it doesn't have to be a manual effort: with DigitalAPI's service hub, you can select any API in your library and convert it to an MCP server with one click, in under a minute.
At the same time, these APIs will be ready to be used by API-GPT, our AI agent built on your APIs. It will allow you to perform any task, fetch information, automate actions, and much more with a simple natural language prompt.
FAQs
1. What does it mean to generate an MCP server from OpenAPI?
It means taking your existing OpenAPI specification, which describes your API's endpoints, parameters, and schemas, and converting it into an MCP server that AI agents can call directly. The MCP server acts as a bridge: it reads the spec, registers each operation as a "tool," and handles execution, validation, and response formatting over a standardized protocol.
2. Can I generate an MCP server without writing code?
Yes. DigitalAPI lets you upload an OpenAPI spec and generate a fully configured MCP server with one click, no coding required. Other tools like Stainless also offer low-code generation paths. For teams that want manual control, CLI-based generators (Speakeasy, Higress, FastMCP) require some configuration but handle most of the scaffolding automatically.
3. What's the difference between MCP and a REST API?
A REST API exposes endpoints that humans (or code) call with HTTP requests. An MCP server wraps those same operations in a protocol that AI models understand natively, including tool discovery, schema validation, and structured error handling. MCP doesn't replace your API; it makes your API callable by AI agents without custom integration code.
4. Which AI assistants and tools support MCP?
As of 2026, MCP is supported by Claude (Anthropic), ChatGPT (OpenAI), GitHub Copilot, Cursor, VS Code (via extensions), Windsurf, and most agent frameworks including LangChain, LangGraph, CrewAI, and AutoGen. The ecosystem is growing rapidly.
5. Can I run an MCP server locally?
Yes. Most generators default to stdio transport, which runs entirely on your machine. This is the fastest way to test and debug. You can also run HTTP or SSE transports locally for browser-based testing before deploying to production.
6. How do I deploy an MCP server to production?
Options include:
- Docker containers: works with any cloud provider
- Cloudflare Workers: Stainless supports direct deployment
- Serverless platforms: Vercel (with @vercel/mcp-adapter), AWS Lambda
- Kubernetes: Higress generates cloud-native configs
- DigitalAPI: managed hosting with one-click deployment
7. What tools support OpenAPI-to-MCP conversion?
The leading tools are DigitalAPI (one-click, any language), Stainless (TypeScript), Speakeasy (TypeScript), FastMCP (Python), Higress (Go), and several open-source generators on GitHub. See the comparison above for a detailed breakdown.
8. How secure is an MCP server for enterprise use?
MCP servers can be configured with OAuth2, JWT, API keys, TLS encryption, and audit logging. For regulated industries, you can add HIPAA-compliant logging, SOC 2 controls, or PCI DSS validation layers. Security depends on your configuration; the protocol itself is transport-agnostic and supports encrypted channels.
9. Does MCP work with GraphQL or gRPC, or only REST/OpenAPI?
MCP is protocol-agnostic at the tool level: you can build MCP servers that call GraphQL or gRPC backends. However, automatic generation from specs is most mature for OpenAPI. GraphQL and gRPC support is emerging (Specmatic Genie supports gRPC, for example), but most generators today focus on OpenAPI 3.x.
10. What happens when my API changes?
Re-run the generator against your updated OpenAPI spec. Most tools support regeneration without losing custom configuration. Use semantic versioning for your MCP server and test with downstream agents after each update. DigitalAPI handles this automatically: spec changes trigger MCP server regeneration.
11. Is MCP the same as function calling?
No. Function calling (as in OpenAI's or Anthropic's APIs) is a model-level feature where the LLM outputs structured function calls. MCP is a server-side protocol that standardizes how those function calls are discovered, routed, executed, and responded to. MCP works with function calling: the model uses function calling to invoke MCP tools.
12. Can I use MCP for non-AI use cases?
Yes. While MCP was designed for AI agent integrations, the protocol works anywhere you need standardized, discoverable, schema-validated tool execution. Enterprise integration platforms, workflow automation, and internal tooling dashboards can all benefit from MCP's structured interface.