What is a Model Context Protocol Server?
A Model Context Protocol (MCP) server is a standardized way to extend AI assistants with custom functionality. Instead of building a separate plugin for each AI platform, an MCP server exposes a single interface that works across AI tools such as Claude, ChatGPT, and others.
Why Create Custom MCPs?
Custom MCPs allow you to integrate your specific tools, databases, APIs, and workflows directly into AI assistants, making them more powerful and tailored to your needs.
Prerequisites and Setup
Before you start building your custom MCP server, make sure you have:
- Node.js 18+ or Python 3.8+ installed
- Basic knowledge of TypeScript/JavaScript or Python
- An AI assistant that supports MCPs (Claude Desktop, VS Code, Cursor, Windsurf, etc.)
- A code editor (VS Code recommended)
Creating an MCP Server with TypeScript
Let's create a simple MCP server that provides weather information. This example will demonstrate the core concepts you need to build any custom MCP.
// package.json { "name": "weather-mcp-server", "version": "1.0.0", "type": "module", "dependencies": { "@modelcontextprotocol/sdk": "^2.1.0", "axios": "^1.7.7" }, "scripts": { "start": "node dist/index.js", "build": "tsc", "dev": "tsx src/index.ts" }, "devDependencies": { "tsx": "^4.7.0", "typescript": "^5.3.0" } }
// src/index.ts
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';
import axios from 'axios';

class WeatherMCPServer {
  private server: Server;

  constructor() {
    this.server = new Server(
      {
        name: 'weather-mcp-server',
        version: '1.0.0',
      },
      {
        capabilities: {
          tools: {},
        },
      }
    );

    this.setupToolHandlers();
  }

  private setupToolHandlers() {
    // Advertise the get_weather tool
    this.server.setRequestHandler(ListToolsRequestSchema, async () => {
      return {
        tools: [
          {
            name: 'get_weather',
            description: 'Get current weather for a city',
            inputSchema: {
              type: 'object',
              properties: {
                city: {
                  type: 'string',
                  description: 'City name',
                },
              },
              required: ['city'],
            },
          },
        ],
      };
    });

    // Handle tool calls
    this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;

      if (name === 'get_weather') {
        const { city } = args as { city: string };

        try {
          // Replace with your weather API key
          const response = await axios.get(
            `https://api.openweathermap.org/data/2.5/weather?q=${city}&appid=YOUR_API_KEY&units=metric`
          );

          const weather = response.data;
          return {
            content: [
              {
                type: 'text',
                text: `Weather in ${city}: ${weather.main.temp}°C, ${weather.weather[0].description}`,
              },
            ],
          };
        } catch (error) {
          const message = error instanceof Error ? error.message : String(error);
          return {
            content: [
              {
                type: 'text',
                text: `Error getting weather for ${city}: ${message}`,
              },
            ],
            isError: true,
          };
        }
      }

      throw new Error(`Unknown tool: ${name}`);
    });
  }

  async run() {
    const transport = new StdioServerTransport();
    await this.server.connect(transport);
  }
}

const server = new WeatherMCPServer();
server.run().catch(console.error);
Creating an MCP Server with Python
Here's the same weather MCP server implemented in Python using the MCP SDK:
# requirements.txt
mcp>=1.0.0
requests>=2.32.3
pydantic>=2.8.0
# weather_mcp_server.py
import asyncio

import requests
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

app = Server("weather-mcp-server")


@app.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="get_weather",
            description="Get current weather for a city",
            inputSchema={
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name"
                    }
                },
                "required": ["city"]
            }
        )
    ]


@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "get_weather":
        city = arguments.get("city")
        try:
            # Replace with your weather API key
            response = requests.get(
                f"https://api.openweathermap.org/data/2.5/weather?q={city}&appid=YOUR_API_KEY&units=metric"
            )
            response.raise_for_status()
            weather = response.json()
            return [
                TextContent(
                    type="text",
                    text=f"Weather in {city}: {weather['main']['temp']}°C, {weather['weather'][0]['description']}"
                )
            ]
        except Exception as e:
            return [
                TextContent(
                    type="text",
                    text=f"Error getting weather for {city}: {e}"
                )
            ]
    raise ValueError(f"Unknown tool: {name}")


async def main():
    # stdio_server() yields the read and write streams for the stdio transport
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())


if __name__ == "__main__":
    asyncio.run(main())
Testing Your Custom MCP Server
Once you've built your MCP server, you need to test it to ensure it works correctly:
- Build your server: Compile the TypeScript (npm run build) or install the Python dependencies (pip install -r requirements.txt)
- Configure your AI assistant: Add your MCP server to your AI tool's configuration (see the example config after this list)
- Test the connection: Verify the MCP server starts without errors
- Test functionality: Try calling your custom tools through the AI assistant
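For example, Claude Desktop reads MCP server definitions from its claude_desktop_config.json file. A minimal entry for the TypeScript server could look like the sketch below; the server label and file path are placeholders, so point them at your actual build output (for the Python version, set command to your Python interpreter and pass the script path in args):

// claude_desktop_config.json
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/absolute/path/to/weather-mcp-server/dist/index.js"]
    }
  }
}

After editing the config, restart the assistant so it picks up the new server.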
Deployment and Distribution
To share your custom MCP server with others, you can:
- Publish to npm (for TypeScript/JavaScript MCPs) under your own package name or scope
- Publish to PyPI (for Python MCPs) with proper MCP metadata
- Create a GitHub repository with clear installation and configuration instructions
- Submit to the official MCP servers repository on GitHub
- List it in a community MCP directory for broader discovery
- Make installation easy, for example by supporting a one-command launch via npx or uvx (see the npm sketch after this list)
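For npm distribution, one common pattern is to add a bin entry to package.json and a #!/usr/bin/env node shebang at the top of src/index.ts, so clients can launch the server with npx instead of a hard-coded file path. Here is a sketch of the relevant fields (the package name is simply the example used in this post):

// package.json (excerpt)
{
  "name": "weather-mcp-server",
  "bin": {
    "weather-mcp-server": "dist/index.js"
  },
  "files": ["dist"]
}

Once published, a client configuration can use "command": "npx" with "args": ["-y", "weather-mcp-server"] rather than an absolute path.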
Best Practices and Tips
✅ Do:
- Provide clear, descriptive tool names and descriptions
- Include comprehensive input validation (see the sketch after these lists)
- Handle errors gracefully with helpful messages
- Document your MCP server thoroughly
- Test with multiple AI assistants
❌ Don't:
- Expose sensitive information in error messages
- Make blocking calls without proper timeout handling
- Ignore input validation and sanitization
- Create overly complex tool interfaces
- Forget to handle edge cases and errors
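To make the validation and timeout points concrete, here is a sketch of how the get_weather logic from the TypeScript example could be hardened. It assumes an extra dependency, zod, which is not part of the earlier code, and reads the API key from an environment variable instead of hard-coding it:

// Sketch: stricter input handling for get_weather (assumes `zod` is installed)
import axios from 'axios';
import { z } from 'zod';

// Reject empty or absurdly long city names before touching the network
const GetWeatherArgs = z.object({
  city: z.string().min(1).max(100),
});

export async function getWeather(args: unknown): Promise<string> {
  const parsed = GetWeatherArgs.safeParse(args);
  if (!parsed.success) {
    return 'Invalid arguments: "city" must be a non-empty string.';
  }

  const { city } = parsed.data;
  try {
    const response = await axios.get(
      'https://api.openweathermap.org/data/2.5/weather',
      {
        // Passing the city via params lets axios handle URL encoding
        params: { q: city, appid: process.env.OPENWEATHER_API_KEY, units: 'metric' },
        timeout: 5000, // fail fast instead of blocking the server indefinitely
      }
    );
    const weather = response.data;
    return `Weather in ${city}: ${weather.main.temp}°C, ${weather.weather[0].description}`;
  } catch {
    // Keep the message generic so API keys and upstream error bodies never leak
    return `Could not fetch weather for ${city}. Please try again later.`;
  }
}

Reading the key from process.env.OPENWEATHER_API_KEY (a name chosen for this sketch) also keeps the secret out of source control, which pairs well with the rule about not exposing sensitive information.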