How We Designed a 62-Tool MCP Server Without Overwhelming AI Agents
When we built the CodeTidy MCP Server, we faced a design tension: more tools means more capability, but also more confusion for AI agents. Every tool registered in an MCP server gets injected into the AI model's context as part of its system prompt. More tools means more tokens consumed, slower responses, and a higher chance the model picks the wrong tool.
The same functionality that powers our 62 MCP tools could easily have been split into 100+ individual tools. Here's how we kept the count low without sacrificing coverage — and why it matters for anyone building MCP servers.
The Problem: Tool Count Is a Tax on AI Performance
MCP clients like Claude Desktop, Cursor, and Windsurf serialize every registered tool — its name, description, and parameter schema — into the model's context window. Each tool might consume 200-500 tokens of context. At 50 tools, that's 10,000-25,000 tokens just for tool definitions, before the user even asks a question.
Worse, more tools means more decision-making overhead. The model must evaluate each tool's description against the user's request. Research on MCP best practices consistently finds that servers with focused, well-described tools outperform those with sprawling tool lists.
Strategy 1: Parameterized Multi-Purpose Tools
Instead of registering one tool per operation, we use parameters to expose multiple capabilities through a single tool. The codetidy_hash_generate tool is a good example:
```typescript
// One tool handles 5 hash algorithms
server.tool('codetidy_hash_generate', {
  input: z.string(),
  algorithm: z.enum(['md5', 'sha1', 'sha256', 'sha384', 'sha512']),
  uppercase: z.boolean().default(false),
});
```
Without parameterization, we'd need codetidy_md5_hash, codetidy_sha1_hash, codetidy_sha256_hash, codetidy_sha384_hash, and codetidy_sha512_hash — five tools instead of one.
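The handler side of this pattern can stay just as compact. A minimal sketch (the `generateHash` helper is hypothetical, not CodeTidy's actual implementation) using Node's `crypto` module, which happens to accept these algorithm names directly:

```typescript
import { createHash } from "node:crypto";

type HashAlgorithm = "md5" | "sha1" | "sha256" | "sha384" | "sha512";

// Hypothetical handler body: the enum value maps straight onto Node's
// crypto algorithm names, so no per-algorithm branching is needed.
function generateHash(
  input: string,
  algorithm: HashAlgorithm,
  uppercase = false
): string {
  const digest = createHash(algorithm).update(input, "utf8").digest("hex");
  return uppercase ? digest.toUpperCase() : digest;
}

// generateHash("hello", "sha256")
// → "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
```

Because the enum doubles as the dispatch key, adding a sixth algorithm is a one-line schema change rather than a new tool.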
We apply this pattern across the server:
- `codetidy_text_case_convert` — one tool handles 12 case formats (camelCase, snake_case, PascalCase, kebab-case, SCREAMING_SNAKE, Title Case, etc.) via a `target` parameter. That's 12 tools collapsed to 1.
- `codetidy_semver_calculate` — parse, compare, bump, and sort operations via an `action` parameter. 4 tools → 1.
- `codetidy_uuid_generate` — v4 and v7 UUIDs with count, uppercase, and no-dash options. 2+ tools → 1.
- `codetidy_byte_calculate` — string byte measurement AND unit conversion via a `mode` parameter. 2 tools → 1.
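To make the enum-dispatch pattern concrete, here is a small sketch of a case converter. The `convertCase` helper and its four targets are illustrative only, not CodeTidy's real code (which covers all 12 formats):

```typescript
type CaseTarget = "camel" | "pascal" | "snake" | "kebab";

// Split on whitespace, underscores, hyphens, and camelCase boundaries,
// then lowercase each word so any input format normalizes the same way.
function toWords(s: string): string[] {
  return s
    .replace(/([a-z0-9])([A-Z])/g, "$1 $2")
    .split(/[\s_-]+/)
    .filter(Boolean)
    .map((w) => w.toLowerCase());
}

function convertCase(input: string, target: CaseTarget): string {
  const words = toWords(input);
  const cap = (w: string) => w[0].toUpperCase() + w.slice(1);
  switch (target) {
    case "camel":
      return words.map((w, i) => (i === 0 ? w : cap(w))).join("");
    case "pascal":
      return words.map(cap).join("");
    case "snake":
      return words.join("_");
    case "kebab":
      return words.join("-");
    default:
      throw new Error(`Unsupported target: ${target}`);
  }
}
```

Normalizing to a word list first is what lets one `target` enum replace a dozen pairwise conversion tools.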
The trade-off: parameterized tools have slightly more complex schemas, which means the model needs to infer the right parameter value. In practice, enum parameters (like algorithm: 'sha256') work extremely well — models pick the correct value almost every time because the options are explicit in the schema.
Strategy 2: Pipeline Composition Instead of Combinatorial Tools
Developers frequently chain operations: format JSON, then Base64 encode it. Decode a JWT, then pretty-print the payload. Without a composition mechanism, you'd need dedicated tools for every common combination — format_then_encode, decode_then_format, etc. The combinations grow exponentially.
Our codetidy_pipeline tool solves this by accepting an array of processor IDs:
```typescript
// Chain any sequence of 30+ processors
server.tool('codetidy_pipeline', {
  input: z.string(),
  steps: z.array(z.string()).min(1).max(20),
});

// Example: JSON format → Base64 encode
{ input: '{"a":1}', steps: ['json-format', 'base64-encode'] }
```
One tool replaces hundreds of potential combinations. The model learns quickly that codetidy_pipeline is the go-to for multi-step text transformations, and it constructs the steps array based on what the user needs.
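The executor behind a tool like this can be a simple fold over a processor registry. A minimal sketch, with a hypothetical registry covering 3 of the 30+ processors:

```typescript
// Illustrative registry: processor ID → pure string-to-string transform.
const processors: Record<string, (s: string) => string> = {
  "json-format": (s) => JSON.stringify(JSON.parse(s), null, 2),
  "base64-encode": (s) => Buffer.from(s, "utf8").toString("base64"),
  "base64-decode": (s) => Buffer.from(s, "base64").toString("utf8"),
};

// Run each step in order, feeding one processor's output into the next.
function runPipeline(input: string, steps: string[]): string {
  return steps.reduce((acc, id) => {
    const fn = processors[id];
    if (!fn) throw new Error(`Unknown processor: ${id}`);
    return fn(acc);
  }, input);
}

// runPipeline('{"a":1}', ['json-format', 'base64-encode'])
```

Because every processor is a pure string transform, any ordering of steps is valid by construction, and the failure mode for a bad step ID is a single clear error rather than a missing tool.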
Strategy 3: MCP Annotations Signal Safety
Every CodeTidy tool declares four MCP annotation hints:
```typescript
const TOOL_ANNOTATIONS = {
  readOnlyHint: true,      // Never modifies files or state
  destructiveHint: false,  // Never destroys data
  idempotentHint: true,    // Same input → same output
  openWorldHint: false,    // No network calls, no side effects
};
```

This tells MCP clients that every tool is a pure, deterministic, local transformation. Clients like Claude Desktop can auto-approve these tools without prompting the user for confirmation, dramatically reducing friction. Users install the server once and never see permission dialogs — the tools just work.
Strategy 4: Intentional Exclusion
Not every web tool makes sense as an MCP tool. We apply a simple heuristic: does the tool do something the AI model can't reliably do on its own?
- Include: SHA-256 hashing (models can't compute cryptographic hashes), UUID generation (needs true randomness), regex execution (models guess at matches instead of running the engine), JSON validation (authoritative parsing vs. eyeballing)
- Exclude: Color pickers (visual-only), diff viewers (visual UI), interactive config builders (need mouse interaction)
- Exception: We recently added `codetidy_qr_generate`, which returns PNG images via MCP image content blocks — clients that support image rendering (Claude Desktop) show the QR code inline
This keeps the tool list focused. When an AI agent sees 62 CodeTidy tools, every single one is something the model genuinely benefits from delegating to rather than attempting itself.
Strategy 5: Descriptive Naming and Front-Loaded Descriptions
Tool names follow a strict codetidy_verb_noun pattern: codetidy_json_format, codetidy_hash_generate, codetidy_epoch_convert. The prefix prevents collisions with other MCP servers. The verb-noun pattern helps models match user intent to tool names.
Descriptions front-load the action: "Format/beautify JSON with 2-space indent" rather than "A tool for formatting JSON data into a human-readable format." Models scan descriptions quickly — the first few words matter most.
The Net Result
Without these strategies, the same functionality would require 100+ tools. We deliver it in 62:
- 33 text processors — each is a focused string→string transform
- 7 generators — parameterized for multiple output formats
- 6 converters — including the QR code image generator
- 13 deterministic tools — semver, MIME lookup, math evaluation, etc.
- 2 config tools — cron parsing and chmod calculation
- 1 pipeline — composes any combination of the 33 processors
If you're building your own MCP server, the key takeaway is: treat your tool list like an API surface. Every tool you add has a cost — not in compute, but in the model's ability to make good decisions. Consolidate related operations, compose instead of duplicating, and exclude anything the model can handle on its own.
Try the server yourself: npx @codetidy/mcp. All 62 tools, running locally, zero configuration. Check the MCP setup guide for Claude Desktop, Cursor, and other clients.