
The Complete Guide to Gemini CLI MCP Servers: Build, Configure, and Debug

Everything you need to know about MCP servers in Gemini CLI. From architecture to building custom servers, configuration, and debugging common issues.

Zhihao Mu
Updated: April 12, 2026 · 19 min read

Introduction

The Model Context Protocol — MCP for short — is one of the most consequential additions to the AI developer toolbox in the last two years. It solves a fundamental problem: AI assistants live inside a sandbox. A language model, by itself, cannot read your Jira tickets, query your Postgres database, call your internal REST APIs, or browse a private GitHub repository. It can only work with the text you paste into the chat window.

MCP changes that equation. It defines a standardised, transport-agnostic protocol through which a host application (in our case Gemini CLI) can discover and invoke tools provided by external servers — on your laptop, on your company network, or anywhere on the internet. Each server exposes a catalogue of typed tools, and the model can call those tools mid-conversation, inspect the structured results, and continue reasoning.

The result is an AI assistant that is genuinely connected to your real-world systems. Ask Gemini CLI to "open a GitHub issue for every TODO comment in the codebase" and, with the right MCP server wired in, it will do exactly that — read the files, parse the comments, authenticate against the GitHub API, and create the issues, all in a single conversation.

This guide covers the full lifecycle: understanding the protocol, standing up your first server in under five minutes, wiring multiple servers into Gemini CLI's settings, writing a production-grade custom server in TypeScript, and diagnosing the issues that inevitably arise once you start using it in anger.


TL;DR

  • MCP is an open protocol that lets Gemini CLI call external tools exposed by local or remote servers.
  • A server advertises tools in a JSON schema; Gemini CLI discovers them at startup via the mcpServers key in settings.json.
  • You can run any number of servers simultaneously — filesystem, GitHub, databases, custom APIs — each as an independent process.
  • Custom servers are straightforward TypeScript (or Python) programs that respond to tools/list and tools/call JSON-RPC messages.
  • The most common issues are port conflicts, authentication mismatches, and schema validation errors — all of which have clear diagnostic paths.

MCP Architecture Overview

The Client/Server Model

MCP follows a strict client/server split. Gemini CLI is the host. Each external capability is a server. The host is responsible for managing server lifetimes, routing tool calls, and presenting results back to the model. Servers are responsible only for advertising their capabilities and executing calls faithfully.

┌─────────────────────────────────────────────────────┐
│                    Gemini CLI (Host)                 │
│                                                     │
│  ┌───────────────┐     ┌──────────────────────────┐ │
│  │  Gemini Model │────▶│  MCP Client (built-in)   │ │
│  └───────────────┘     └──────────┬───────────────┘ │
│                                   │ JSON-RPC 2.0    │
└───────────────────────────────────┼─────────────────┘
                                    │
          ┌─────────────────────────┼──────────────────┐
          │                         │                  │
          ▼                         ▼                  ▼
   ┌────────────┐           ┌──────────────┐   ┌────────────┐
   │ Filesystem │           │  GitHub MCP  │   │ Custom MCP │
   │   Server   │           │    Server    │   │   Server   │
   └────────────┘           └──────────────┘   └────────────┘

Transport Layers

MCP supports three transports. Understanding which one to use for which scenario saves you a lot of confusion:

| Transport | When to Use | Example |
|---|---|---|
| stdio | Local processes launched by Gemini CLI itself | Filesystem server, local scripts |
| sse | Long-lived HTTP servers on localhost or LAN | Custom API servers, database proxies |
| streamable-http | Stateless HTTP servers, cloud-hosted tools | SaaS integrations, remote APIs |

The most common transport for local development is stdio. Gemini CLI spawns the server process, attaches to its stdin/stdout, and communicates directly. No port management needed.
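Under the hood, stdio traffic is newline-delimited JSON-RPC: one message per line on stdin, one response per line on stdout. The hand-rolled sketch below is for illustration only — the official SDK handles all of this for you, the `echo` tool is invented, and the initialize handshake is omitted for brevity:

```typescript
// echo-stdio.ts — illustrative hand-rolled stdio transport (use the SDK in practice).
import * as readline from "node:readline";

type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: unknown;
};

// Dispatch the two methods a minimal tool server must answer.
export function handle(req: JsonRpcRequest): object {
  if (req.method === "tools/list") {
    return {
      jsonrpc: "2.0",
      id: req.id,
      result: {
        tools: [
          {
            name: "echo",
            description: "Echoes its input back.",
            inputSchema: {
              type: "object",
              properties: { text: { type: "string" } },
              required: ["text"],
            },
          },
        ],
      },
    };
  }
  if (req.method === "tools/call") {
    const { arguments: args } = req.params as { arguments: { text: string } };
    return {
      jsonrpc: "2.0",
      id: req.id,
      result: { content: [{ type: "text", text: args.text }] },
    };
  }
  return {
    jsonrpc: "2.0",
    id: req.id,
    error: { code: -32601, message: "Method not found" },
  };
}

// stdio transport: one JSON message per line in, one response per line out.
// Guarded so that importing this module does not start the read loop.
if (process.env.RUN_STDIO === "1") {
  const rl = readline.createInterface({ input: process.stdin });
  rl.on("line", (line) => {
    if (!line.trim()) return;
    process.stdout.write(JSON.stringify(handle(JSON.parse(line))) + "\n");
  });
}
```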

The JSON-RPC Message Format

All MCP communication uses JSON-RPC 2.0. The two message types you need to understand are:

Tool discovery — sent by the host at startup:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}

Server response — the tool catalogue:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "create_issue",
        "description": "Creates a new GitHub issue in the specified repository.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "owner": { "type": "string", "description": "Repository owner" },
            "repo":  { "type": "string", "description": "Repository name" },
            "title": { "type": "string", "description": "Issue title" },
            "body":  { "type": "string", "description": "Issue body (Markdown)" }
          },
          "required": ["owner", "repo", "title"]
        }
      }
    ]
  }
}

Tool invocation — sent by the host when the model decides to call a tool:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "create_issue",
    "arguments": {
      "owner": "acme-corp",
      "repo": "backend",
      "title": "Fix race condition in job scheduler",
      "body": "Identified in `src/jobs/scheduler.ts` line 142..."
    }
  }
}

The server executes the call and returns a result object. The model sees the result as part of its context and continues the conversation.
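Concretely, a successful tools/call result carries a content array the model can read. The shape below follows the MCP specification; the values are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "{\"number\": 1287, \"url\": \"https://github.com/acme-corp/backend/issues/1287\"}"
      }
    ],
    "isError": false
  }
}
```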


Setting Up Your First MCP Server

The fastest path to a working MCP integration is the official @modelcontextprotocol/server-filesystem package. It gives Gemini CLI read and write access to a directory on your machine, which is useful immediately.

Step 1: Install the server

npm install -g @modelcontextprotocol/server-filesystem

Verify the install worked:

npx @modelcontextprotocol/server-filesystem --version
# 0.6.2

Step 2: Locate your Gemini CLI settings file

Gemini CLI stores its settings in a JSON file. The path depends on your platform:

# macOS / Linux
~/.gemini/settings.json

# Windows
%USERPROFILE%\.gemini\settings.json

If the file does not exist yet, create it:

mkdir -p ~/.gemini
touch ~/.gemini/settings.json

Step 3: Register the server

Open ~/.gemini/settings.json and add the mcpServers block:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/yourname/projects"
      ]
    }
  }
}

The command field is the executable Gemini CLI will spawn. The args array is passed verbatim. The last argument here (/Users/yourname/projects) is the root directory the server is allowed to access.

Step 4: Test the connection

Launch Gemini CLI and check that the tool is registered:

gemini

At the prompt, type:

> /mcp

You should see output like:

MCP Servers (1 connected)
─────────────────────────
filesystem    stdio    connected    6 tools available
  - read_file
  - write_file
  - list_directory
  - create_directory
  - move_file
  - search_files

If you see connected and the tool list, the server is working. Now ask Gemini CLI something that uses it:

> List all TypeScript files in /Users/yourname/projects/api/src and tell me which ones exceed 300 lines

Gemini CLI will invoke list_directory and read_file internally, collate the results, and give you a structured answer — all without you doing anything beyond registering the server.


Configuration Deep Dive

The Full mcpServers Schema

Each entry in mcpServers supports these keys:

{
  "mcpServers": {
    "my-server": {
      // Required: the command to execute (for stdio transport)
      "command": "node",

      // Arguments passed to the command
      "args": ["dist/server.js", "--port", "3001"],

      // Environment variables injected into the server process
      "env": {
        "GITHUB_TOKEN": "ghp_xxxxxxxxxxxxxxxxxxxx",
        "LOG_LEVEL": "info"
      },

      // For SSE or streamable-http transports: use url instead of command
      // "url": "http://localhost:3001/sse",

      // Connection timeout in milliseconds (default: 10000)
      "timeout": 15000,

      // Whether to restart the server automatically if it crashes (default: true)
      "autoRestart": true
    }
  }
}

Managing Multiple Servers

You can register as many servers as you need. Gemini CLI loads them all at startup and merges their tool catalogues. If two servers expose a tool with the same name, Gemini CLI will prefix the tool name with the server key to avoid collisions.

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "~/projects"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxxxxxxxxxxxxxxxxxxx"
      }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "POSTGRES_CONNECTION_STRING": "postgresql://user:pass@localhost:5432/mydb"
      }
    },
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "env": {
        "SLACK_BOT_TOKEN": "xoxb-xxxxxxxxxxxxxxxxxxxx",
        "SLACK_TEAM_ID": "T0XXXXXXXXX"
      }
    }
  }
}

With this configuration Gemini CLI can read files, query GitHub, run SQL queries, and post to Slack all in the same conversation. Ask it to "find all pull requests merged in the last 7 days that touched the payments module, query the database for the affected records, and post a summary to #engineering" — and it will work through the request step by step.

Project-Scoped Configuration

Global ~/.gemini/settings.json applies to every Gemini CLI session. For project-specific servers, create .gemini/settings.json at the project root. Gemini CLI merges project settings on top of global settings, giving project entries higher priority.
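The precedence rule can be pictured as a shallow merge of the two mcpServers maps, with project entries winning on key collisions. This is a sketch of the behaviour described above, not Gemini CLI's actual implementation:

```typescript
// settings-merge.ts — illustrative merge semantics, not Gemini CLI's real code.
type Settings = { mcpServers?: Record<string, object> };

// Project-scoped entries override global entries with the same server key;
// entries unique to either file are kept.
export function mergeSettings(global: Settings, project: Settings): Settings {
  return {
    mcpServers: { ...global.mcpServers, ...project.mcpServers },
  };
}
```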

my-project/
├── .gemini/
│   └── settings.json     ← project-scoped MCP servers
├── src/
└── package.json

A project-scoped settings file might look like:

{
  "mcpServers": {
    "local-api": {
      "url": "http://localhost:8080/mcp/sse",
      "timeout": 30000
    }
  }
}

This is especially useful for onboarding new contributors — commit the .gemini/settings.json (without secrets) and document environment variable names in .gemini/settings.example.json.
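A committed .gemini/settings.example.json (a documentation convention — Gemini CLI does not read the example file itself) might look like this, with real values living in each contributor's shell environment:

```json
{
  "mcpServers": {
    "local-api": {
      "url": "http://localhost:8080/mcp/sse",
      "timeout": 30000
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_PERSONAL_ACCESS_TOKEN}"
      }
    }
  }
}
```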


Building a Custom MCP Server

The real power of MCP is the ability to expose any API or data source you control. The following is a complete, working TypeScript MCP server that wraps the GitHub REST API. It goes beyond the official server-github package by adding rate-limit tracking and a code-search tool.

Prerequisites

mkdir github-mcp-server && cd github-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk @octokit/rest zod
npm install --save-dev typescript @types/node tsx

tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "dist",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src"]
}

The Server Implementation

src/server.ts:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { Octokit } from "@octokit/rest";
import { z } from "zod";

// ─── Initialise Octokit ───────────────────────────────────────────────────────

const token = process.env.GITHUB_TOKEN;
if (!token) {
  process.stderr.write("GITHUB_TOKEN environment variable is required\n");
  process.exit(1);
}

const octokit = new Octokit({ auth: token });

// ─── Initialise MCP Server ────────────────────────────────────────────────────

const server = new McpServer({
  name: "github-enhanced",
  version: "1.0.0",
});

// ─── Tool: list_open_issues ───────────────────────────────────────────────────

server.tool(
  "list_open_issues",
  "Lists open issues for a GitHub repository, optionally filtered by label.",
  {
    owner:  z.string().describe("Repository owner (user or org)"),
    repo:   z.string().describe("Repository name"),
    label:  z.string().optional().describe("Filter by label name"),
    limit:  z.number().int().min(1).max(100).default(20)
             .describe("Maximum number of issues to return"),
  },
  async ({ owner, repo, label, limit }) => {
    const response = await octokit.issues.listForRepo({
      owner,
      repo,
      state: "open",
      labels: label,
      per_page: limit,
    });

    const issues = response.data.map((issue) => ({
      number:    issue.number,
      title:     issue.title,
      author:    issue.user?.login ?? "unknown",
      labels:    issue.labels.map((l) => (typeof l === "string" ? l : l.name ?? "")),
      comments:  issue.comments,
      created_at: issue.created_at,
      url:       issue.html_url,
    }));

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(issues, null, 2),
        },
      ],
    };
  }
);

// ─── Tool: create_issue ───────────────────────────────────────────────────────

server.tool(
  "create_issue",
  "Creates a new issue in a GitHub repository.",
  {
    owner:  z.string().describe("Repository owner"),
    repo:   z.string().describe("Repository name"),
    title:  z.string().min(1).describe("Issue title"),
    body:   z.string().optional().describe("Issue body (Markdown supported)"),
    labels: z.array(z.string()).optional().describe("Labels to attach"),
  },
  async ({ owner, repo, title, body, labels }) => {
    const response = await octokit.issues.create({
      owner,
      repo,
      title,
      body,
      labels,
    });

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            {
              number: response.data.number,
              url:    response.data.html_url,
              state:  response.data.state,
            },
            null,
            2
          ),
        },
      ],
    };
  }
);

// ─── Tool: get_rate_limit ─────────────────────────────────────────────────────

server.tool(
  "get_rate_limit",
  "Returns current GitHub API rate limit status for the authenticated token.",
  {},
  async () => {
    const response = await octokit.rateLimit.get();
    const core = response.data.resources.core;

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            {
              limit:     core.limit,
              remaining: core.remaining,
              used:      core.used,
              reset_at:  new Date(core.reset * 1000).toISOString(),
            },
            null,
            2
          ),
        },
      ],
    };
  }
);

// ─── Tool: search_code ───────────────────────────────────────────────────────

server.tool(
  "search_code",
  "Searches for code across GitHub repositories using GitHub's code search API.",
  {
    query:   z.string().describe("GitHub code search query (e.g. 'useState repo:acme/frontend')"),
    limit:   z.number().int().min(1).max(30).default(10)
              .describe("Maximum number of results to return"),
  },
  async ({ query, limit }) => {
    const response = await octokit.search.code({
      q: query,
      per_page: limit,
    });

    const results = response.data.items.map((item) => ({
      path:       item.path,
      repository: item.repository.full_name,
      url:        item.html_url,
      sha:        item.sha,
    }));

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            { total_count: response.data.total_count, results },
            null,
            2
          ),
        },
      ],
    };
  }
);

// ─── Start server ─────────────────────────────────────────────────────────────

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  process.stderr.write("github-enhanced MCP server running on stdio\n");
}

main().catch((err) => {
  process.stderr.write(`Fatal error: ${err.message}\n`);
  process.exit(1);
});

Build and Register

npx tsc

# Add to ~/.gemini/settings.json
{
  "mcpServers": {
    "github-enhanced": {
      "command": "node",
      "args": ["/absolute/path/to/github-mcp-server/dist/server.js"],
      "env": {
        "GITHUB_TOKEN": "ghp_xxxxxxxxxxxxxxxxxxxx"
      }
    }
  }
}

Restart Gemini CLI and run /mcp. You should see:

MCP Servers (1 connected)
─────────────────────────
github-enhanced    stdio    connected    4 tools available
  - list_open_issues
  - create_issue
  - get_rate_limit
  - search_code

Now you can ask:

> Search for all usages of the deprecated `fetchUser` function across the acme-corp GitHub organisation
  and create issues in each affected repository asking the team to migrate to `getUser`.

Gemini CLI will call search_code, parse the results, group them by repository, and call create_issue for each one — all without leaving the terminal.


Debugging MCP Connections

Enable Debug Logging

The first thing to do when a server is not behaving is turn on verbose output. Set the GEMINI_MCP_DEBUG environment variable before launching:

GEMINI_MCP_DEBUG=1 gemini

This writes every JSON-RPC message to stderr as it flows between Gemini CLI and each server. The output is verbose but invaluable. A successful connection looks like:

[MCP DEBUG] github-enhanced → spawning: node dist/server.js
[MCP DEBUG] github-enhanced ← {"jsonrpc":"2.0","id":0,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"gemini-cli","version":"0.1.x"}}}
[MCP DEBUG] github-enhanced → {"jsonrpc":"2.0","id":0,"result":{"protocolVersion":"2024-11-05","capabilities":{"tools":{}},"serverInfo":{"name":"github-enhanced","version":"1.0.0"}}}
[MCP DEBUG] github-enhanced ← {"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}
[MCP DEBUG] github-enhanced → {"jsonrpc":"2.0","id":1,"result":{"tools":[...]}}

Common Error: spawn ENOENT

[MCP ERROR] github-enhanced: spawn ENOENT
Error: Server process failed to start

This means the command in your settings cannot be found. Check that the binary is on your PATH:

which node
# /opt/homebrew/bin/node

which npx
# /opt/homebrew/bin/npx

If the binary exists but Gemini CLI still cannot find it, use an absolute path in the command field:

{
  "command": "/opt/homebrew/bin/node",
  "args": ["/absolute/path/to/dist/server.js"]
}

Common Error: tools/list timeout

[MCP WARN] github-enhanced: tools/list timed out after 10000ms

The server process started but did not respond to tools/list within the timeout window. Common causes:

  1. The server crashed during startup. Run the server manually and watch stderr:

    GITHUB_TOKEN=ghp_xxx node dist/server.js
    # github-enhanced MCP server running on stdio
    

    If it crashes, you will see the error here.

  2. The server is waiting for stdin before responding. Some servers read configuration from stdin at startup, which conflicts with MCP's stdio transport. Check the server's documentation.

  3. The timeout is too short. Some servers initialise heavy resources (database connections, warm caches). Increase the timeout:

    { "timeout": 30000 }
    

Common Error: Schema Validation Failure

[MCP ERROR] Tool call failed: argument validation error
  at create_issue: "title" is required but was not provided

The model attempted to call a tool without supplying a required argument. This usually means the tool's description or inputSchema is unclear. Improve the description to make the required fields more prominent, and ensure the required array in your JSON schema lists them explicitly.
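The host-side check behaves roughly like the sketch below (illustrative, not Gemini CLI's actual validator): any key listed in the schema's required array that is absent from the arguments triggers the error.

```typescript
// validate-args.ts — a sketch of the required-field check performed before tools/call.
export function missingRequired(
  schema: { required?: string[] },
  args: Record<string, unknown>
): string[] {
  // Every required key that the model failed to supply.
  return (schema.required ?? []).filter((key) => args[key] === undefined);
}
```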

Diagnosing with /mcp status

Run /mcp at any Gemini CLI prompt to see a live status table:

MCP Servers (3 registered, 2 connected, 1 error)
──────────────────────────────────────────────────────
filesystem      stdio    connected    6 tools    0 errors
github-enhanced stdio    connected    4 tools    0 errors
postgres        stdio    error        —          ECONNREFUSED: port 5432

For the errored server, you will see the underlying error message. In this case Postgres is not running. Start it and the server will reconnect on the next Gemini CLI restart (or sooner if autoRestart is true and the process keeps retrying).

Inspecting Raw Traffic with mcp-inspector

The MCP project ships a standalone inspector tool that lets you interact with any server outside of Gemini CLI:

npx @modelcontextprotocol/inspector node dist/server.js

This opens a web UI at http://localhost:5173 where you can:

  • Browse the full tool catalogue
  • Invoke any tool with a form-based UI
  • See raw request and response JSON
  • Measure latency per call

Use the inspector to confirm a server is working correctly before registering it with Gemini CLI. It eliminates Gemini CLI itself as a variable in debugging.


Performance Tips

Connection Pooling for HTTP Transports

If you are running an SSE or streamable-http server, each Gemini CLI session opens a new connection. Under heavy use, you may exhaust the server's connection limit. Use a connection pool in your server implementation:

import { createServer } from "http";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

// Keep-alive header ensures the OS does not close idle connections
const httpServer = createServer((req, res) => {
  res.setHeader("Connection", "keep-alive");
  res.setHeader("Keep-Alive", "timeout=60, max=100");
  // ... route to SSEServerTransport
});

httpServer.maxConnections = 50;
httpServer.keepAliveTimeout = 65_000; // slightly above the client timeout

Caching Tool Responses

For tools that call slow or rate-limited APIs, add an in-memory cache keyed on the stringified arguments:

import { LRUCache } from "lru-cache";

const cache = new LRUCache<string, unknown>({
  max: 500,
  ttl: 1000 * 60 * 5, // 5-minute TTL
});

server.tool("search_code", "...", { query: z.string() }, async ({ query }) => {
  const cacheKey = `search_code:${query}`;
  const cached = cache.get(cacheKey);
  if (cached) {
    return { content: [{ type: "text", text: JSON.stringify(cached) }] };
  }

  const result = await octokit.search.code({ q: query });
  cache.set(cacheKey, result.data);
  return { content: [{ type: "text", text: JSON.stringify(result.data) }] };
});

This is particularly effective for search tools, where repeated queries within a session are common.
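One subtlety when keying on stringified arguments: JSON.stringify is property-order-sensitive, so { owner, repo } and { repo, owner } would miss each other in the cache. Sorting the keys first avoids that (the helper name here is ours):

```typescript
// stable-key.ts — an order-insensitive cache key for object arguments.
export function stableKey(args: Record<string, unknown>): string {
  // Sort keys so semantically identical argument objects hash to the same string.
  return JSON.stringify(
    Object.keys(args)
      .sort()
      .map((key) => [key, args[key]])
  );
}
```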

Lazy Initialisation

Avoid doing expensive work in the server's top-level module. Move database connections, SDK authentication, and network calls inside the first tool invocation (or inside a lazy initialisation function):

let _octokitInstance: Octokit | null = null;

function getOctokit(): Octokit {
  if (!_octokitInstance) {
    _octokitInstance = new Octokit({ auth: process.env.GITHUB_TOKEN });
  }
  return _octokitInstance;
}

This keeps server startup fast, which matters for the tools/list timeout window.

Per-Tool Timeout Configuration

Certain tools — like running a long SQL query or spawning a build — can take longer than the default tool-call timeout. Set explicit timeouts at the call site in your server code:

// In your octokit call, use AbortController for fine-grained control
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 25_000);

try {
  const result = await octokit.search.code(
    { q: query },
    { request: { signal: controller.signal } }
  );
  return result;
} finally {
  clearTimeout(timer);
}

And increase the server-level timeout in settings.json to match:

{
  "github-enhanced": {
    "command": "node",
    "args": ["dist/server.js"],
    "timeout": 30000
  }
}

FAQ

Q: Can I use a remote MCP server hosted on the internet?

Yes, using the streamable-http transport. Set the url key instead of command in your server entry. The server must implement the streamable-http transport specification and handle authentication. Bear in mind that tool calls will be subject to network latency, and you should treat any remote server with the same caution you would apply to any third-party API (what data are you sending? what are their retention policies?).

{
  "mcpServers": {
    "remote-tool": {
      "url": "https://mcp.example.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_TOKEN_HERE"
      }
    }
  }
}

Q: My server works with mcp-inspector but not inside Gemini CLI. What is going on?

The most common cause is that Gemini CLI launches the server with a minimal environment — it does not inherit your full shell environment, including PATH, nvm version shims, or shell functions. Switch to absolute paths in command and set every required environment variable explicitly in the env block. You can print the environment your server sees by adding this to the top of your server:

process.stderr.write(JSON.stringify(process.env, null, 2) + "\n");

Q: Can two MCP servers expose a tool with the same name?

Yes, but Gemini CLI will disambiguate by prefixing the server key. A tool named read_file in the filesystem server and another named read_file in a hypothetical s3 server would appear to the model as filesystem__read_file and s3__read_file. Update your tool descriptions to reflect this if you need the model to choose correctly.

Q: How do I pass secrets securely? I do not want to store tokens in plaintext in settings.json.

Use shell environment variables and reference them via your shell profile. Instead of hard-coding the token in settings.json, leave the env key pointing to a variable name you set in ~/.zshrc or ~/.profile:

{
  "env": {
    "GITHUB_TOKEN": "${GITHUB_TOKEN}"
  }
}

Alternatively, integrate with a secrets manager (1Password CLI, pass, macOS Keychain) and write a thin wrapper script that fetches the secret at runtime and launches the real server. Gemini CLI will call the wrapper; the wrapper resolves the secret and execs the server.
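Such a wrapper can be a few lines of Node. In this hypothetical sketch the 1Password item path and server path are placeholders, and the `op read` call assumes the 1Password CLI is installed:

```typescript
// launch-with-secret.ts — hypothetical wrapper; item path and server path are placeholders.
import { execFileSync, spawn } from "node:child_process";

// Inherit the parent environment and inject the secret for the child only.
export function withSecret(
  base: Record<string, string | undefined>,
  name: string,
  value: string
): Record<string, string | undefined> {
  return { ...base, [name]: value };
}

function main() {
  // Resolve the secret at launch time; it never touches settings.json or disk.
  const token = execFileSync("op", ["read", "op://dev/github/token"], {
    encoding: "utf8",
  }).trim();

  // Hand stdio straight through so Gemini CLI talks to the real server directly.
  spawn("node", ["/absolute/path/to/dist/server.js"], {
    stdio: "inherit",
    env: withSecret(process.env, "GITHUB_TOKEN", token),
  });
}

// Only launch the real server when invoked with the flag.
if (process.argv.includes("--launch")) main();
```

Gemini CLI's settings entry then points `command` at this wrapper instead of the server itself.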

Q: What happens if an MCP server crashes mid-conversation?

If autoRestart is set to true (the default), Gemini CLI will attempt to restart the server process. If the restart fails or the server is unreachable, subsequent tool calls to that server will return an error. Gemini CLI will present the error to the model, which can then either retry, use a different tool, or inform you that the capability is temporarily unavailable. This graceful degradation means a crashed server does not abort your entire session.


Conclusion

MCP transforms Gemini CLI from a sophisticated chat client into a genuine automation platform. The protocol is deliberately simple — JSON-RPC messages over stdio or HTTP — which means the barrier to writing a custom server is low. Any language with a JSON library can implement one. The TypeScript SDK makes it even easier, providing typed schemas via Zod and handling the protocol boilerplate for you.

The configuration model rewards incrementalism. Start with a single filesystem or GitHub server, confirm it works with /mcp, and layer in additional servers as you identify the next bottleneck in your workflow. Because each server is an independent process, a broken server never destabilises the others.

If there is one takeaway, it is this: the quality of your MCP integration is determined almost entirely by the quality of your tool descriptions. The model has no other signal for when to call a tool or what to pass as arguments. Invest time in writing precise, example-rich descriptions. A one-sentence description gets you halfway there; a three-sentence description with a concrete example gets you to 95% reliability.

From here, the obvious next steps are to explore the official server registry at github.com/modelcontextprotocol/servers for a growing catalogue of community-built servers, and to read the MCP specification at spec.modelcontextprotocol.io if you need to implement advanced features like resource subscriptions, prompts, or sampling.

The terminal is no longer a boundary. With MCP, it is a gateway.

Zhihao Mu · Full-stack Developer

Developer and technical writer passionate about AI-powered development tools. Building geminicli.one to help developers unlock the full potential of Gemini CLI.