
Complete Gemini CLI Troubleshooting Guide: 15 Common Issues Solved

Comprehensive troubleshooting reference for Gemini CLI users — installation, authentication, configuration, runtime, and advanced integration issues with verified fixes.

Zhihao Mu
11 min read

Introduction

Gemini CLI sits in a tricky spot in a developer's toolchain. It is powerful — a million-token context window, deep filesystem integration, support for Model Context Protocol servers — but every point of power is also a point of failure. Authentication fails silently. Configuration files overlap in surprising ways. Rate limits appear out of nowhere when you start batching. Sandbox and MCP integration introduce new classes of error that did not exist in simpler CLIs.

What makes Gemini CLI issues frustrating is the stage gap. A user who hits a problem at the MCP integration stage has already successfully installed, authenticated, configured, and run simple commands. They are not a beginner. But the error messages from the failing stage are often written with no memory that those earlier stages went fine. Generic advice ("check your API key") is not helpful when the real problem is that your MCP server is blocking on stdin.

This guide is organized by the five stages where problems actually appear — installation, authentication, configuration, runtime, and advanced integration. For each stage we cover three common issues, the diagnostic commands that isolate them, and links to dedicated guides that walk through the complete fix. Every fix here has been verified on a current installation. Use the TL;DR to jump to your stage, or read top-to-bottom for a complete mental model of where Gemini CLI can break.

If you are starting from scratch and have not yet hit your first error, begin at our Quick Start — a 15-minute path from zero to a working gemini command on your machine. Come back to this guide when something breaks.

TL;DR

  • Installation failures are almost always a PATH or permissions issue — not a package issue.
  • Authentication errors fall into two buckets: wrong key (quick fix) and wrong location of the right key (slow fix).
  • Configuration conflicts happen because Gemini CLI reads four different config sources with non-obvious precedence.
  • Runtime hangs and rate limit errors share a root cause — the CLI is waiting on something and not telling you what.
  • Advanced features (sandbox, MCP, custom tools) require strict boundary discipline between what is invoked and what is trusted.

Category 1: Installation Problems

Installation is where the largest number of first-time users bounce off. The official documentation assumes you have a working Node.js environment and a clean PATH — assumptions that break on Apple Silicon Macs with multiple Node versions, on Windows machines that inherited PATH entries from other tools, and on Linux systems where the user's shell does not source the expected rc file.

The three most common installation issues are: the CLI installs successfully but gemini is not found in the shell; the install itself fails with permission errors; and installation succeeds but points to the wrong Node version, causing silent runtime errors that only appear on first command execution.

For macOS — especially Apple Silicon, where Homebrew and npm live in different paths — the diagnostic sequence is which gemini, echo $PATH, node --version. If any of those returns an unexpected answer, the fix is in your shell configuration, not a reinstall. For the full step-by-step, including Rosetta-vs-native considerations and zsh/bash differences, see our dedicated guide: Install Gemini CLI on macOS. It covers Homebrew, npm, and manual install paths with explicit permission-fix commands.
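The three checks can be run as one pass. This is a sketch with a hypothetical `where` helper (the name is ours, not a CLI command); the output varies per machine, and a NOT FOUND result points at PATH, not at a broken package.

```shell
# Run the diagnostic sequence in one go; a NOT FOUND line means a
# PATH/shell-config problem rather than a failed install.
where() {
  if loc="$(command -v "$1" 2>/dev/null)"; then
    echo "$1 -> $loc"
  else
    echo "$1 -> NOT FOUND in PATH"
  fi
}
where gemini; where node; where npm
echo '--- PATH entries (in search order) ---'
printf '%s\n' "$PATH" | tr ':' '\n'   # spot shadowing/ordering problems
```

Reading PATH one entry per line makes it obvious when a stale Node install earlier in the list is shadowing the one you just set up.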

For Windows, the gotchas differ: PATH entries are managed in the System Properties GUI, not a shell config file, and a stale Node from Visual Studio or Chocolatey can shadow the new install. The failure mode is usually "command not found in PowerShell but works in Command Prompt" or vice versa. Our Windows guide walks through PATH troubleshooting for both shells and the correct way to uninstall a legacy Node: Install Gemini CLI on Windows.

Linux installations are usually cleanest because the package ecosystem is well-understood, but users on Debian-family systems often hit permission errors when npm tries to write to /usr/local/lib/node_modules without sudo. The correct fix is not sudo npm install -g (which creates permission debt for later) but setting a user-level npm prefix. The canonical sequence for Ubuntu, Debian, Fedora, and Arch is covered in Install Gemini CLI on Linux.
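A minimal sketch of the user-level prefix fix, written against a throwaway rc file so it is safe to run as-is — point the rc path at your real ~/.bashrc (or ~/.zshrc) when applying it. The prefix directory name and the final install command are illustrative.

```shell
# Give npm a user-writable global prefix so -g installs never need sudo.
PREFIX_DIR="$HOME/.npm-global"
RC="$(mktemp)"          # stand-in for ~/.bashrc so this sketch is safe to run
mkdir -p "$PREFIX_DIR"
# The key step -- writes the prefix into ~/.npmrc (guarded for portability):
command -v npm >/dev/null 2>&1 && npm config set prefix "$PREFIX_DIR"
# Idempotently add the prefix's bin directory to PATH for future shells.
grep -qs 'npm-global/bin' "$RC" || \
  echo 'export PATH="$HOME/.npm-global/bin:$PATH"' >> "$RC"
cat "$RC"
# afterwards: source the rc file, then re-run the global install without sudo
```

After sourcing the updated rc file, `npm install -g` writes under your home directory, and nothing in the global toolchain is owned by root.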

Category 2: Authentication Errors

Once the CLI is installed and reachable, the next wall is authentication. Gemini CLI needs an API key from Google AI Studio, and the three things that can go wrong are: you have no key yet and the documentation is fragmented across Google products; you have a key but it is in the wrong environment variable or config file; or you have the right key in the right place but it is rejected because of a quota or region mismatch.

If you have never generated a key, start at How to Get a Gemini API Key. That guide covers both the free-tier key from AI Studio and the paid-tier options through Google Cloud Vertex AI — two different code paths with different rate limits and different billing implications. Getting this choice wrong at the start creates problems that only become visible weeks later when you hit quota walls.

If you have a key but the CLI does not use it, the issue is almost always one of three things: GEMINI_API_KEY is set in the wrong shell (login vs. interactive, bash vs. zsh), the key is in a .env file that is loaded by your project but not by the global CLI, or multiple keys are set and a lower-priority one is being picked first. The resolution order is non-obvious and covered in Configure Your Gemini API Key with a gemini config list diagnostic command that shows exactly which key is active.
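A quick way to see what the current shell actually exports — the "wrong shell" failure above means this prints differently in a login shell than in an interactive one. The helper name is ours; the variable names follow the GEMINI_* pattern plus GOOGLE_API_KEY, which we assume here as the common alternates.

```shell
# Report which Gemini-related variables this shell exports, without ever
# printing the key material itself.
show_key_source() {
  for var in GEMINI_API_KEY GOOGLE_API_KEY; do
    val="$(printenv "$var")"
    if [ -n "$val" ]; then
      echo "$var: set (${#val} chars)"   # length only -- never echo the key
    else
      echo "$var: not set"
    fi
  done
}
show_key_source
```

Run it in the shell where the CLI fails and in the one where it works; a difference in output is the answer.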

For errors that appear even with the right key in the right place — 403 Forbidden, 401 Unauthorized, or the peculiar Key is valid but not authorized for this model — the problem is usually region, project binding, or model-tier mismatch. Our authentication errors reference page maps every Google API error code to its most common root cause: Fix Gemini CLI Authentication Errors. A single command — gemini debug --auth — will often pinpoint the exact mismatch in under a minute.

Category 3: Configuration Conflicts

Gemini CLI reads configuration from four sources in strict precedence order: command-line flags, environment variables, the project-level .gemini.json or gemini.config.ts, and the global config at ~/.gemini/config.json. When two of these disagree — for example, your project file sets a 1M token context while your global file sets 128k — the CLI does not warn; it silently uses the higher-priority source. The symptom is "my settings are being ignored."
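The precedence rule can be modeled in a few lines — this is a toy illustration of "first non-empty source wins," not the CLI's actual implementation, and the helper name is ours.

```shell
# Toy model of the four-layer precedence: flag > env var > project > global.
resolve_setting() {
  for candidate in "$@"; do
    if [ -n "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  printf 'unset\n'
}
# No flag, no env var; project says 1M, global says 128k:
resolve_setting "" "" "1M" "128k"   # project wins -> prints "1M"
```

This is exactly the "my settings are being ignored" symptom: the global 128k value is not wrong, it is just never reached.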

The first stop when configuration behaves unexpectedly is our deep-dive on where settings actually live: Gemini CLI Configuration Files Explained. That page lists every config location, precedence rule, and supported field — and crucially, the gemini config resolve command that prints the effective merged config so you can see exactly what the CLI is using.

A related trap is environment variable overrides. Several Gemini CLI settings can be overridden by environment variables named in a specific pattern (GEMINI_*), and these variables leak between shells, CI pipelines, and Docker containers in surprising ways. When a config change "works locally but not in CI," the environment-variable layer is almost always the culprit. We catalog the full list of supported env vars and their interaction rules in Gemini CLI Environment Variables.

If you use Model Context Protocol (MCP) servers for tool integration, a third configuration layer enters the picture: the MCP server manifest itself. Misconfigured MCP servers fail silently on start and only surface later when the CLI tries to call a tool that the server never registered. The two most common mistakes are command paths with unresolved ~ or $HOME, and servers that require environment variables that the CLI does not forward by default. Both are covered in MCP Configuration Reference.

Category 4: Runtime Issues

Once installed, authenticated, and configured correctly, the CLI will mostly just work — until it does not. Runtime problems split into three shapes: the CLI hangs with no output, the CLI returns rate-limit errors under what feels like reasonable load, and commands that worked yesterday fail today with no local changes.

The hang problem is the most disorienting because there is no error to search for. You type a command, press enter, and nothing happens. The CLI is waiting on something — a network call, a stuck MCP server, a blocked file read, or a pending confirmation prompt — but has not surfaced that wait state. Our hang-debugging playbook in Why Gemini CLI Is Not Responding walks through the gemini --verbose, gemini --debug, and process-inspection commands that isolate which layer is stuck, and provides a decision tree from observed behavior to root cause.
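When the CLI's own flags give nothing, OS-level process inspection is the fallback. A sketch, with a hypothetical `inspect` helper — PID lookup is left to you (e.g. `pgrep -f gemini`):

```shell
# Inspect a possibly-hung process directly from another terminal.
inspect() {
  # STAT column: D = blocked on I/O, S = sleeping/waiting, R = running.
  ps -o pid,stat,etime,command -p "$1"
  if command -v lsof >/dev/null 2>&1; then
    lsof -p "$1" 2>/dev/null | head -15   # open files, sockets, pipes
  fi
}
inspect $$   # demo on the current shell
```

A process stuck in a read on a pipe to a dead MCP server looks very different in lsof output from one waiting on a network socket, which is exactly the layer distinction the decision tree needs.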

Rate limit errors have two flavors: the legitimate "you exceeded your per-minute quota" (solved by backoff, batching, or upgrading tier) and the misleading "request looks rate-limited but is really a malformed retry loop." If your CLI is retrying a failed call in a tight loop, you will hit rate limits fast even at low real-world throughput. Our guide Handle Gemini CLI Rate Limits includes the exact exponential-backoff pattern Google recommends and the one-line config change that enables it.
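The backoff pattern itself is generic enough to sketch here — this is an illustration of the technique, not the CLI's built-in retry (enable that through its own configuration as described in the linked guide):

```shell
# Generic exponential backoff: retry a command, doubling the delay each time.
retry_with_backoff() {
  # usage: retry_with_backoff MAX_ATTEMPTS command [args...]
  max_attempts="$1"; shift
  attempt=1
  delay=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempt(s)" >&2
      return 1
    fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))       # 1s, 2s, 4s, 8s, ...
    attempt=$((attempt + 1))
  done
}
# example: retry_with_backoff 5 curl -fsS https://example.com/health
```

The contrast with the "malformed retry loop" failure mode is the `sleep`: a tight loop without it converts one transient failure into a burst that genuinely trips the rate limiter.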

For commands that silently broke overnight — typical symptom: the CLI runs but returns wrong or truncated output — the cause is usually an upstream model version change, a quota policy update, or a broken MCP server dependency. The diagnostic sequence is gemini version, gemini debug --env, and gemini debug --mcp in that order. Each step narrows the suspect list. Full walkthrough in Debug Gemini CLI Issues.

Category 5: Advanced Integration

The advanced features — sandbox isolation, custom tool definitions, and MCP-mediated integrations — give Gemini CLI its power but also its most surprising failure modes. These problems do not appear for beginners. They appear when you are productive enough with the CLI to push against its boundaries.

Sandbox mode runs Gemini CLI's file operations inside a restricted namespace for safety. When it works, you do not notice it. When it does not, the symptom is usually "the model says it edited the file but my file did not change." The root cause is almost always a path that was resolved outside the sandbox root. Understand Gemini CLI Sandbox explains the sandbox model, how to inspect the active root, and the --no-sandbox override that is sometimes necessary for legitimate cross-directory work (and sometimes a security risk you should not enable).
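The underlying check — does a path physically resolve inside a given root? — can be reproduced in plain shell. This is a hypothetical helper for diagnosing your own paths, not how the sandbox itself is implemented:

```shell
# Return 0 if the target path resolves (symlinks followed) inside root,
# 1 if it escapes, 2 if either path's directory does not exist.
path_in_root() {
  root="$(cd "$1" 2>/dev/null && pwd -P)" || return 2
  dir="$(cd "$(dirname "$2")" 2>/dev/null && pwd -P)" || return 2
  case "$dir/$(basename "$2")" in
    "$root"/*) return 0 ;;   # inside the sandbox root
    *)         return 1 ;;   # resolved outside -> edits silently go nowhere
  esac
}
```

Note the `pwd -P`: a symlink under the sandbox root that points outside it is exactly the kind of path that "the model says it edited" while your real file stays untouched.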

Custom tools let you extend Gemini CLI with project-specific functions — database queries, deployment triggers, Slack notifications. The gap between a custom tool that the CLI registers and one the CLI actually invokes is where most debugging time goes. The tool schema must match exactly, the execution permissions must be correct, and the CLI must know when the model is allowed to call it. Define Custom Gemini CLI Tools covers schema validation, per-tool permission gates, and the safe default for tools that write to disk.

MCP integration is the deepest rabbit hole. A Model Context Protocol server is a separate process that Gemini CLI speaks to over stdio. Problems appear at three interfaces: the server fails to start (hangs on handshake), the server starts but does not register tools (silent), or the server registers tools but returns errors when called. Each failure mode has a distinct diagnostic signature. See Gemini CLI MCP Integration for the full interface contract, the mcp-inspector companion tool we recommend, and a reference list of battle-tested MCP servers that we know work with the current Gemini CLI release.

When to Escalate

If you have worked through the diagnostic steps for your category and are still stuck, you have a few paths. First, run gemini debug --all > debug.txt and include the output when you file a bug — the Gemini CLI team triages reports much faster when this is attached. Second, search the official GitHub issue tracker before opening a new one; a surprising number of edge cases have existing issues with workarounds buried in the comments. Third, the community Discord has active channels by category (installation, MCP, advanced) where someone has usually seen your exact error. For infrastructure-related errors (quotas, billing, project access), Google Cloud support is the only path — the CLI team cannot override platform-level limits.

Conclusion

Gemini CLI breaks in predictable places for a predictable reason: it sits at the intersection of a local shell environment, a remote API, and an open plugin ecosystem (MCP). Every one of those interfaces has its own failure modes, and the CLI cannot always surface which interface is misbehaving. The strategy that works is stage-by-stage elimination — confirm install, confirm auth, confirm config, then run — rather than reading error messages literally.

Bookmark this guide. The fastest way to unblock yourself next time is to jump straight to the category, run the first diagnostic command listed, and follow the deep-link to the fix. We update these pages as the CLI changes and as new failure modes emerge; if you hit an issue not covered, the feedback form on any of the linked pages goes directly to the authors.

Zhihao Mu · Full-stack Developer

Developer and technical writer passionate about AI-powered development tools. Building geminicli.one to help developers unlock the full potential of Gemini CLI.

