How to Use Gemini MCP Server with Claude Code (2026)
Integrate Google Gemini models into Claude Code via MCP
Claude Code is Anthropic's official CLI tool for AI-assisted development. It is powerful on its own, but with the Model Context Protocol (MCP), you can extend it with external tools and data sources -- including Google's Gemini models.
By connecting a Gemini MCP server to Claude Code, you can use Gemini's strengths (long context windows, free API access, multimodal capabilities) alongside Claude's coding abilities. This guide shows you how to set it up step by step.
What Is MCP?
The Model Context Protocol (MCP) is an open standard created by Anthropic that lets AI assistants connect to external tools and data sources. Think of it as a USB port for AI -- any MCP-compatible tool plugs into any MCP-compatible client.
Key concepts:
- MCP Server -- A program that exposes tools and resources (like a Gemini API wrapper).
- MCP Client -- An AI assistant that uses those tools (like Claude Code).
- Tools -- Functions the AI can call (like "generate text with Gemini").
- Resources -- Data the AI can read (like files, databases, or API responses).
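On the wire, client and server speak JSON-RPC 2.0. As a simplified sketch (real payloads carry more fields), asking a server to run a tool looks roughly like this -- here calling the gemini_generate tool built later in this guide:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "gemini_generate",
    "arguments": { "prompt": "Explain the Observer pattern" }
  }
}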
Why Use Gemini with Claude Code?
There are practical reasons to connect Gemini to your Claude Code workflow:
| Benefit | Details |
|---|---|
| Free API access | Gemini offers 1M+ free tokens/day, extending your Claude usage |
| Long context | Gemini 2.5 Pro handles up to 1M tokens for massive codebases |
| Multimodal | Send images and screenshots to Gemini for analysis |
| Second opinion | Compare Claude and Gemini outputs for critical decisions |
| Cost optimization | Route simple tasks to free Gemini, complex tasks to Claude |
Prerequisites
Before starting, make sure you have:
- Claude Code installed -- Install via npm install -g @anthropic-ai/claude-code or follow the official docs.
- Node.js 18+ -- Required for running MCP servers.
- Google AI Studio API key -- Get one free at aistudio.google.com.
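A quick sanity check that the toolchain is in place (exact version output will vary):

node --version    # should print v18.x or later
claude --version  # confirms Claude Code is on your PATH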
Method 1: Using the Official Gemini MCP Server
A ready-made Gemini MCP server can be run directly with npx. This is the simplest approach.
Step 1: Configure Claude Code
The quickest way is to register the server from your terminal with the claude mcp add command:
claude mcp add gemini --env GOOGLE_API_KEY=AIza-your-free-key -- npx -y @anthropic-ai/gemini-mcp-server
Or manually edit Claude Code's MCP configuration -- ~/.claude.json for user scope, or the project-level .mcp.json:
{
"mcpServers": {
"gemini": {
"command": "npx",
"args": ["-y", "@anthropic-ai/gemini-mcp-server"],
"env": {
"GOOGLE_API_KEY": "AIza-your-free-key"
}
}
}
}
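If you would rather not hard-code the key, Claude Code's .mcp.json supports ${VAR} expansion, so the entry can reference an environment variable exported in your shell instead:

"env": {
  "GOOGLE_API_KEY": "${GOOGLE_API_KEY}"
}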
Step 2: Verify the Connection
Restart Claude Code and check that the Gemini tools are available:
claude
# Then type: /mcp
You should see the Gemini server listed with its available tools.
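You can also check registered servers from the shell without starting a session:

claude mcp list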
Step 3: Use Gemini Tools in Claude Code
Once connected, you can ask Claude Code to use Gemini directly:
Use the Gemini tool to analyze this large codebase summary and identify architectural issues.
Claude Code will automatically route the request to the Gemini MCP server when appropriate.
Method 2: Building a Custom Gemini MCP Server
For more control, you can build your own MCP server that wraps the Gemini API. This lets you add custom tools tailored to your workflow.
Step 1: Initialize the Project
mkdir gemini-mcp-server
cd gemini-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk @google/generative-ai zod
Step 2: Create the Server
Create index.js:
#!/usr/bin/env node
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { GoogleGenerativeAI } from "@google/generative-ai";
import { z } from "zod";
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY);
const server = new McpServer({
name: "gemini-mcp",
version: "1.0.0",
});
// Tool: Generate text with Gemini
server.tool(
"gemini_generate",
"Generate text using Google Gemini",
{
prompt: z.string().describe("The prompt to send to Gemini"),
model: z.string().optional().describe("Model name (default: gemini-2.5-pro)"),
},
async ({ prompt, model }) => {
const geminiModel = genAI.getGenerativeModel({
model: model || "gemini-2.5-pro",
});
const result = await geminiModel.generateContent(prompt);
return {
content: [{ type: "text", text: result.response.text() }],
};
}
);
// Tool: Analyze an image with Gemini
server.tool(
"gemini_analyze_image",
"Analyze an image using Gemini Vision",
{
imagePath: z.string().describe("Path to the image file"),
question: z.string().describe("What to analyze about the image"),
},
async ({ imagePath, question }) => {
const fs = await import("fs");
const imageData = fs.readFileSync(imagePath);
const base64Image = imageData.toString("base64");
const mimeType = imagePath.endsWith(".png") ? "image/png" : "image/jpeg";
const geminiModel = genAI.getGenerativeModel({ model: "gemini-2.5-pro" });
const result = await geminiModel.generateContent([
question,
{ inlineData: { data: base64Image, mimeType } },
]);
return {
content: [{ type: "text", text: result.response.text() }],
};
}
);
// Tool: Summarize a large file with Gemini's long context
server.tool(
"gemini_summarize_file",
"Summarize a large file using Gemini's 1M token context window",
{
filePath: z.string().describe("Path to the file to summarize"),
instruction: z.string().optional().describe("Specific summarization instruction"),
},
async ({ filePath, instruction }) => {
const fs = await import("fs");
const content = fs.readFileSync(filePath, "utf-8");
const geminiModel = genAI.getGenerativeModel({ model: "gemini-2.5-pro" });
const prompt = instruction
? `${instruction}\n\nFile content:\n${content}`
: `Summarize this file, highlighting key components, functions, and architecture:\n\n${content}`;
const result = await geminiModel.generateContent(prompt);
return {
content: [{ type: "text", text: result.response.text() }],
};
}
);
// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);
Update package.json:
{
"type": "module",
"main": "index.js",
"bin": {
"gemini-mcp": "./index.js"
}
}
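Before wiring it into Claude Code, you can exercise the server standalone with the MCP Inspector, which opens a browser UI for listing and invoking tools:

GOOGLE_API_KEY=AIza-your-free-key npx @modelcontextprotocol/inspector node ./index.js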
Step 3: Register with Claude Code
Add your custom server to Claude Code's MCP configuration:
{
"mcpServers": {
"gemini": {
"command": "node",
"args": ["/path/to/gemini-mcp-server/index.js"],
"env": {
"GOOGLE_API_KEY": "AIza-your-free-key"
}
}
}
}
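Equivalently, you can register it from the terminal instead of editing the JSON by hand:

claude mcp add gemini --env GOOGLE_API_KEY=AIza-your-free-key -- node /path/to/gemini-mcp-server/index.js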
Step 4: Test the Integration
Start Claude Code and try these commands:
Use the gemini_generate tool to explain the Observer pattern with a TypeScript example.
Use the gemini_summarize_file tool to summarize ./src/index.ts
Use the gemini_analyze_image tool to describe the UI in ./screenshot.png
Practical Use Cases
1. Code Review with Two Models
Ask Claude Code to review your code, then cross-check with Gemini:
Review the file ./src/auth.ts for security issues.
Then use the gemini_generate tool to also review the same code and compare the findings.
2. Large Codebase Analysis
Gemini's 1M token context window can process entire codebases that exceed Claude's context limits:
Use the gemini_summarize_file tool to analyze the concatenated output
of all files in ./src/ and identify the overall architecture.
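Since gemini_summarize_file reads a single path, build that concatenated file first -- for example (codebase.txt is just an illustrative name):

find ./src -type f -name "*.ts" -exec cat {} + > codebase.txt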
3. Screenshot-to-Code
Use Gemini's vision capabilities on UI screenshots:
Use the gemini_analyze_image tool to describe the layout in ./mockup.png,
then write the React component to match it.
Troubleshooting
| Issue | Solution |
|---|---|
| Server not appearing in /mcp | Restart Claude Code. Check the config JSON syntax. |
| "API key not valid" | Verify your key at aistudio.google.com. Ensure the env variable is set. |
| Timeout errors | Gemini free tier can be slow during peak hours. Add retry logic (sketch below the table). |
| "Model not found" | Check the model name. Use gemini-2.5-pro or gemini-2.0-flash. |
| Permission denied | Ensure Node.js has access to the file paths you are referencing. |
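For the timeout row, here is a minimal retry helper with exponential backoff -- a sketch to drop into index.js, not part of the server code above:

// Retry a Gemini call with exponential backoff (1s, 2s, 4s, ...).
// Illustrative only -- tune attempts and delays for your workload.
async function withRetry(fn, attempts = 3, baseDelayMs = 1000) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err;
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
}

// Usage inside a tool handler:
// const result = await withRetry(() => geminiModel.generateContent(prompt));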
Tips for Effective Use
- Route by complexity. Use Gemini Flash for simple tasks (free, fast) and Claude for complex reasoning.
- Use structured output. Ask Gemini to return JSON for easier parsing in your MCP tools (see the sketch after this list).
- Cache results. Store Gemini responses to avoid redundant API calls.
- Combine tools. Chain Gemini analysis with Claude's code generation for best results.
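For the structured-output tip, the @google/generative-ai SDK can be asked for JSON directly via generationConfig -- a minimal sketch:

// Request JSON instead of prose so the tool result is machine-parseable.
const jsonModel = genAI.getGenerativeModel({
  model: "gemini-2.0-flash",
  generationConfig: { responseMimeType: "application/json" },
});
const result = await jsonModel.generateContent(
  "List three architectural risks in this design as a JSON array of strings."
);
const risks = JSON.parse(result.response.text());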
Wrapping Up
Connecting Gemini to Claude Code via MCP gives you the best of both models: Claude's superior coding abilities and Gemini's free API access, long context window, and multimodal capabilities. The setup takes under 10 minutes, and the workflow improvements are immediate.
If you are building AI-powered applications that need media generation -- images, video, avatars, or voice -- alongside your LLM workflows, try Hypereal AI free -- 35 credits, no credit card required. It provides API access to 50+ media models that pair well with any development environment.
