How to Use Google Gemini 3 Pro CLI (2026)
Complete setup and usage guide for the Gemini CLI tool
Google's Gemini CLI has emerged as one of the most powerful free AI coding tools available. With the release of Gemini 3 Pro, the CLI now offers a 1 million token context window, strong coding capabilities, and a generous free tier that makes it accessible to every developer. This guide walks you through installation, configuration, and practical usage.
What Is Gemini CLI?
Gemini CLI is Google's official command-line interface for interacting with Gemini models directly from your terminal. Think of it as Google's answer to OpenAI's Codex CLI and Anthropic's Claude Code. It lets you:
- Chat with Gemini models from the terminal
- Run agentic coding tasks across your codebase
- Execute multi-step workflows with tool use
- Process files, images, and code with massive context windows
Gemini 3 Pro vs Previous Versions
| Feature | Gemini 2.5 Pro | Gemini 3 Pro | Gemini 3 Flash |
|---|---|---|---|
| Context window | 1M tokens | 1M tokens | 1M tokens |
| Coding quality | Very Good | Excellent | Good |
| Reasoning | Strong | State of the art | Good |
| Speed | Medium | Medium | Fast |
| Free tier | Yes | Yes | Yes |
| Multimodal | Yes | Yes | Yes |
| Agentic tools | Yes | Yes | Yes |
Gemini 3 Pro represents a significant quality improvement over 2.5 Pro, particularly for complex multi-step coding tasks and reasoning.
Step 1: Install Gemini CLI
Using npm (Recommended)
# Install globally via npm
npm install -g @google/gemini-cli
# Verify installation
gemini --version
Using Homebrew (macOS)
brew install gemini-cli
gemini --version
Using the Standalone Installer
# macOS/Linux
curl -fsSL https://cli.gemini.google.com/install.sh | sh
# Windows (PowerShell)
irm https://cli.gemini.google.com/install.ps1 | iex
Step 2: Authenticate with Google
Gemini CLI requires authentication with your Google account. There are two methods:
Method 1: Browser-Based Auth (Easiest)
# Start the auth flow
gemini auth login
# This opens your browser to sign in with Google
# After signing in, the CLI stores your credentials locally
Method 2: API Key Authentication
If you prefer using an API key directly:
- Go to Google AI Studio (aistudio.google.com)
- Click "Create API Key"
- Copy the key
# Set the API key as an environment variable
export GEMINI_API_KEY="your-api-key-here"
# Or pass it inline
gemini --api-key "your-api-key-here" "Hello, Gemini"
For persistent configuration, add the key to your shell profile:
# Add to ~/.bashrc, ~/.zshrc, or ~/.profile
echo 'export GEMINI_API_KEY="your-api-key-here"' >> ~/.zshrc
source ~/.zshrc
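On Windows (to match the PowerShell installer above), setx persists the variable for future terminal sessions; this is standard Windows behavior rather than anything specific to Gemini CLI:
# Windows: persist the key for new terminal sessions
setx GEMINI_API_KEY "your-api-key-here"
# PowerShell: also set it for the current session
$env:GEMINI_API_KEY = "your-api-key-here"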
Step 3: Basic Usage
Interactive Chat Mode
# Start an interactive session
gemini
# You will see a prompt where you can type questions
> How do I implement a binary search in Python?
Single-Shot Commands
# Ask a question directly
gemini "Explain the difference between TCP and UDP"
# Process a file
gemini "Review this code for bugs" -f src/server.py
# Process multiple files
gemini "Find security vulnerabilities" -f src/auth.py -f src/routes.py
Agentic Mode (Code Editing)
Agentic mode lets Gemini read, edit, and create files in your project:
# Navigate to your project directory first
cd ~/projects/my-app
# Run an agentic task
gemini agent "Add input validation to all API endpoints in src/routes/"
# Or use the shorter syntax
gemini -a "Refactor the database queries to use connection pooling"
In agentic mode, Gemini will:
- Analyze your project structure
- Read relevant files
- Propose changes
- Apply edits after your confirmation
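Because the edits land directly in your working tree, it is worth running agentic tasks from a clean git checkout so you can audit them afterwards. A minimal sketch, assuming the project is under git:
# Review and selectively keep the agent's edits with ordinary git tools
git status                # see which files were touched
git diff                  # inspect every change before committing
git add -p                # stage the hunks you want to keep
git restore src/routes/   # discard edits you do not want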
Step 4: Configuration
Global Configuration File
Create a configuration file at ~/.gemini/config.json:
{
"model": "gemini-3-pro",
"temperature": 0.7,
"maxOutputTokens": 8192,
"safetySettings": {
"harassment": "BLOCK_NONE",
"hateSpeech": "BLOCK_NONE",
"sexuallyExplicit": "BLOCK_NONE",
"dangerousContent": "BLOCK_NONE"
},
"systemInstruction": "You are a senior software engineer. Write clean, well-documented code with proper error handling."
}
Project-Level Configuration
Create a .gemini file in your project root for project-specific settings:
{
"model": "gemini-3-pro",
"context": {
"include": ["src/**/*.ts", "src/**/*.tsx", "package.json", "tsconfig.json"],
"exclude": ["node_modules/**", "dist/**", ".env"]
},
"systemInstruction": "This is a Next.js 15 project using TypeScript, Tailwind CSS, and Prisma ORM. Follow the existing code patterns."
}
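If the .gemini file contains no secrets, committing it lets the whole team share the same context rules and system instruction; this is a suggestion, not a requirement of the CLI:
# Share the project-level settings with your team
git add .gemini
git commit -m "chore: add Gemini CLI project configuration"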
Available Models
# List all available models
gemini models list
# Use a specific model
gemini --model gemini-3-pro "Your prompt here"
gemini --model gemini-3-flash "Your prompt here"
gemini --model gemini-3-pro-vision "Describe this image" -f screenshot.png
Step 5: Advanced Usage
Piping Input
# Pipe file contents
cat error.log | gemini "What caused this error and how do I fix it?"
# Pipe command output
git diff HEAD~3 | gemini "Write a detailed changelog for these changes"
# Pipe test results
npm test 2>&1 | gemini "Analyze these test failures and suggest fixes"
Working with Images
Gemini 3 Pro is multimodal and can process images:
# Analyze a screenshot
gemini "What UI issues do you see in this screenshot?" -f ui-screenshot.png
# Convert a design to code
gemini "Convert this design to a React component using Tailwind CSS" -f design.png
# Analyze an architecture diagram
gemini "Explain this system architecture" -f architecture-diagram.png
Custom System Prompts
# Use a system prompt for specialized behavior
gemini --system "You are a security auditor. Focus exclusively on identifying OWASP Top 10 vulnerabilities." \
"Review this authentication code" -f src/auth/login.ts
Structured Output
# Request JSON output
gemini --format json "List the top 5 Node.js ORMs with their pros and cons"
# Request markdown table output
gemini --format markdown "Compare React, Vue, and Svelte frameworks"
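JSON output pairs naturally with jq for scripting. The field name below is an assumption about the response shape, so adjust it to whatever the model actually returns:
# Extract one field from the JSON response (field name is an assumption)
gemini --format json "List the top 5 Node.js ORMs with their pros and cons" | jq '.[].name'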
Step 6: Integration with Development Workflows
Git Commit Messages
# Generate a commit message from staged changes
git diff --cached | gemini "Write a concise conventional commit message for these changes"
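You can wire this into a prepare-commit-msg hook so a message is drafted whenever you run git commit without one. A sketch, assuming gemini is on your PATH:
# .git/hooks/prepare-commit-msg (make it executable with chmod +x)
#!/bin/sh
# $1 is the commit message file; $2 is the message source (empty for a plain git commit)
if [ -z "$2" ]; then
  git diff --cached | gemini "Write a concise conventional commit message for these changes" > "$1"
fi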
Code Review
# Review a pull request diff
gh pr diff 42 | gemini "Review this PR for bugs, performance issues, and code quality"
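To sweep every open pull request in one pass, gh can list the PR numbers and feed each diff through the same prompt. A sketch; note that each diff counts against your rate limits:
# Review every open PR in the current repository
gh pr list --state open --json number --jq '.[].number' | while read -r pr; do
  gh pr diff "$pr" | gemini "Review this PR for bugs, performance issues, and code quality"
done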
Test Generation
# Generate tests for a file
gemini -a "Write comprehensive unit tests for src/utils/validation.ts using Vitest"
Documentation Generation
# Generate API documentation
gemini "Generate OpenAPI 3.1 documentation for all endpoints" -f src/routes/*.ts
Free Tier Limits and Pricing
Gemini CLI's free tier is remarkably generous. In the table below, RPM means requests per minute and TPM means tokens per minute:
| Feature | Free Tier | Paid (Pay-as-you-go) |
|---|---|---|
| Gemini 3 Flash | 30 RPM, 1M TPM | Higher limits |
| Gemini 3 Pro | 5 RPM, 1M TPM | Higher limits |
| Context window | 1M tokens | 1M tokens |
| Daily token limit | ~1.5M tokens | Unlimited |
| Cost | $0 | $1.25-5/1M tokens |
For most individual developers, the free tier is sufficient for daily use. You only need to pay if you are building production applications with high request volumes.
Troubleshooting Common Issues
"Authentication Failed" Error
# Clear cached credentials and re-authenticate
gemini auth logout
gemini auth login
"Model Not Found" Error
# Check available models in your region
gemini models list
# Some models may not be available in all regions
# Try using the latest stable model
gemini --model gemini-3-pro-latest "Your prompt"
Slow Responses
# Switch to the Flash model for faster responses
gemini --model gemini-3-flash "Your prompt"
# Or reduce the max output tokens
gemini --max-tokens 2048 "Your prompt"
Rate Limit Errors
# Check your current usage
gemini usage
# If rate limited, wait a few seconds or switch to a different model
# Flash has higher rate limits than Pro
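If you hit limits inside a script, a simple retry with increasing waits usually rides them out. This sketch assumes the CLI exits with a non-zero status on a rate-limit error:
# Retry up to three times, waiting longer between attempts
for attempt in 1 2 3; do
  gemini --model gemini-3-flash "Your prompt" && break
  sleep $((attempt * 15))
done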
Gemini CLI vs Competitors
| Feature | Gemini CLI | Claude Code | Codex CLI | Aider |
|---|---|---|---|---|
| Free tier | Generous | No (pay per token) | Limited | BYOK |
| Best model | Gemini 3 Pro | Claude Opus 4 | Codex | Any |
| Context window | 1M tokens | 200K tokens | 192K tokens | Model-dependent |
| Agentic mode | Yes | Yes | Yes | Yes |
| Multimodal | Yes | Yes | No | Model-dependent |
| Offline mode | No | No | No | Yes (local models) |
| Open source | No | No | No | Yes |
Wrapping Up
Gemini CLI with Gemini 3 Pro is an excellent free tool for AI-assisted development. The massive 1M token context window, strong coding capabilities, and generous free tier make it a compelling choice for developers who do not want to pay for AI coding tools. Install it, configure it for your project, and start using it for code review, generation, and refactoring.
If your projects involve AI-generated media like videos, images, or avatars, Hypereal AI offers API access to state-of-the-art generation models. Start with 35 free credits and integrate AI media generation directly into your applications.