How to Use Claude Code with OpenRouter (2026)
Route Claude Code through OpenRouter for flexible model access
Claude Code is Anthropic's powerful CLI coding agent, and OpenRouter is a unified API gateway that routes requests to dozens of AI model providers. By combining them, you can use Claude Code's agentic capabilities while gaining access to a broader range of models and more flexible billing.
This guide shows you exactly how to set up Claude Code with OpenRouter, configure different models, manage costs, and troubleshoot common issues.
Why Use OpenRouter with Claude Code?
There are several practical reasons to route Claude Code through OpenRouter:
| Benefit | Description |
|---|---|
| Multi-model access | Switch between Claude, GPT-5, Gemini, DeepSeek, and more |
| Pay-as-you-go billing | No monthly commitment, pay per token |
| Rate limit pooling | OpenRouter manages rate limits across providers |
| Fallback routing | Automatic failover if one provider is down |
| Cost tracking | Detailed usage dashboards and spending controls |
| Single API key | One key for all providers |
Prerequisites
You need:
- Claude Code installed (npm install -g @anthropic-ai/claude-code)
- An OpenRouter account at openrouter.ai
- Credits added to your OpenRouter account (minimum $5 recommended to start)
Step 1: Get Your OpenRouter API Key
- Sign in to openrouter.ai
- Go to Keys in the dashboard
- Click Create Key
- Name it something descriptive like "claude-code-cli"
- Copy the key (it starts with sk-or-)
Add credits to your account under Credits. OpenRouter charges per token based on the model you use.
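Before wiring the key into Claude Code, you can sanity-check it with a direct call to OpenRouter's chat completions endpoint; the model ID below is just an example:
# Minimal request to confirm the key is active and has credits
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer sk-or-your-openrouter-key-here" \
  -H "Content-Type: application/json" \
  -d '{"model": "anthropic/claude-sonnet-4", "messages": [{"role": "user", "content": "Reply with the word ok."}]}'
A JSON response with a choices array means the key works; an error object will name the problem (no credits, invalid key, unknown model).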
Step 2: Configure Claude Code
Claude Code supports custom API endpoints through environment variables. Set these in your shell profile:
Option A: Environment Variables (Recommended)
Add to your ~/.zshrc, ~/.bashrc, or ~/.bash_profile:
# OpenRouter configuration for Claude Code
export ANTHROPIC_BASE_URL="https://openrouter.ai/api/v1"
export ANTHROPIC_API_KEY="sk-or-your-openrouter-key-here"
Reload your shell:
source ~/.zshrc # or source ~/.bashrc
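To confirm the variables actually made it into your shell, print them (only the key prefix, to avoid leaking the full key into your scrollback):
# Both values should print; empty output means the export lines were not loaded
echo "$ANTHROPIC_BASE_URL"
echo "${ANTHROPIC_API_KEY:0:8}..."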
Option B: Per-Session Configuration
If you only want to use OpenRouter temporarily:
ANTHROPIC_BASE_URL="https://openrouter.ai/api/v1" \
ANTHROPIC_API_KEY="sk-or-your-key" \
claude
Option C: Using a .env File
Create a .env file in your project root:
ANTHROPIC_BASE_URL=https://openrouter.ai/api/v1
ANTHROPIC_API_KEY=sk-or-your-openrouter-key-here
Note: Claude Code does not automatically load .env files. You need a tool like direnv or source it manually:
# Using direnv (auto-loads .env when you enter the directory)
brew install direnv # macOS
echo 'eval "$(direnv hook zsh)"' >> ~/.zshrc
# Or source manually
source .env && claude
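If you go the direnv route, a one-line .envrc in the project root keeps the key scoped to that directory; the dotenv directive comes from direnv's standard library and reads the .env file shown above:
# .envrc in the project root; run `direnv allow` once to approve it
dotenv    # exports ANTHROPIC_BASE_URL and ANTHROPIC_API_KEY from .env
The variables are then exported whenever you cd into the project and unloaded when you leave it.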
Step 3: Verify the Connection
Start Claude Code and check that it connects through OpenRouter:
claude
You should see Claude Code initialize normally. Run a simple test:
You: What model are you? What provider is handling this request?
The model's answer is a rough sanity check, but self-reports are not always reliable; the request showing up in your OpenRouter activity log is the definitive confirmation.
You can also verify by checking the /cost command:
/cost
This shows the session's token usage and estimated cost; for authoritative billing figures, check the OpenRouter activity dashboard, which applies each model's OpenRouter pricing.
Step 4: Switch Models
One of the biggest advantages of OpenRouter is access to multiple models. You can specify which model to use when launching Claude Code:
# Use Claude Sonnet 4 (default for Claude Code)
claude --model anthropic/claude-sonnet-4
# Use Claude Opus 4 for complex tasks
claude --model anthropic/claude-opus-4
# Use GPT-5 through OpenRouter
claude --model openai/gpt-5
# Use Gemini 2.5 Pro
claude --model google/gemini-2.5-pro
# Use DeepSeek V3 (much cheaper)
claude --model deepseek/deepseek-chat-v3
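If you would rather not pass --model on every launch, Claude Code also reads a default model from the ANTHROPIC_MODEL environment variable (and, per its settings documentation, ANTHROPIC_SMALL_FAST_MODEL for lightweight background requests), so you can pin defaults next to the OpenRouter settings:
# Pin a default model for every session; --model still overrides it per launch
export ANTHROPIC_MODEL="anthropic/claude-sonnet-4"
# Optional: route Claude Code's small background requests to a cheaper model
export ANTHROPIC_SMALL_FAST_MODEL="deepseek/deepseek-chat-v3"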
Available Models and Pricing
Here are the most popular models available through OpenRouter for coding tasks:
| Model | Input Cost | Output Cost | Context | Best For |
|---|---|---|---|---|
| claude-opus-4 | $15/M | $75/M | 200K | Complex reasoning |
| claude-sonnet-4 | $3/M | $15/M | 200K | Everyday coding |
| gpt-5 | $5/M | $15/M | 200K | Code generation |
| gpt-4o | $2.5/M | $10/M | 128K | Fast, affordable |
| gemini-2.5-pro | $1.25/M | $5/M | 1M | Long context |
| deepseek-chat-v3 | $0.27/M | $1.10/M | 128K | Budget coding |
Prices are approximate and may vary. Check openrouter.ai/models for current pricing.
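As a rough worked example using the approximate prices above, a session that sends 200K input tokens and generates 50K output tokens costs about $1.35 on claude-sonnet-4 and roughly $0.11 on deepseek-chat-v3; the quick check below just multiplies tokens (in millions) by the per-million rates:
# Back-of-the-envelope cost check using the approximate table prices (USD per million tokens)
echo "0.2*3 + 0.05*15" | bc -l       # claude-sonnet-4: ≈ $1.35
echo "0.2*0.27 + 0.05*1.10" | bc -l  # deepseek-chat-v3: ≈ $0.11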
Step 5: Set Up Model Aliases
To make switching easier, create shell aliases:
# Add to ~/.zshrc or ~/.bashrc
alias claude-opus='claude --model anthropic/claude-opus-4'
alias claude-sonnet='claude --model anthropic/claude-sonnet-4'
alias claude-gpt5='claude --model openai/gpt-5'
alias claude-cheap='claude --model deepseek/deepseek-chat-v3'
alias claude-long='claude --model google/gemini-2.5-pro'
Now you can quickly pick the right model for the job:
claude-cheap "explain what this function does" # Budget task
claude-opus "architect a microservices migration" # Complex task
claude-long "analyze this entire codebase" # Long context task
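If you would rather not maintain one alias per model, a small wrapper function (the name ccm is just a placeholder) covers any OpenRouter model ID:
# Add to ~/.zshrc or ~/.bashrc
# Usage: ccm deepseek/deepseek-chat-v3 "explain what this function does"
ccm() {
  local model="$1"
  shift
  claude --model "$model" "$@"
}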
Step 6: Configure Cost Controls
OpenRouter provides spending controls to prevent unexpected bills.
Set a Spending Limit
In the OpenRouter dashboard:
- Go to Settings > Limits
- Set a monthly spending limit (e.g., $50)
- Optionally set per-key limits
Monitor Usage
Check your usage in real time:
# View current session cost in Claude Code
/cost
# Check OpenRouter dashboard for historical usage
open https://openrouter.ai/activity
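If you prefer checking your balance from the terminal, OpenRouter also exposes a credits endpoint; the path below (/api/v1/credits) is an assumption, so confirm it against their API reference:
# Assumed endpoint: returns total credits purchased and used for your account
# (confirm the exact path and response shape at openrouter.ai/docs)
curl -s https://openrouter.ai/api/v1/credits \
  -H "Authorization: Bearer $ANTHROPIC_API_KEY"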
Cost Optimization Tips
| Strategy | Savings |
|---|---|
| Use DeepSeek for simple tasks | 10-50x cheaper than Claude Opus |
| Use Gemini for long-context tasks | Cheaper per token for large inputs |
| Use /compact in Claude Code | Reduces context size mid-session |
| Break large tasks into focused sessions | Avoids sending full context repeatedly |
| Use print mode for one-shot tasks | claude -p "quick question" uses fewer tokens |
Advanced Configuration
Fallback Models
OpenRouter supports automatic fallback: if your primary model is rate-limited or down, the request can be retried against a backup. Fallbacks are specified per request through OpenRouter's models parameter (a prioritized list of model IDs) rather than through an environment variable, and Claude Code does not expose that parameter directly.
Note: Fallback behavior depends on OpenRouter's routing settings. Check their documentation for the latest approach; a raw-API sketch follows.
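For reference, this is roughly what a fallback-enabled request looks like when calling OpenRouter directly, with the models array tried in the order listed:
# Raw OpenRouter request with fallback models
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer sk-or-your-key" \
  -H "Content-Type: application/json" \
  -d '{
    "models": ["anthropic/claude-sonnet-4", "openai/gpt-4o"],
    "messages": [{"role": "user", "content": "Hello"}]
  }'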
Custom Headers
OpenRouter attributes traffic using the HTTP-Referer and X-Title request headers. Claude Code can attach extra headers through its ANTHROPIC_CUSTOM_HEADERS environment variable ("Name: Value" format), so app-title tracking looks like this:
# Attach an OpenRouter attribution header to every Claude Code request
export ANTHROPIC_CUSTOM_HEADERS="X-Title: My Claude Code Setup"
Using with a Proxy
If you are behind a corporate proxy:
export HTTPS_PROXY="http://proxy.company.com:8080"
export ANTHROPIC_BASE_URL="https://openrouter.ai/api/v1"
export ANTHROPIC_API_KEY="sk-or-your-key"
claude
Practical Workflows
Workflow 1: Cost-Efficient Development
Use a cheap model for exploration and a premium model for implementation:
# Explore and understand the codebase (cheap)
claude-cheap "explain the architecture of this project"
# Implement a feature (premium)
claude-sonnet "implement user authentication with JWT refresh tokens"
# Review and refine (medium)
claude-gpt5 "review the authentication implementation for security issues"
Workflow 2: Long-Context Analysis
When you need to analyze a large codebase:
# Use Gemini's 1M token context
claude-long "analyze all the files in src/ and create a dependency graph"
Workflow 3: Automated CI/CD
Use Claude Code with OpenRouter in your CI pipeline:
# .github/workflows/code-review.yml
name: AI Code Review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install -g @anthropic-ai/claude-code
      - run: |
          gh pr diff ${{ github.event.pull_request.number }} | \
            ANTHROPIC_BASE_URL="https://openrouter.ai/api/v1" \
            ANTHROPIC_API_KEY="${{ secrets.OPENROUTER_KEY }}" \
            claude -p --model deepseek/deepseek-chat-v3 \
              "Review this diff for bugs and security issues. Be concise."
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
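To surface the review on the pull request itself, you can capture Claude's output and post it with the GitHub CLI; as a sketch, the final command in the run block could become:
# Capture the review and post it as a PR comment (uses the GH_TOKEN already set above)
review=$(gh pr diff "${{ github.event.pull_request.number }}" | \
  ANTHROPIC_BASE_URL="https://openrouter.ai/api/v1" \
  ANTHROPIC_API_KEY="${{ secrets.OPENROUTER_KEY }}" \
  claude -p --model deepseek/deepseek-chat-v3 \
    "Review this diff for bugs and security issues. Be concise.")
gh pr comment "${{ github.event.pull_request.number }}" --body "$review"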
Troubleshooting
| Issue | Solution |
|---|---|
| "Invalid API key" | Verify your OpenRouter key starts with sk-or- and has credits |
| "Model not found" | Check the exact model ID at openrouter.ai/models |
| Slow responses | Try a different model or check OpenRouter status page |
| "Rate limited" | Wait or switch to a different model/provider |
| "Context too long" | Use /compact or switch to a model with a larger context window |
| "Connection refused" | Check ANTHROPIC_BASE_URL is correct and you have internet access |
| Unexpected costs | Set spending limits in the OpenRouter dashboard |
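When something fails, a quick way to narrow it down is to confirm the environment Claude Code sees and then hit OpenRouter directly, taking the CLI out of the loop:
# 1. Confirm the gateway settings exported in your shell
echo "$ANTHROPIC_BASE_URL"
echo "${ANTHROPIC_API_KEY:0:8}..."   # key prefix only
# 2. Hit the endpoint directly and print only the HTTP status:
#    200 means key and network are fine, 401 points at the key, 404 at the base URL
curl -s -o /dev/null -w "%{http_code}\n" https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $ANTHROPIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "anthropic/claude-sonnet-4", "messages": [{"role": "user", "content": "ping"}]}'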
Reverting to Direct Anthropic
To switch back to using Anthropic directly:
# Remove OpenRouter config
unset ANTHROPIC_BASE_URL
export ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
claude
Conclusion
Using Claude Code with OpenRouter gives you the best of both worlds: Anthropic's powerful coding agent with the flexibility and cost optimization of a multi-provider gateway. You can start with budget models for exploration, switch to premium models for complex tasks, and track every dollar through a single dashboard.
If your development work involves AI-generated media like images, video, or audio, Hypereal AI offers production-ready media generation APIs at affordable prices. Pair them with your Claude Code + OpenRouter setup to build complete AI-powered applications.