Best Open Source Tools to Monitor Claude Code Usage (2026)
Track token consumption, costs, and usage patterns
Claude Code is a powerful AI coding assistant, but its token-based pricing can lead to surprising bills if you are not tracking usage carefully. A single complex agentic task can consume hundreds of thousands of tokens, and without monitoring, costs can spiral quickly.
This guide covers the best open-source tools for monitoring Claude Code usage, tracking token consumption, setting budget alerts, and optimizing your spending.
Why Monitor Claude Code Usage?
Before diving into tools, here is why monitoring matters:
| Claude Model | Input Cost (1M tokens) | Output Cost (1M tokens) | Typical Daily Use | Daily Cost |
|---|---|---|---|---|
| Claude Opus 4 | $15.00 | $75.00 | 100K in / 50K out | $5.25 |
| Claude Sonnet 4 | $3.00 | $15.00 | 200K in / 100K out | $2.10 |
| Claude Haiku 3.5 | $0.80 | $4.00 | 300K in / 150K out | $0.84 |
A heavy Claude Code user running Opus 4 can easily spend $100-200/month. Without visibility into where those tokens go, it is impossible to optimize.
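The daily-cost column is nothing more exotic than tokens multiplied by the per-million rates; here is a quick Python sanity check using the table's own (illustrative) volumes:

# cost_check.py -- reproduce the "Daily Cost" column from the table above
PRICES = {  # USD per 1M tokens: (input, output)
    "Claude Opus 4": (15.00, 75.00),
    "Claude Sonnet 4": (3.00, 15.00),
    "Claude Haiku 3.5": (0.80, 4.00),
}

DAILY_USE = {  # (input tokens, output tokens) -- illustrative volumes from the table
    "Claude Opus 4": (100_000, 50_000),
    "Claude Sonnet 4": (200_000, 100_000),
    "Claude Haiku 3.5": (300_000, 150_000),
}

for model, (in_rate, out_rate) in PRICES.items():
    in_tok, out_tok = DAILY_USE[model]
    daily = in_tok / 1e6 * in_rate + out_tok / 1e6 * out_rate
    print(f"{model}: ${daily:.2f}/day  (~${daily * 30:.0f}/month)")

At 30 working days, the Opus 4 row alone lands around $158/month, which is where the $100-200 estimate comes from.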
Tool 1: ccusage (Claude Code Usage Tracker)
ccusage is the most popular open-source tool specifically built for tracking Claude Code usage.
What It Does
- Reads Claude Code's local session logs
- Calculates token usage and estimated costs per session
- Generates daily, weekly, and monthly usage reports
- Shows usage breakdown by project and task type
- Exports data to CSV for further analysis
Installation
# Install via npm
npm install -g ccusage
# Or using npx without installation
npx ccusage
Usage
# Show usage summary for today
ccusage
# Show usage for the last 7 days
ccusage --days 7
# Show usage for a specific date range
ccusage --from 2026-01-01 --to 2026-01-31
# Export to CSV
ccusage --days 30 --format csv > claude-usage-january.csv
# Show detailed per-session breakdown
ccusage --detailed
# Show usage by project directory
ccusage --by-project
Sample Output
Claude Code Usage Report (Last 7 Days)
=======================================
Total Tokens: 1,245,670
  Input:  892,340
  Output: 353,330

Estimated Cost: $7.98
  Input:  $2.68 (892K @ $3.00/1M)
  Output: $5.30 (353K @ $15.00/1M)

Daily Breakdown:
  Mon Feb 02: 180,200 tokens ($1.15)
  Tue Feb 03: 210,450 tokens ($1.35)
  Wed Feb 04: 156,780 tokens ($1.00)
  Thu Feb 05: 298,120 tokens ($1.91)
  Fri Feb 06: 400,120 tokens ($2.57)

Top Projects:
  ~/projects/web-app      520,300 tokens ($3.33)
  ~/projects/api-server   380,200 tokens ($2.44)
  ~/projects/cli-tool     345,170 tokens ($2.21)
Configuration
// ~/.ccusage/config.json
{
  "defaultDays": 7,
  "currency": "USD",
  "models": {
    "claude-opus-4-20250514": {
      "inputCost": 15.0,
      "outputCost": 75.0
    },
    "claude-sonnet-4-20250514": {
      "inputCost": 3.0,
      "outputCost": 15.0
    }
  },
  "budgetAlert": {
    "daily": 10.0,
    "weekly": 50.0,
    "monthly": 150.0
  }
}
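If you want the budgetAlert thresholds to do more than sit in the config, a few lines of Python can compare a ccusage CSV export against them. This is only a sketch: the CSV column names ("date", "cost") and the export filename are assumptions here, so adjust them to whatever your ccusage version actually writes.

#!/usr/bin/env python3
"""Sketch: check a ccusage CSV export against the budgetAlert config above.
Column names are assumptions -- inspect your export before relying on this."""
import csv
import json
from datetime import date
from pathlib import Path

config = json.loads((Path.home() / ".ccusage" / "config.json").read_text())
daily_limit = config["budgetAlert"]["daily"]

today = date.today().isoformat()
spent_today = 0.0
with open("claude-usage.csv", newline="") as f:   # exported via --format csv
    for row in csv.DictReader(f):
        if row["date"] == today:                  # assumed column name
            spent_today += float(row["cost"])     # assumed column name

if spent_today > daily_limit:
    print(f"Over budget: ${spent_today:.2f} today (limit ${daily_limit:.2f})")
else:
    print(f"OK: ${spent_today:.2f} of ${daily_limit:.2f} daily budget used")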
Tool 2: claude-token-counter
claude-token-counter is a lightweight Python tool that hooks into the Anthropic API to provide real-time token tracking.
Installation
pip install claude-token-counter
Usage as a Proxy
claude-token-counter can run as a proxy between Claude Code and the Anthropic API, logging every request:
# Start the logging proxy
claude-token-counter proxy --port 8080
# In another terminal, configure Claude Code to use the proxy
export ANTHROPIC_BASE_URL="http://localhost:8080"
claude # Start Claude Code as normal
Dashboard
# Launch the web dashboard
claude-token-counter dashboard --port 3000
# Open http://localhost:3000 in your browser
The dashboard shows:
- Real-time token consumption graphs
- Per-session cost breakdown
- Model usage distribution (Opus vs Sonnet vs Haiku)
- Cumulative spending trends
- Budget alerts and projections
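The budget projection in that last item is typically just a linear extrapolation of month-to-date spend. If you want the same number without a dashboard, the arithmetic is a one-liner (the dollar figure below is a placeholder, not real usage data):

from datetime import date
import calendar

def project_month_end_spend(spent_so_far: float, today: date) -> float:
    """Linearly extrapolate month-to-date spend to a full-month estimate."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return spent_so_far / today.day * days_in_month

# Example: $42.50 spent by Feb 10, 2026 projects to ~$119 for the month (placeholder figures)
print(f"${project_month_end_spend(42.50, date(2026, 2, 10)):.2f}")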
Programmatic Access
from claude_token_counter import UsageTracker

tracker = UsageTracker()

# Get today's usage
today = tracker.get_usage(period="today")
print(f"Tokens used today: {today.total_tokens}")
print(f"Estimated cost: ${today.estimated_cost:.2f}")

# Get usage by project
projects = tracker.get_usage_by_project(days=30)
for project in projects:
    print(f"{project.name}: {project.total_tokens} tokens (${project.cost:.2f})")
Tool 3: Anthropic Usage Dashboard (API-Based)
While not strictly open-source, the Anthropic API provides usage endpoints that you can query with open-source tools.
Query Usage via API
import httpx
from datetime import datetime, timedelta

ANTHROPIC_ADMIN_KEY = "sk-ant-admin-your-key"

async def get_usage(days: int = 7):
    """Fetch usage data from Anthropic's API.

    Note: the exact endpoint path and parameter names may differ depending on
    the current Admin (Usage & Cost) API -- check Anthropic's docs before use.
    """
    end_date = datetime.now()
    start_date = end_date - timedelta(days=days)

    async with httpx.AsyncClient() as client:
        response = await client.get(
            "https://api.anthropic.com/v1/organizations/usage",
            headers={
                "x-api-key": ANTHROPIC_ADMIN_KEY,
                "anthropic-version": "2023-06-01"
            },
            params={
                "start_date": start_date.strftime("%Y-%m-%d"),
                "end_date": end_date.strftime("%Y-%m-%d")
            }
        )
        return response.json()
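Because get_usage is a coroutine, call it through asyncio when running it from a plain script rather than an async framework:

import asyncio

usage = asyncio.run(get_usage(days=7))
print(usage)  # raw JSON payload; the exact shape depends on the endpoint version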
Build a Custom Dashboard with Grafana
Export Anthropic usage data to Prometheus and visualize with Grafana:
# prometheus_exporter.py
import asyncio
import time

from prometheus_client import start_http_server, Gauge

token_usage = Gauge('claude_tokens_total', 'Total tokens used', ['model', 'direction'])
cost_gauge = Gauge('claude_cost_usd', 'Estimated cost in USD', ['model'])

def update_metrics():
    """Fetch usage from Anthropic and update Prometheus metrics."""
    usage = asyncio.run(get_usage(days=1))  # get_usage is the async function above
    for model_usage in usage.get("models", []):
        model = model_usage["model"]
        token_usage.labels(model=model, direction="input").set(model_usage["input_tokens"])
        token_usage.labels(model=model, direction="output").set(model_usage["output_tokens"])
        cost_gauge.labels(model=model).set(model_usage["estimated_cost"])

if __name__ == "__main__":
    start_http_server(9090)
    while True:
        update_metrics()
        time.sleep(60)
Then add a Prometheus scrape job pointing at port 9090 and build Grafana panels on the claude_tokens_total and claude_cost_usd metrics for visualization.
Tool 4: LLM Cost Calculator CLI
llm-cost is a general-purpose CLI tool that tracks costs across multiple LLM providers, not just Claude.
Installation
# Install via pip
pip install llm-cost-calculator
# Or via Homebrew
brew install llm-cost
Usage
# Scan Claude Code logs and calculate costs
llm-cost scan --provider anthropic --days 30
# Watch Claude Code usage in real-time
llm-cost watch --provider anthropic
# Set a budget alert
llm-cost budget --daily 10 --weekly 50 --monthly 200
# Compare costs across providers
llm-cost compare --task "code review" --providers anthropic,openai,google
Sample Budget Alert Configuration
# ~/.llm-cost/config.yaml
providers:
  anthropic:
    api_key_env: ANTHROPIC_API_KEY
    models:
      claude-opus-4:
        input_cost_per_million: 15.0
        output_cost_per_million: 75.0
      claude-sonnet-4:
        input_cost_per_million: 3.0
        output_cost_per_million: 15.0

alerts:
  daily_limit: 10.0
  weekly_limit: 50.0
  monthly_limit: 200.0
  notification:
    type: "slack"
    webhook_url: "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
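If your tracker of choice has no built-in notifier, wiring a budget alert to the Slack webhook above takes only a few lines of Python. The spend value in this sketch is a placeholder you would feed from whichever tool produces your daily totals; Slack incoming webhooks accept a simple JSON payload with a "text" field.

import httpx

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
DAILY_LIMIT = 10.0

def send_budget_alert(spent_today: float) -> None:
    """Post a Slack message when daily Claude spend crosses the limit."""
    if spent_today <= DAILY_LIMIT:
        return
    message = {
        "text": f":warning: Claude spend is ${spent_today:.2f} today "
                f"(daily limit ${DAILY_LIMIT:.2f})"
    }
    httpx.post(SLACK_WEBHOOK_URL, json=message, timeout=10)

send_budget_alert(12.40)  # placeholder figure -- pass in your tracker's daily total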
Tool 5: Claude Code Built-In Usage Tracking
Claude Code itself has built-in usage tracking that many users overlook.
Check Usage Within Claude Code
# Inside a Claude Code session, type:
/cost
# This shows token consumption and estimated cost for the current session,
# including input tokens, output tokens, and cache usage
Read Session Logs Directly
Claude Code stores detailed session logs locally:
# macOS / Linux
ls ~/.claude/projects/
# Each session has a JSON log file
# (the exact directory layout and field names vary by Claude Code version,
#  so inspect a file before writing queries against it)
# Parse them with jq for custom analysis
cat ~/.claude/projects/*/sessions/*.json | \
  jq '[.[] | select(.type == "usage") | .tokens] | add'
Custom Script to Parse Claude Code Logs
#!/usr/bin/env python3
"""Parse Claude Code session logs for usage analytics.

Assumes each log file is a JSON array of entries with a "usage" type;
check one of your log files first and adjust the parsing if your Claude Code
version writes a different format (for example JSONL).
"""
import json
from pathlib import Path
from datetime import datetime

CLAUDE_DIR = Path.home() / ".claude" / "projects"

def analyze_usage(days: int = 7):
    total_input = 0
    total_output = 0
    sessions = 0
    cutoff = datetime.now().timestamp() - (days * 86400)

    for session_file in CLAUDE_DIR.rglob("*.json"):
        if session_file.stat().st_mtime < cutoff:
            continue
        try:
            data = json.loads(session_file.read_text())
            for entry in data:
                if entry.get("type") == "usage":
                    total_input += entry.get("input_tokens", 0)
                    total_output += entry.get("output_tokens", 0)
            sessions += 1
        except (json.JSONDecodeError, KeyError, AttributeError):
            continue

    # Calculate costs (Sonnet 4 rates)
    input_cost = (total_input / 1_000_000) * 3.0
    output_cost = (total_output / 1_000_000) * 15.0

    print(f"Usage Summary (Last {days} days)")
    print(f"{'=' * 40}")
    print(f"Sessions: {sessions}")
    print(f"Input tokens: {total_input:,}")
    print(f"Output tokens: {total_output:,}")
    print(f"Input cost: ${input_cost:.2f}")
    print(f"Output cost: ${output_cost:.2f}")
    print(f"Total cost: ${input_cost + output_cost:.2f}")

if __name__ == "__main__":
    analyze_usage(days=30)
Cost Optimization Tips
After monitoring reveals your usage patterns, apply these optimizations:
1. Use the Right Model for Each Task
| Task | Recommended Model | Cost Savings vs Opus |
|---|---|---|
| Code completion | Haiku 3.5 | 95% cheaper |
| Bug fixes (simple) | Sonnet 4 | 80% cheaper |
| Code review | Sonnet 4 | 80% cheaper |
| Architecture design | Opus 4 | Baseline |
| Complex refactoring | Opus 4 | Baseline |
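To put the savings column in dollar terms, here is the same comparison for a single illustrative task. The 50K-input / 20K-output volumes are made-up numbers, and the rates are the ones quoted at the top of this guide:

# Cost of one illustrative task (50K input / 20K output tokens) per model
RATES = {  # USD per 1M tokens: (input, output)
    "Opus 4": (15.00, 75.00),
    "Sonnet 4": (3.00, 15.00),
    "Haiku 3.5": (0.80, 4.00),
}

IN_TOK, OUT_TOK = 50_000, 20_000
baseline = IN_TOK / 1e6 * RATES["Opus 4"][0] + OUT_TOK / 1e6 * RATES["Opus 4"][1]

for model, (in_rate, out_rate) in RATES.items():
    cost = IN_TOK / 1e6 * in_rate + OUT_TOK / 1e6 * out_rate
    savings = (1 - cost / baseline) * 100
    print(f"{model}: ${cost:.2f}  ({savings:.0f}% cheaper than Opus 4)")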
2. Reduce Context Sent to the Model
# Use .claudeignore to exclude irrelevant files
# Create .claudeignore in your project root
echo "node_modules/
dist/
.git/
*.lock
*.log
coverage/
.env" > .claudeignore
3. Set Spending Limits in Anthropic Dashboard
Go to console.anthropic.com/settings/limits and configure:
- Monthly spending cap
- Per-request token limits
- Model access restrictions
4. Cache Prompt Prefixes
Claude Code supports prompt caching, which can reduce input token costs by up to 90% for repeated context:
# Prompt caching is automatic for system prompts and repeated file content
# Ensure your CLAUDE.md file is well-structured so it gets cached efficiently
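The "up to 90%" figure comes from cache-read pricing: Anthropic bills cached input tokens at roughly one tenth of the normal input rate, so the effective saving depends on your hit ratio. A rough model, using the Sonnet 4 rate from this guide and ignoring the small surcharge on cache writes:

# Approximate effective input cost with prompt caching (Sonnet 4 rates).
# Assumes cache reads bill at ~10% of the normal input rate and ignores the
# cache-write surcharge, so treat the output as a rough estimate.
INPUT_RATE = 3.00            # USD per 1M input tokens
CACHE_READ_MULTIPLIER = 0.10

def effective_input_cost(total_input_tokens: int, cache_hit_ratio: float) -> float:
    cached = total_input_tokens * cache_hit_ratio
    fresh = total_input_tokens - cached
    return (fresh + cached * CACHE_READ_MULTIPLIER) / 1e6 * INPUT_RATE

for hit_ratio in (0.0, 0.5, 0.9):
    cost = effective_input_cost(1_000_000, hit_ratio)
    print(f"{hit_ratio:.0%} cache hits: ${cost:.2f} per 1M input tokens")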
Comparison Table: All Monitoring Tools
| Tool | Type | Real-Time | Cost Tracking | Alerts | Multi-Provider | Setup |
|---|---|---|---|---|---|---|
| ccusage | CLI | No | Yes | Yes | No (Claude only) | Easy |
| claude-token-counter | Proxy + Dashboard | Yes | Yes | Yes | No (Claude only) | Medium |
| Anthropic API + Grafana | Dashboard | Near real-time | Yes | Yes (custom) | No | Advanced |
| llm-cost | CLI + Config | Yes (watch mode) | Yes | Yes | Yes | Medium |
| Built-in /cost | CLI command | Yes | Basic | No | No | None |
Wrapping Up
Monitoring Claude Code usage is not optional if you want to keep costs predictable. Start with the built-in /cost command for quick checks, then set up ccusage for daily reports and cost tracking. For teams or heavy users, a Grafana dashboard fed by the Anthropic API gives you enterprise-grade visibility.
The single most impactful optimization is using Sonnet 4 instead of Opus 4 for routine tasks -- it cuts costs by 80% with minimal quality impact for most coding work.
If you are building applications that need AI-generated media alongside your code, Hypereal AI offers transparent per-credit pricing for image, video, and audio generation. No hidden costs, no surprise bills -- just predictable pricing with 35 free credits to start.
