OpenAI Codex Usage Limits Explained (2026)
Understand rate limits, quotas, and how to increase them
OpenAI Codex is a powerful agentic coding tool, but it comes with usage limits that can be confusing. Whether you are using Codex through ChatGPT or the API, understanding these limits helps you plan your workflow and avoid hitting walls mid-project.
This guide explains every limit, how they work, and practical strategies to work within them or request increases.
Codex Limits by Plan
ChatGPT Plan Limits
If you access Codex through ChatGPT, your limits depend on your subscription tier.
| Limit Type | Plus ($20/mo) | Pro ($200/mo) | Team ($25/user/mo) | Enterprise |
|---|---|---|---|---|
| Tasks per day | ~25 | ~250 | ~100 | Custom |
| Max files per task | 50 | 200 | 100 | Custom |
| Task timeout | 10 minutes | 30 minutes | 15 minutes | Custom |
| Concurrent tasks | 1 | 3 | 2 | Custom |
| Repository size | Up to 500 MB | Up to 2 GB | Up to 1 GB | Custom |
| Model access | codex-mini | codex-mini + codex | codex-mini + codex | All |
A few important notes:
- Task limits reset daily at midnight UTC.
- The ~25 task limit on Plus is approximate. OpenAI uses a "compute-based" limit, so simple tasks consume less of your quota than complex ones.
- Concurrent tasks refers to how many Codex tasks you can run simultaneously. Plus users can only run one at a time.
API Rate Limits
If you access Codex through the OpenAI API, you face different constraints.
| Limit Type | Tier 1 (New) | Tier 2 | Tier 3 | Tier 4 | Tier 5 |
|---|---|---|---|---|---|
| RPM (requests/min) | 60 | 100 | 300 | 800 | 2,000 |
| TPM (tokens/min) | 60,000 | 200,000 | 1,000,000 | 5,000,000 | 10,000,000 |
| RPD (requests/day) | 1,000 | 5,000 | 15,000 | 50,000 | 150,000 |
| Monthly spend to qualify | $5 | $50 | $100 | $250 | $1,000 |
How API Tiers Work
OpenAI automatically assigns you a tier based on your cumulative spending and account age:
- Tier 1: New accounts with at least $5 in payments
- Tier 2: $50+ spent, account at least 7 days old
- Tier 3: $100+ spent, account at least 30 days old
- Tier 4: $250+ spent, account at least 30 days old
- Tier 5: $1,000+ spent, account at least 30 days old
You move up automatically. There is no application process for standard tier upgrades.
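The qualification rules above can be expressed as a small helper. This is an illustrative sketch only; the function name and signature are not part of any SDK, but the thresholds match the tier rules listed above.

```python
def api_tier(total_spend_usd: float, account_age_days: int) -> int:
    """Return the API tier implied by cumulative spend and account age,
    per the qualification rules listed above. Returns 0 if the account
    has not yet qualified for Tier 1."""
    if total_spend_usd >= 1000 and account_age_days >= 30:
        return 5
    if total_spend_usd >= 250 and account_age_days >= 30:
        return 4
    if total_spend_usd >= 100 and account_age_days >= 30:
        return 3
    if total_spend_usd >= 50 and account_age_days >= 7:
        return 2
    if total_spend_usd >= 5:
        return 1
    return 0

print(api_tier(60, 10))  # $60 spent, 10-day-old account -> Tier 2
```

Note that spend alone is not enough: an account that has spent $100 but is only 10 days old still sits at Tier 2 until it passes the 30-day mark.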
Understanding Token Limits
Every Codex interaction has token limits that affect how much context the model can process.
| Model | Context Window | Max Output Tokens |
|---|---|---|
| codex-mini | 200,000 tokens | 16,384 tokens |
| codex | 200,000 tokens | 16,384 tokens |
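Since both models share the same window, a quick sanity check before submitting a large task is simply: input tokens plus your output budget must fit within 200,000 tokens. A minimal sketch (the function is illustrative, not an API call):

```python
CONTEXT_WINDOW = 200_000  # tokens, both codex-mini and codex
MAX_OUTPUT = 16_384       # max output tokens per the table above

def fits_in_context(input_tokens: int, output_budget: int = MAX_OUTPUT) -> bool:
    """Check whether a task's input plus its output budget fits the window."""
    return input_tokens + output_budget <= CONTEXT_WINDOW

print(fits_in_context(80_000))   # roomy complex task: True
print(fits_in_context(190_000))  # too close to the limit: False
```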
What affects token usage
- Your code files. Every file Codex reads counts as input tokens. A typical 200-line Python file is about 2,000-3,000 tokens.
- Your instruction. Your task description counts as input tokens.
- Generated code. The code Codex writes counts as output tokens.
- Internal reasoning. Codex's chain-of-thought reasoning (visible in the task log) counts as output tokens.
Estimating tokens for a task
Here is a rough guide for typical Codex tasks:
| Task Complexity | Input Tokens | Output Tokens | Total Tokens |
|---|---|---|---|
| Simple (fix a bug in one file) | 3,000-5,000 | 500-1,500 | 3,500-6,500 |
| Medium (add a feature across 2-3 files) | 10,000-30,000 | 2,000-5,000 | 12,000-35,000 |
| Complex (refactor a module, write tests) | 30,000-80,000 | 5,000-15,000 | 35,000-95,000 |
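For back-of-the-envelope planning, the common "roughly 4 characters per token" heuristic for English text and code is usually close enough. The helper below is a rough sketch; for exact counts you would use a real tokenizer such as tiktoken.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token heuristic
    for English prose and code. For exact counts, use a real tokenizer."""
    return max(1, len(text) // 4)

# A 200-line Python file at ~55-60 chars/line lands in the
# 2,000-3,000 token range quoted earlier in this guide
source = "x = compute_value(argument_one, argument_two)  # comment\n" * 200
print(estimate_tokens(source))  # a value in the 2,000-3,000 range
```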
How to Check Your Current Usage
ChatGPT
- Open ChatGPT and navigate to Settings > Subscription.
- Your current usage and remaining tasks are shown under the Codex section.
- You can also see a "Usage" indicator in the Codex interface itself.
API
Check your usage programmatically or through the dashboard.
```python
from openai import OpenAI

client = OpenAI()

# The rate limit information lives in the HTTP response headers,
# so use with_raw_response to access them alongside the parsed body
response = client.chat.completions.with_raw_response.create(
    model="codex-mini",
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=10,
)

# x-ratelimit-limit-requests: your RPM limit
# x-ratelimit-remaining-requests: remaining requests
# x-ratelimit-reset-requests: time until the limit resets
print(response.headers.get("x-ratelimit-remaining-requests"))
```
You can also check your usage dashboard at platform.openai.com/usage.
How to Increase Your Limits
Option 1: Upgrade Your ChatGPT Plan
The simplest way to increase Codex limits is to upgrade from Plus to Pro.
| Metric | Plus | Pro (10x upgrade) |
|---|---|---|
| Daily tasks | ~25 | ~250 |
| Concurrent tasks | 1 | 3 |
| Task timeout | 10 min | 30 min |
| Price | $20/mo | $200/mo |
Option 2: Move Up API Tiers
Increase your cumulative spend to automatically unlock higher tiers. The jump from Tier 1 to Tier 2 (just $50 total spend) gives you significant increases:
- RPM: 60 -> 100 (67% increase)
- TPM: 60,000 -> 200,000 (233% increase)
- RPD: 1,000 -> 5,000 (400% increase)
Option 3: Request a Custom Rate Limit Increase
For API users who need limits beyond Tier 5, OpenAI offers a rate limit increase request form.
- Go to platform.openai.com/settings/organization/limits.
- Click "Request rate limit increase."
- Fill out the form with your use case, expected volume, and business details.
- OpenAI typically responds within 2-7 business days.
Tips for approval:
- Explain your business use case clearly.
- Provide estimated daily/monthly token usage.
- Mention if you are building a product that integrates Codex.
- Higher-spending accounts get priority.
Option 4: Use Multiple API Keys
For organizations, you can create multiple API keys across separate projects. Each project gets its own rate limits. This is useful for teams where different projects have different usage patterns.
```python
import openai

# Project A - main product
client_a = openai.OpenAI(
    api_key="sk-proj-A-...",
)

# Project B - internal tools
client_b = openai.OpenAI(
    api_key="sk-proj-B-...",
)
```
Strategies for Working Within Limits
1. Prioritize your tasks
Not every coding task needs Codex. Use it for complex, multi-step tasks and handle simple edits manually.
Good use of Codex tasks:
- "Refactor the payment module from callbacks to async/await and update all tests"
- "Add comprehensive error handling to every API endpoint in the routes/ directory"
Better done manually:
- Fixing a typo
- Changing a variable name
- Updating a version number
2. Optimize your repository structure
Create a .codexignore file to exclude unnecessary files from context:
```
# .codexignore - reduce token usage
node_modules/
dist/
build/
.git/
*.lock
*.map
coverage/
__pycache__/
.next/
vendor/
```
3. Write detailed instructions
Clear instructions help Codex complete tasks on the first attempt, avoiding the need to retry (which burns additional tasks from your quota).
Bad: "Fix the auth bug"

Good:

```
In src/auth/middleware.ts, the JWT verification on line 34 throws
an unhandled exception when the token is expired. Wrap it in a try-catch
that returns a 401 response with the message 'Token expired'. Add a unit
test in tests/auth.test.ts that verifies this behavior.
```
4. Use codex-mini for straightforward tasks
Reserve the full codex model for complex tasks that need deeper reasoning. The codex-mini model handles most single-file changes well and counts less against your compute-based quota.
5. Queue tasks efficiently
On the Pro plan, you can run 3 concurrent tasks. Queue your tasks strategically:
- Start a complex refactoring task first (takes longer).
- While it runs, submit two simpler tasks that complete quickly.
- Review results as they finish instead of waiting sequentially.
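The same pattern applies if you drive tasks through the API: cap concurrency at your plan's limit and review results as they complete rather than in submission order. A sketch using the standard library, where `run_codex_task` is a hypothetical placeholder for your own submission logic:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_codex_task(description: str) -> str:
    """Hypothetical placeholder for submitting one Codex task."""
    return f"done: {description}"

tasks = [
    "Refactor payment module to async/await",  # long-running, start first
    "Fix null check in utils.py",
    "Bump dependency versions in pyproject.toml",
]

# Pro plan allows 3 concurrent tasks, so cap the pool at 3
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(run_codex_task, t) for t in tasks]
    for fut in as_completed(futures):
        print(fut.result())  # handle each result as it finishes
```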
Common Error Messages and Fixes
| Error | Meaning | Fix |
|---|---|---|
| Rate limit exceeded | Hit RPM or TPM limit | Wait and retry, or upgrade tier |
| Task limit reached | Daily task quota exhausted | Wait until midnight UTC, or upgrade plan |
| Context length exceeded | Too many tokens in context | Use .codexignore, reduce file scope |
| Task timed out | Task exceeded time limit | Break into smaller tasks |
| Concurrent task limit | Too many parallel tasks | Wait for current task to finish |
| Repository too large | Repo exceeds size limit | Exclude large files/directories |
Frequently Asked Questions
Do unused tasks roll over to the next day? No. Daily task limits reset at midnight UTC and do not accumulate.
Does the "thinking" process count toward token limits? Yes. Codex's internal reasoning tokens count as output tokens for API billing. In ChatGPT, they count toward your compute-based task quota.
Can I see how many tasks I have remaining?
In ChatGPT, yes -- the Codex interface shows your remaining quota. On the API, check the x-ratelimit-remaining-* response headers.
What happens if I exceed rate limits on the API? You receive a 429 (Too Many Requests) error. Implement exponential backoff in your code to handle this gracefully.
Are Codex limits separate from ChatGPT message limits? Yes. Codex has its own task quota that is separate from your ChatGPT conversation message limits.
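The exponential-backoff handling mentioned above for 429 errors can be sketched as follows. The broad `except` is kept generic for illustration; with the openai SDK you would catch `openai.RateLimitError` specifically.

```python
import random
import time

def call_with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0):
    """Retry fn() with exponential backoff plus jitter, for handling
    rate-limit (HTTP 429) errors gracefully."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:  # narrow to openai.RateLimitError in real code
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # base_delay, 2x, 4x, ... plus up to base_delay of jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

Jitter matters here: if several workers hit the limit at once, randomizing the wait keeps them from all retrying in lockstep and hitting it again.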
Wrapping Up
OpenAI Codex limits vary significantly by plan -- from ~25 daily tasks on Plus to virtually unlimited on Enterprise. For most individual developers, the Plus plan is sufficient. If you consistently hit limits, upgrading to Pro or moving to the API with pay-per-token pricing gives you more flexibility.
If you are building AI-powered applications and need affordable media generation alongside your coding workflow, try Hypereal AI free -- 35 credits, no credit card required. The API is simple to integrate and works well with projects built using Codex or any other AI coding tool.