How to Use GPT-5 Codex with Cursor AI (2026)
Step-by-step integration guide for OpenAI Codex in Cursor
OpenAI's Codex, powered by GPT-5, is one of the most capable coding models available in 2026. When paired with Cursor AI's agentic editor, it creates a powerful development environment for code generation, debugging, and refactoring.
This guide shows you how to set up and use GPT-5 Codex within Cursor AI, including configuration options, best practices, and comparisons with other available models.
What Is GPT-5 Codex?
GPT-5 Codex is OpenAI's dedicated coding model built on the GPT-5 architecture. It is optimized for:
- Code generation across 50+ programming languages
- Multi-file understanding and editing
- Long-context reasoning (up to 200K tokens)
- Executing multi-step coding plans
- Understanding and generating tests, documentation, and deployment configs
| Feature | GPT-5 Codex | GPT-4o | Claude Sonnet 4 |
|---|---|---|---|
| Context window | 200K tokens | 128K tokens | 200K tokens |
| Code optimization | Specialized | General | General |
| Multi-file editing | Excellent | Good | Excellent |
| Speed | Fast | Fast | Fast |
| Cost (approx) | $5/M input, $15/M output | $2.5/M in, $10/M out | $3/M in, $15/M out |
Prerequisites
Before you begin, you need:
- Cursor AI installed (download from cursor.com)
- A Cursor Pro subscription ($20/month) for premium model access, or
- Your own OpenAI API key, if you prefer to bypass Cursor's built-in allocation
Method 1: Use GPT-5 Codex via Cursor's Built-in Models
Cursor Pro includes access to GPT-5 Codex as part of its premium model lineup. This is the easiest approach.
Step 1: Open Model Settings
- Open Cursor
- Press Cmd+Shift+P (macOS) or Ctrl+Shift+P (Windows/Linux)
- Type "Cursor Settings" and press Enter
- Navigate to Models
Step 2: Select GPT-5 Codex
In the Models section, you will see a list of available models. Enable gpt-5-codex if it is not already enabled:
Available Models:
[x] claude-sonnet-4 (Anthropic)
[x] claude-opus-4 (Anthropic)
[x] gpt-5-codex (OpenAI) <-- Enable this
[x] gpt-4o (OpenAI)
[x] gemini-2.5-pro (Google)
[ ] deepseek-v3 (DeepSeek)
Step 3: Use in Chat or Composer
Open the Cursor chat panel (Cmd+L) or Composer (Cmd+I) and select gpt-5-codex from the model dropdown at the top of the panel.
Model: gpt-5-codex ▼
─────────────────────
You: Refactor the authentication module to use refresh tokens
Method 2: Use Your Own OpenAI API Key
If you want to bypass Cursor's request limits or use GPT-5 Codex on the free plan, you can bring your own API key.
Step 1: Get an OpenAI API Key
- Go to platform.openai.com
- Navigate to API Keys
- Click Create new secret key
- Copy the key (it starts with sk-)
Step 2: Configure in Cursor
- Open Cursor Settings (Cmd+Shift+P > "Cursor Settings")
- Navigate to Models > OpenAI
- Paste your API key in the OpenAI API Key field
OpenAI Configuration:
API Key: sk-xxxxxxxxxxxxxxxxxxxxxxxx
Base URL: https://api.openai.com/v1 (default)
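Once the key is saved, you can sanity-check it outside Cursor with a short script. This is a minimal sketch against OpenAI's standard `GET /v1/models` endpoint; the `authHeaders` and `checkKey` helpers are illustrative, not part of any SDK:

```typescript
// Build the auth headers for an OpenAI-style API request.
function authHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
}

// Sanity-check a key by listing available models.
// Usage: await checkKey(process.env.OPENAI_API_KEY ?? "")
async function checkKey(apiKey: string): Promise<boolean> {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: authHeaders(apiKey),
  });
  return res.ok; // false on 401 (bad key) or 429 (rate limited / no credits)
}
```

If this returns false, fix the key before debugging anything inside Cursor; an invalid key and a Cursor misconfiguration produce similar-looking errors.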
Step 3: Add GPT-5 Codex as a Custom Model
If gpt-5-codex does not appear in the default list, add it manually:
- In Models settings, click + Add Model
- Enter the model ID: gpt-5-codex
- Set the provider to OpenAI
- Save
You can now select it from any model dropdown in Cursor.
Method 3: Use via OpenRouter (More Model Options)
OpenRouter acts as a proxy that gives you access to multiple providers through a single API key. This is useful if you want to switch between GPT-5 Codex and other models seamlessly.
Step 1: Get an OpenRouter API Key
- Go to openrouter.ai
- Create an account and add credits
- Copy your API key
Step 2: Configure OpenRouter in Cursor
- Open Cursor Settings > Models
- Under OpenAI API Key, enter your OpenRouter key
- Change the Base URL to:
https://openrouter.ai/api/v1
Step 3: Add the Model
Add the OpenRouter model ID:
openai/gpt-5-codex
Now you can access GPT-5 Codex through OpenRouter's proxy, along with any other model they support.
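Under the hood, OpenRouter expects the standard OpenAI chat-completions request shape, just with the provider-prefixed model ID. A minimal sketch (error handling elided; assumes OPENROUTER_API_KEY is set in the environment):

```typescript
// Build an OpenAI-compatible chat request for OpenRouter's proxy.
function buildChatRequest(prompt: string) {
  return {
    model: "openai/gpt-5-codex", // provider-prefixed ID, as configured above
    messages: [{ role: "user", content: prompt }],
  };
}

// Send the request through OpenRouter's OpenAI-compatible endpoint.
async function complete(prompt: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildChatRequest(prompt)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Swapping to another provider's model is then just a change of the model string, which is the main appeal of the proxy approach.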
Practical Usage Examples
Example 1: Generate a REST API
Open Composer (Cmd+I) with gpt-5-codex selected:
Prompt: Create a complete REST API for a task management app using Express and TypeScript.
Include:
- CRUD endpoints for tasks
- Input validation with Zod
- Error handling middleware
- SQLite database with Drizzle ORM
- Unit tests with Vitest
GPT-5 Codex will generate a multi-file implementation that you can review and apply.
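It helps to know what the generated validation layer should roughly look like before you review it. The sketch below shows the request-validation step; a plain type guard stands in for Zod so the example is dependency-free, and the `NewTask` shape is an assumption:

```typescript
// Hypothetical task payload; the real schema comes from your API design.
interface NewTask {
  title: string;
  done: boolean;
}

// Type guard standing in for a Zod schema's safeParse.
function isNewTask(body: unknown): body is NewTask {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return typeof b.title === "string" && b.title.length > 0 &&
         typeof b.done === "boolean";
}

// Express-style middleware shape: reject invalid bodies with 400.
function validateNewTask(
  req: { body: unknown },
  res: { status: (code: number) => { json: (v: unknown) => void } },
  next: () => void,
): void {
  if (!isNewTask(req.body)) {
    res.status(400).json({ error: "invalid task payload" });
    return;
  }
  next();
}
```

If the generated code skips a step like this on any endpoint, that is the first thing to flag in review.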
Example 2: Debug a Complex Issue
In Cursor chat (Cmd+L):
Prompt: I'm getting a race condition in my WebSocket handler.
Multiple clients can update the same resource simultaneously
and the last write wins, causing data loss.
@src/ws/handler.ts
@src/services/resource.ts
Help me implement optimistic concurrency control.
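The fix this prompt asks for can be sketched independently of the WebSocket layer: each resource carries a version number, and a write succeeds only if the caller's version matches the stored one. A minimal in-memory sketch (the `ResourceStore` name and shape are illustrative):

```typescript
interface Versioned<T> {
  value: T;
  version: number;
}

// In-memory store with optimistic concurrency control:
// a write succeeds only if the caller read the current version.
class ResourceStore<T> {
  private data = new Map<string, Versioned<T>>();

  get(id: string): Versioned<T> | undefined {
    return this.data.get(id);
  }

  put(id: string, value: T): void {
    this.data.set(id, { value, version: 1 });
  }

  // Compare-and-swap: returns false (conflict) if expectedVersion is stale.
  update(id: string, value: T, expectedVersion: number): boolean {
    const cur = this.data.get(id);
    if (!cur || cur.version !== expectedVersion) return false;
    this.data.set(id, { value, version: cur.version + 1 });
    return true;
  }
}
```

A client that loses the race gets false back and must re-read and retry, instead of silently overwriting the other client's write, which is exactly the "last write wins" data loss described above.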
Example 3: Refactor Legacy Code
Prompt: Refactor this jQuery-based module to modern React with TypeScript.
Preserve all existing functionality and add proper types.
@src/legacy/dashboard.js
Example 4: Generate Tests from Implementation
Prompt: Write comprehensive tests for this module.
Cover happy paths, error cases, boundary conditions, and concurrent access.
Use Vitest. Aim for > 90% coverage.
@src/services/payment.ts
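What "boundary conditions" means here is worth making concrete. Since the real payment.ts contents are unknown, the function below is hypothetical; the checks are rendered as plain assertions so the sketch runs standalone, but in Vitest each one would be an it()/expect() block:

```typescript
// Hypothetical function from the payment module: clamp a charge amount
// (in cents) to the allowed range, rejecting non-finite input.
function clampAmount(cents: number, min = 50, max = 1_000_000): number {
  if (!Number.isFinite(cents)) throw new RangeError("amount must be finite");
  return Math.min(Math.max(Math.round(cents), min), max);
}

// Boundary-condition checks of the kind the prompt asks for.
function runTests(): void {
  if (clampAmount(500) !== 500) throw new Error("happy path");
  if (clampAmount(49) !== 50) throw new Error("below minimum");   // boundary
  if (clampAmount(50) !== 50) throw new Error("exact minimum");   // boundary
  if (clampAmount(2_000_000) !== 1_000_000) throw new Error("above maximum");
  let threw = false;
  try { clampAmount(NaN); } catch { threw = true; }               // error case
  if (!threw) throw new Error("NaN not rejected");
}
runTests();
```

A good generated test suite hits every such edge exactly once; duplicated happy-path tests are padding, not coverage.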
Optimizing GPT-5 Codex Performance in Cursor
Use .cursorrules Files
Create a .cursorrules file in your project root to give GPT-5 Codex project-specific context:
You are working on a Next.js 15 application.
Tech stack:
- TypeScript strict mode
- Tailwind CSS v4
- Prisma ORM with PostgreSQL
- NextAuth.js for authentication
- Vitest for testing
Conventions:
- Use server components by default
- Use "use client" only when necessary
- Prefer named exports
- Use Zod for all input validation
- Error boundaries for every route segment
When generating code:
- Always include proper TypeScript types
- Add JSDoc comments for public functions
- Include error handling
- Write tests alongside implementation
Optimize Context Usage
GPT-5 Codex has a 200K token context window, but using it efficiently still matters:
| Technique | How |
|---|---|
| Reference specific files | Use @file to include only relevant files |
| Use folders selectively | @src/services instead of @src |
| Exclude large files | Avoid referencing generated files, lock files, or bundled code |
| Break tasks down | Multiple focused prompts beat one massive prompt |
| Use codebase indexing | Let Cursor's indexer provide automatic context |
Model Switching Strategy
Different models have different strengths. Use Cursor's model dropdown to switch based on the task:
| Task | Recommended Model | Why |
|---|---|---|
| Complex architecture | Claude Opus 4 | Best reasoning |
| Code generation | GPT-5 Codex | Optimized for code |
| Quick edits | GPT-4o | Fast and cheap |
| Long-context tasks | Gemini 2.5 Pro | 1M token window |
| Debugging | Claude Sonnet 4 | Strong analysis |
GPT-5 Codex vs Other Models in Cursor
Here is how GPT-5 Codex compares to other popular models for coding tasks in Cursor:
| Benchmark | GPT-5 Codex | Claude Sonnet 4 | Gemini 2.5 Pro |
|---|---|---|---|
| SWE-Bench (code fix) | 72% | 70% | 65% |
| HumanEval (code gen) | 96% | 94% | 92% |
| Multi-file edit accuracy | Excellent | Excellent | Good |
| Speed (tokens/sec) | ~90 | ~80 | ~100 |
| Cost efficiency | Medium | Medium | High |
Benchmarks are approximate and based on publicly available evaluations.
Troubleshooting
| Issue | Solution |
|---|---|
| Model not appearing | Add it manually in Settings > Models > Add Model |
| "Rate limit exceeded" | Wait a few minutes, or switch to your own API key |
| Slow responses | Check network; try a different model temporarily |
| Poor code quality | Add a .cursorrules file with project context |
| "Invalid API key" | Verify key is correct and has sufficient credits |
| Context too long | Use @file references instead of pasting code |
Conclusion
GPT-5 Codex paired with Cursor AI creates one of the most productive coding setups available in 2026. Whether you use Cursor's built-in allocation, your own OpenAI key, or OpenRouter, the setup is straightforward and the results are impressive for code generation, refactoring, and debugging tasks.
If your project involves AI-generated media such as images, video, or audio, Hypereal AI provides affordable, developer-friendly APIs that you can integrate directly into your Cursor-powered workflow. Build and test your media generation pipelines faster with production-ready endpoints.