OpenAI Codex CLI mastery skill for installation, authentication, terminal commands, code generation from natural language, file editing, project scaffolding, and IDE integrations. Covers GPT-5.2 Instant/Thinking tiers and shell/container execution modes.
Use this skill whenever the user needs to install, configure, or operate the OpenAI Codex CLI — the primary command-line interface for AI-assisted code generation, file editing, project scaffolding, and developer workflow automation powered by GPT-5.2.
Important migration notice (Feb 13, 2026): GPT-4o has been officially retired. All Codex CLI users must migrate to the GPT-5.2 model tiers. Legacy `--model gpt-4o` flags will return errors.
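A quick way to find scripts still passing the retired flag is a repo-wide search-and-replace. The sketch below is illustrative: the directory, script name, and choice of the Instant tier as the replacement are assumptions, and GNU `sed` is assumed for in-place editing.

```shell
# Demo setup: a script that still uses the retired flag (hypothetical path).
mkdir -p /tmp/codex-migrate
cat > /tmp/codex-migrate/deploy.sh <<'EOF'
codex "summarize the diff" --model gpt-4o
EOF

# List files that still pass the retired flag.
grep -rl -e '--model gpt-4o' /tmp/codex-migrate

# Rewrite them to the Instant tier shorthand used in this guide (GNU sed).
sed -i 's/--model gpt-4o/--model instant/g' /tmp/codex-migrate/deploy.sh
```

Review the `grep` output before running the `sed` step, since the replacement tier should match each script's workload.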
# Install Codex CLI globally via npm
npm install -g @openai/codex
# Verify installation
codex --version
# Update to latest version
npm update -g @openai/codex
# Alternative: install via npx (no global install)
npx @openai/codex --help
System requirements: Node.js with npm (used by the install commands above).
# Set API key via environment variable (recommended)
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# On Windows PowerShell
$env:OPENAI_API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# On Windows CMD
set OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
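To keep the key out of shell history, one common pattern is loading it from a permission-restricted file instead of typing it inline. This is a general shell technique, not a Codex CLI feature; the demo path and placeholder value below are assumptions.

```shell
# Create an empty key file with owner-only permissions (demo path).
KEYFILE=/tmp/codex-demo-key
install -m 600 /dev/null "$KEYFILE"

# Placeholder value only -- store your real key here instead.
printf 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n' > "$KEYFILE"

# Load the key into the environment without it appearing in history.
export OPENAI_API_KEY="$(cat "$KEYFILE")"
```

In practice you would keep the file somewhere like `$HOME` and source this snippet from your shell profile.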
# Interactive login (stores key in ~/.codex/config.json)
codex auth login
# Verify authentication
codex auth whoami
# Use a specific organization
codex auth login --org org-xxxxxxxxxxxx
# Logout / clear stored credentials
codex auth logout
Persistent configuration (~/.codex/config.json):
{
"api_key": "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"organization": "org-xxxxxxxxxxxx",
"default_model": "gpt-5.2-instant",
"default_mode": "local"
}
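Because this file can hold an API key, it is worth sanity-checking it: valid JSON, and owner-only permissions. The snippet below is a hedged sketch using standard tools; it writes to a demo path, while the real file lives at `~/.codex/config.json`.

```shell
# Demo path; substitute ~/.codex/config.json in practice.
CFG=/tmp/codex-demo/config.json
mkdir -p "$(dirname "$CFG")"
cat > "$CFG" <<'EOF'
{"default_model": "gpt-5.2-instant", "default_mode": "local"}
EOF

# Validate the JSON before the CLI tries to read it.
python3 -m json.tool "$CFG" > /dev/null && echo "config is valid JSON"

# Restrict permissions: the file may contain a secret.
chmod 600 "$CFG"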
Codex CLI operates on two GPT-5.2 tiers introduced after the GPT-4o retirement on Feb 13, 2026:
| Tier | Model ID | Use Case | Latency | Cost |
|---|---|---|---|---|
| Instant | gpt-5.2-instant | Fast completions, high-volume tasks, simple edits | ~200ms | Lower |
| Thinking | gpt-5.2-thinking | Deep reasoning, complex architecture, multi-file refactoring | ~2-8s | Higher |
# Use Instant tier (default — fast, high-volume)
codex "add input validation to app.py" --model instant
# Use Thinking tier (deep reasoning for complex tasks)
codex "redesign the authentication module with OAuth2 and PKCE flow" --model thinking
# Set default tier
codex config set model thinking
Tier selection guidance: prefer Instant for quick single-file edits and high-volume tasks; reserve Thinking for architecture work, multi-file refactoring, and problems that benefit from deeper reasoning.
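That rule of thumb can be encoded as a small shell helper for scripts that drive the CLI. This is purely illustrative: the function name and keyword list are our own heuristic, not part of codex.

```shell
# Illustrative heuristic: route architecture-scale prompts to the Thinking
# tier and everything else to Instant. The keyword list is an assumption.
pick_tier() {
  case "$1" in
    *architecture*|*redesign*|*refactor*|*migrate*) echo thinking ;;
    *) echo instant ;;
  esac
}

pick_tier "redesign the authentication module"   # prints: thinking
pick_tier "add input validation to app.py"       # prints: instant
```

A script could then pass `--model "$(pick_tier "$PROMPT")"` instead of hardcoding a tier.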
# Generate a complete file
codex "create a FastAPI server with CRUD endpoints for a todo app"
# Generate and write directly to a file
codex "create a Python dataclass for User with validation" -o models/user.py
# Generate with specific language
codex "implement binary search" --lang rust
# Generate with context from existing files
codex "add unit tests for this module" --context src/utils.py
# Multi-file generation
codex "create a React component with tests and Storybook story for a DatePicker" --multi
# Pipe input for transformation
cat legacy_code.py | codex "refactor this to use modern Python 3.12 patterns"
# Generate with specific framework constraints
codex "create an Express.js middleware for rate limiting" --framework express
# Edit an existing file with natural language instructions
codex edit src/app.py "add error handling to all database calls"
# Edit multiple files at once
codex edit src/**/*.ts "convert all var declarations to const/let"
# Preview changes before applying (dry run)
codex edit src/app.py "add logging" --dry-run
# Edit with diff output
codex edit src/app.py "optimize the search function" --diff
# Interactive edit mode (review each change)
codex edit src/app.py "refactor for readability" --interactive
# Undo last edit
codex edit --undo
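One caveat on the multi-file edit example above: the `src/**/*.ts` pattern is expanded by your shell, not by codex. In bash, `**` only recurses into subdirectories when the `globstar` option is enabled (in zsh it recurses by default) -- this is a shell fact, independent of the CLI. The demo paths below are illustrative.

```shell
# Demo tree with a nested TypeScript file.
mkdir -p /tmp/globdemo/src/a/b
touch /tmp/globdemo/src/a/b/x.ts

# bash: without globstar, ** behaves like a single *; enable it first.
bash -c 'cd /tmp/globdemo && shopt -s globstar && ls src/**/*.ts'
```

If `**` silently matches nothing, the edit command receives no files, so it is worth checking the expansion with `ls` first.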
# Scaffold a new project interactively
codex init
# Scaffold with a specific template
codex init --template react-typescript
# Scaffold from a description
codex init "a full-stack Next.js app with Prisma ORM, PostgreSQL, and Tailwind CSS"
# Scaffold with specific options
codex init "REST API" --lang python --framework fastapi --db postgresql --auth jwt
# Scaffold and install dependencies automatically
codex init "Vue 3 dashboard" --install
# Available built-in templates
codex init --list-templates
Local mode executes commands directly in the user's local terminal environment:
# Explicit local mode
codex "list all TODO comments in the project" --mode local
# Local mode has full access to:
# - Local file system
# - Installed CLI tools (git, docker, npm, etc.)
# - Environment variables
# - Running services (databases, servers)
Container mode executes commands in an isolated cloud container:
# Run in hosted container (sandboxed environment)
codex "run the full test suite and fix any failures" --mode container
# Container mode provides:
# - Isolated execution (safe for untrusted operations)
# - Pre-configured environments (Node, Python, Rust, Go, etc.)
# - No impact on local system
# - Reproducible results
# Specify container image
codex "build and test the project" --mode container --image node:20-alpine
# Container with GPU access (for ML tasks)
codex "train the model on sample data" --mode container --gpu
# Install the Codex VS Code extension
code --install-extension openai.codex-vscode
# Or search "OpenAI Codex" in VS Code Extensions marketplace
VS Code keybindings:
Ctrl+Shift+C / Cmd+Shift+C: Open Codex command palette
Ctrl+K Ctrl+G / Cmd+K Cmd+G: Generate code at cursor
Ctrl+K Ctrl+E / Cmd+K Cmd+E: Edit selected code
Ctrl+K Ctrl+X / Cmd+K Cmd+X: Explain selected code
VS Code settings (settings.json):
{
"codex.model": "gpt-5.2-instant",
"codex.mode": "local",
"codex.autoSuggest": true,
"codex.inlineSuggestions": true,
"codex.suggestDelay": 300,
"codex.contextFiles": 5,
"codex.telemetry": false
}
# Install via JetBrains Marketplace
# Settings → Plugins → Search "OpenAI Codex" → Install
# Or install from CLI
idea installPlugins openai.codex-jetbrains
-- In init.lua or after/plugin/codex.lua
require('codex').setup({
api_key = vim.env.OPENAI_API_KEY,
model = 'gpt-5.2-instant',
keymaps = {
generate = '<leader>cg',
edit = '<leader>ce',
explain = '<leader>cx',
},
})
# Process multiple files with the same instruction
codex batch "add JSDoc comments to all exported functions" --glob "src/**/*.ts"
# Batch with concurrency control
codex batch "add type annotations" --glob "src/**/*.py" --concurrency 4
# Batch with progress reporting
codex batch "migrate from CommonJS to ESM" --glob "src/**/*.js" --progress
# Chain with other CLI tools
git diff HEAD~1 | codex "summarize these changes for a commit message"
# Generate and pipe to clipboard
codex "regex for email validation" | pbcopy
# Use in scripts
TESTS=$(codex "generate pytest tests for $(cat src/utils.py)" --raw)
echo "$TESTS" > tests/test_utils.py
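When calls like the one above run unattended, a missing CLI or key silently produces an empty output file. A defensive wrapper can fail loudly instead. This is a hedged sketch: the function name and exit codes are our own convention, not part of codex.

```shell
# Guard for automation scripts: check credentials and the binary before
# invoking codex, so failures surface as errors rather than empty output.
codex_safe() {
  [ -n "${OPENAI_API_KEY:-}" ] || { echo "OPENAI_API_KEY not set" >&2; return 1; }
  command -v codex >/dev/null 2>&1 || { echo "codex CLI not installed" >&2; return 127; }
  codex "$@"
}
```

A script would then write `TESTS=$(codex_safe "..." --raw) || exit 1` so a misconfigured environment stops the pipeline early.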
# Create a named profile
codex config profile create work --model thinking --mode container
# Switch profiles
codex config profile use work
# List profiles
codex config profile list
Best practices:
- Use `--context` to include relevant files so Codex understands your codebase patterns.
- Use `--dry-run` before applying edits to production code.
- Authenticate with `codex auth login`.
- Set `--model` explicitly in automation scripts.
- Run `npm update -g @openai/codex` regularly for the latest features and model improvements.
# User reports a bug in the payment module
codex edit src/payments/processor.py \
"fix the race condition in process_payment where concurrent requests can double-charge" \
--model instant --diff
# Deep reasoning for complex refactoring
codex "analyze the current monolithic architecture in src/ and propose a microservices \
decomposition plan with service boundaries, API contracts, and migration steps" \
--model thinking -o architecture-plan.md
# Scaffold a production-ready project
codex init "SaaS dashboard with Next.js 15, tRPC, Prisma, PostgreSQL, \
Stripe billing, NextAuth.js, Tailwind CSS, and Playwright tests" \
--install --git-init
# .github/workflows/codex-review.yml