A curated collection of configurations, skills and custom prompts for OpenAI Codex CLI, designed to enhance your development workflow with various model providers and reusable prompt templates.
For Claude Code settings, skills, agents, and custom commands, please refer to feiskyer/claude-code-settings.
This repository provides:
- Flexible Configuration: Support for multiple model providers (LiteLLM/Copilot proxy, ChatGPT subscription, Azure OpenAI, OpenRouter, ModelScope, Kimi)
- Custom Prompts: Reusable prompt templates for common development tasks
- Skills (Experimental): Discoverable instruction bundles for specialized tasks (image generation, YouTube transcription, spec-driven workflows)
- Best Practices: Pre-configured settings optimized for development workflows
- Easy Setup: Simple installation and configuration process
# Backup existing Codex configuration (if any)
mv ~/.codex ~/.codex.bak
# Clone this repository to ~/.codex
git clone https://github.com/feiskyer/codex-settings.git ~/.codex
# Or symlink if you prefer to keep it elsewhere
ln -s /path/to/codex-settings ~/.codex

The default config.toml uses LiteLLM as a gateway. To use it:
- Install LiteLLM and Codex CLI:

  pip install -U 'litellm[proxy]'
  npm install -g @openai/codex

- Create a LiteLLM config file (full example: litellm_config.yaml):

  general_settings:
    master_key: sk-dummy
  litellm_settings:
    drop_params: true
  model_list:
    - model_name: gpt-5
      litellm_params:
        model: github_copilot/gpt-5
        extra_headers:
          editor-version: "vscode/1.104.3"
          editor-plugin-version: "copilot-chat/0.26.7"
          Copilot-Integration-Id: "vscode-chat"
          user-agent: "GitHubCopilotChat/0.26.7"
          x-github-api-version: "2025-04-01"
- Start LiteLLM proxy:

  litellm --config ~/.codex/litellm_config.yaml  # Runs on http://localhost:4000 by default

- Run Codex:

  codex
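Optionally, you can confirm the proxy is reachable before launching Codex. This is a minimal sanity check against its OpenAI-compatible endpoint, assuming the default port and the sk-dummy master key from the sample config above:

  # List the models the proxy exposes; a JSON response means the gateway is up
  curl http://localhost:4000/v1/models -H "Authorization: Bearer sk-dummy"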
- config.toml: Default configuration using the LiteLLM gateway
  - Model: gpt-5 via model_provider = "github" (Copilot proxy on http://localhost:4000)
  - Approval policy: on-request; reasoning summary: detailed; reasoning effort: high; raw agent reasoning visible
  - MCP servers: claude (local), exa (hosted), chrome (DevTools over npx)
Alternative configurations are located in the configs/ directory:
- OpenAI ChatGPT: Use ChatGPT subscription provider
- Azure OpenAI: Use Azure OpenAI service provider
- GitHub Copilot: Use GitHub Copilot via LiteLLM proxy
- OpenRouter: Use OpenRouter provider
- ModelScope: Use ModelScope provider
- Kimi: Use Moonshot Kimi provider
To use an alternative config:
# Take ChatGPT for example
cp ~/.codex/configs/chatgpt.toml ~/.codex/config.toml
codex

Custom prompts are stored in the prompts/ directory. Access them via the /prompts: slash menu in Codex.
- /prompts:deep-reflector - Analyze development sessions to extract learnings, patterns, and improvements for future interactions.
- /prompts:insight-documenter [breakthrough] - Capture and document significant technical breakthroughs into reusable knowledge assets.
- /prompts:instruction-reflector - Analyze and improve Codex instructions in AGENTS.md based on conversation history.
- /prompts:github-issue-fixer [issue-number] - Systematically analyze, plan, and implement fixes for GitHub issues with PR creation.
- /prompts:github-pr-reviewer [pr-number] - Perform thorough GitHub pull request code analysis and review.
- /prompts:ui-engineer [requirements] - Create production-ready frontend solutions with modern UI/UX standards.
- /prompts:prompt-creator [requirements] - Create Codex custom prompts with proper structure and best practices.
To add your own prompt:
- Create a new .md file in ~/.codex/prompts/
- Use argument placeholders:
  - $1 to $9: Positional arguments
  - $ARGUMENTS: All arguments joined by spaces
  - $$: Literal dollar sign
- Restart Codex to load new prompts (a minimal example follows below)
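For illustration, a hypothetical ~/.codex/prompts/fix-issue.md could contain the following; the file name and wording are examples, not a prompt shipped with this repository:

  Review GitHub issue #$1 in this repository.
  Summarize the root cause, propose a fix, and apply it.
  Additional context from the user: $ARGUMENTS

It would then be invoked as /prompts:fix-issue 123 followed by any extra notes.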
Skills are reusable instruction bundles that Codex automatically discovers at startup. Each skill has a name, description, and detailed instructions stored on disk. Codex injects only metadata (name, description, path) into context - the body stays on disk until needed.
Skills are automatically loaded when Codex starts. To use a skill:
- List all skills: Use the /skills command to see all available skills:

  /skills

- Invoke a skill: Use $<skill-name> [prompt] to invoke a skill with an optional prompt:

  $kiro-skill Create a feature spec for user authentication
  $nanobanana-skill Generate an image of a sunset over mountains
Skills are stored in ~/.codex/skills/**/SKILL.md. Only files named exactly SKILL.md are recognized.
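As a rough sketch, a SKILL.md pairs that metadata with the full instructions. The frontmatter keys below (name, description) mirror the metadata Codex injects, though the exact format expected by Codex may differ:

  ---
  name: my-skill
  description: One-line summary that Codex loads at startup
  ---

  Detailed instructions go here. This body stays on disk and is only
  read when the skill is invoked with $my-skill.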
claude-skill - Handoff task to Claude Code CLI
Non-interactive automation mode for hands-off task execution using Claude Code. Use when you want to leverage Claude Code to implement features or review code.
Key Features:
- Multiple permission modes (default, acceptEdits, plan, bypassPermissions)
- Autonomous execution without approval prompts
- Streaming progress updates
- Structured final summaries
Requirements: Claude Code CLI installed (npm install -g @anthropic-ai/claude-code)
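Under the hood, this kind of handoff shells out to the Claude Code CLI in non-interactive (print) mode. A rough sketch of such an invocation, not necessarily the exact flags the skill uses:

  # Hand a task to Claude Code without interactive approval prompts
  claude -p "Implement the login form described in AGENTS.md" --permission-mode acceptEdits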
autonomous-skill - Long-running task automation
Execute complex, long-running tasks across multiple sessions using a dual-agent pattern (Initializer + Executor) with automatic session continuation.
Warning: workflows may pause when Codex requests permissions. Treat this as experimental; expect to babysit early runs and keep iterating on approvals/sandbox settings.
Key Features:
- Dual-agent pattern (Initializer creates task list, Executor completes tasks)
- Auto-continuation across sessions with progress tracking
- Task isolation with per-task directories (.autonomous/<task-name>/)
- Progress persistence via task_list.md and progress.md
- Non-interactive mode execution
Usage:
# Start a new autonomous task
~/.codex/skills/autonomous-skill/scripts/run-session.sh "Build a REST API for todo app"
# Continue an existing task
~/.codex/skills/autonomous-skill/scripts/run-session.sh --task-name build-rest-api-todo --continue
# List all tasks
~/.codex/skills/autonomous-skill/scripts/run-session.sh --list

nanobanana-skill - Image generation with Gemini
Generate or edit images using Google Gemini API via nanobanana. Use when creating, generating, or editing images.
Key Features:
- Image generation with various aspect ratios (square, portrait, landscape, ultra-wide)
- Image editing capabilities
- Multiple model options (gemini-3-pro-image-preview, gemini-2.5-flash-image)
- Resolution options (1K, 2K, 4K)
Requirements:
- GEMINI_API_KEY configured in ~/.nanobanana.env
- Python3 with google-genai, Pillow, python-dotenv
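One way to satisfy these requirements (the install command and key placeholder are illustrative; adjust to your environment):

  # Install the Python dependencies and store the Gemini API key
  pip install google-genai pillow python-dotenv
  echo 'GEMINI_API_KEY=your-key-here' > ~/.nanobanana.env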
youtube-transcribe-skill - Extract YouTube subtitles
Extract subtitles/transcripts from a YouTube video URL and save as a local file.
Key Features:
- Dual extraction methods: CLI (yt-dlp) and browser automation (fallback)
- Automatic subtitle language selection (zh-Hans, zh-Hant, en)
- Cookie handling for age-restricted content
- Saves transcripts to local text files
Requirements:
- yt-dlp (for CLI method), or
- Browser automation MCP server (for fallback method)
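For reference, the CLI path boils down to a yt-dlp call along these lines; the exact flags the skill runs may differ, and VIDEO_ID is a placeholder:

  # Download only the subtitles (manual or auto-generated) in the preferred languages
  yt-dlp --skip-download --write-subs --write-auto-subs --sub-langs "zh-Hans,zh-Hant,en" "https://www.youtube.com/watch?v=VIDEO_ID"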
kiro-skill - Interactive feature development
Interactive feature development workflow from idea to implementation. Creates requirements (EARS format), design documents, and implementation task lists.
Triggered by: "kiro" or references to .kiro/specs/ directory
Workflow:
- Requirements → Define what needs to be built (EARS format with user stories)
- Design → Determine how to build it (architecture, components, data models)
- Tasks → Create actionable implementation steps (test-driven, incremental)
- Execute → Implement tasks one at a time
Storage: Creates files in .kiro/specs/{feature-name}/ directory
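Assuming the usual Kiro convention of one file per phase (an assumption, not verified against this skill), a spec directory would look roughly like:

  .kiro/specs/user-authentication/
  ├── requirements.md   # EARS-format requirements and user stories
  ├── design.md         # Architecture, components, data models
  └── tasks.md          # Incremental, test-driven implementation tasks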
spec-kit-skill - Constitution-based development
GitHub Spec-Kit integration for constitution-based spec-driven development.
Triggered by: "spec-kit", "speckit", "constitution", "specify", or references to .specify/ directory
Prerequisites:
# Install spec-kit CLI
uv tool install specify-cli --from git+https://github.com/github/spec-kit.git
# Initialize project
specify init . --ai codex

7-Phase Workflow:
- Constitution → Establish governing principles
- Specify → Define functional requirements
- Clarify → Resolve ambiguities (max 5 questions)
- Plan → Create technical strategy
- Tasks → Generate dependency-ordered tasks
- Analyze → Validate consistency (read-only)
- Implement → Execute implementation
Approval policies (approval_policy):
- untrusted: Prompt for untrusted commands (recommended)
- on-failure: Only prompt when sandbox commands fail
- on-request: Model decides when to ask
- never: Auto-approve all commands (use with caution)
Sandbox modes (sandbox_mode):
- read-only: Can read files, no writes or network
- workspace-write: Can write to workspace, network configurable
- danger-full-access: Full system access (use in containers only)
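Both are plain keys in config.toml; for example, the values used by this repository's profiles (see below) are:

  approval_policy = "on-request"
  sandbox_mode = "workspace-write"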
For reasoning-capable models (o3, gpt-5):
- Effort: minimal, low, medium, high
- Summary: auto, concise, detailed, none
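In config.toml these are set via the model_reasoning_* keys; model_reasoning_effort appears in the profiles below, and model_reasoning_summary is assumed to be the matching key for summaries (verify against your Codex version):

  model_reasoning_effort = "high"
  model_reasoning_summary = "detailed"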
Control which environment variables are passed to subprocesses:
[shell_environment_policy]
inherit = "all" # all, core, none
exclude = ["AWS_*", "AZURE_*"] # Exclude patterns
set = { CI = "1" } # Force-set values

Define multiple configuration profiles:
[profiles.openrouter]
model = "gpt-5"
model_reasoning_effort = "high"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
model_provider = "openrouter"
[profiles.github]
model = "gpt-5"
model_reasoning_effort = "high"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
model_provider = "github"
[model_providers.github]
name = "OpenAI"
base_url = "http://localhost:4000"
http_headers = { "Authorization" = "Bearer sk-dummy" }
wire_api = "chat"
[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
http_headers = { "Authorization" = "Bearer [YOUR-API-KEY]" }
wire_api = "chat"

Use with: codex --profile openrouter
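To make one of these profiles the default instead of passing --profile every time, Codex also supports a top-level profile key in config.toml (double-check the key name against your Codex CLI version):

  profile = "github"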
Extend Codex with Model Context Protocol servers:
[mcp_servers.context7]
command = "npx"
args = ["-y", "@upstash/context7-mcp@latest"]

Codex automatically reads AGENTS.md files in your project to understand context. Always create one in your project root by running the /init command on your first Codex run.
Contributions welcome! Feel free to:
- Add new custom prompts
- Share alternative configurations
- Improve documentation
- Report issues and suggest features
This project is released under the MIT License. See LICENSE for details.