Tip: enable 1M context in Codex with one command

if you want the fast path, run this:

```sh
curl -fsSL https://blog.yigitkonur.com/enable-codex-1m-context.sh | bash
```

the script creates a timestamped backup of ~/.codex/config.toml, checks whether your config is missing, partial, outdated, or already correct, and then applies the smallest update that gets you to the 1M settings.
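for the curious, that backup-then-update flow can be sketched in a few lines of shell. this is an illustration of the behavior described above, not the script itself; the function name `enable_1m_context` is invented here for the example:

```sh
# Illustrative sketch of the installer's flow, not the real script.
# `enable_1m_context` is a name invented for this example.
enable_1m_context() {
  config="$1"                        # e.g. ~/.codex/config.toml
  mkdir -p "$(dirname "$config")"
  touch "$config"

  # timestamped backup before touching anything
  cp "$config" "$config.bak.$(date +%Y%m%d%H%M%S)"

  # drop any existing (possibly outdated) values for the two keys...
  tmp="$(mktemp)"
  grep -v -e '^model_context_window' -e '^model_auto_compact_token_limit' \
    "$config" > "$tmp" || true

  # ...then append the 1M settings, so the result is the same whether the
  # config was missing, partial, outdated, or already correct
  printf 'model_context_window = 1000000\n' >> "$tmp"
  printf 'model_auto_compact_token_limit = 900000\n' >> "$tmp"
  mv "$tmp" "$config"
}
```

stripping the old keys before appending is what makes the update idempotent: running it twice leaves the same two lines, not four.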

if you want to do it manually instead, add these two lines to ~/.codex/config.toml:

```toml
model_context_window = 1000000
model_auto_compact_token_limit = 900000
```
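if you'd rather not open an editor, one way to append those keys from a terminal looks like this. it assumes the keys aren't already in the file (duplicates would need cleaning up by hand), and it keeps a backup first:

```sh
# Append the two settings to the config path from this post, assuming
# the keys are not already present; back up the file before changing it.
CFG="$HOME/.codex/config.toml"
mkdir -p "$(dirname "$CFG")"
if [ -f "$CFG" ]; then cp "$CFG" "$CFG.bak"; fi   # keep a backup
cat >> "$CFG" <<'EOF'
model_context_window = 1000000
model_auto_compact_token_limit = 900000
EOF
```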

then restart Codex.

that’s it. no long context-window theory, just the switch you need.

OpenAI’s March 5, 2026 GPT-5.4 launch post says Codex has experimental support for the 1M window, and the GPT-5.4 model page lists a 1,050,000-token context window. the practical part is simply enabling it in your local config.