Mirror of https://github.com/openai/codex.git, synced 2026-05-02 04:11:39 +03:00
The TUI was showing the raw configured `model_context_window` until the first `TokenCount` event arrived, even though core had already emitted the effective runtime window on `TurnStarted`. This made the footer, the status-line context window, and the `/status` output briefly inconsistent for models/configs where the effective window differs from the configured value, such as the `gpt-5.4` 1,000,000-token override reported in #13623.

Update the TUI to cache `TurnStarted.model_context_window` immediately, so pre-token-count displays use the runtime effective window, and add regression coverage for the startup path.

---------

Co-authored-by: Charles Cunningham <ccunningham@openai.com>
Co-authored-by: Codex <noreply@openai.com>
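A minimal Rust sketch of the caching behavior described above. All type and field names here (`StatusView`, `Event`, `effective_context_window`, the token counts) are illustrative assumptions, not the actual codex TUI types: the point is only that the displayed window falls back to the configured value until a runtime value is cached, and that `TurnStarted` now populates that cache instead of waiting for the first `TokenCount`.

```rust
// Hypothetical event subset; the real codex event protocol is richer.
#[derive(Clone, Copy)]
enum Event {
    TurnStarted { model_context_window: Option<u64> },
    TokenCount { model_context_window: Option<u64> },
}

// Hypothetical stand-in for the TUI state backing the footer and /status.
struct StatusView {
    configured_context_window: Option<u64>,
    cached_context_window: Option<u64>,
}

impl StatusView {
    fn new(configured: Option<u64>) -> Self {
        Self {
            configured_context_window: configured,
            cached_context_window: None,
        }
    }

    fn on_event(&mut self, event: Event) {
        match event {
            // The fix: adopt the runtime effective window as soon as the
            // turn starts, rather than only on the first TokenCount.
            Event::TurnStarted { model_context_window }
            | Event::TokenCount { model_context_window } => {
                if model_context_window.is_some() {
                    self.cached_context_window = model_context_window;
                }
            }
        }
    }

    // Displays read through this: runtime value wins over the raw config.
    fn effective_context_window(&self) -> Option<u64> {
        self.cached_context_window.or(self.configured_context_window)
    }
}

fn main() {
    // Configured window differs from the runtime override (illustrative numbers).
    let mut view = StatusView::new(Some(272_000));
    assert_eq!(view.effective_context_window(), Some(272_000));

    // Previously the display stayed at the configured value until TokenCount;
    // with the fix, TurnStarted updates it immediately.
    view.on_event(Event::TurnStarted {
        model_context_window: Some(1_000_000),
    });
    assert_eq!(view.effective_context_window(), Some(1_000_000));
    println!("ok");
}
```

The regression case corresponds to the assertions in `main`: before any `TokenCount` arrives, the display already reflects the window carried on `TurnStarted`.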