## Why

Enterprises can already constrain approvals, sandboxing, and web search through `requirements.toml` and MDM, but feature flags were still only configurable as managed defaults. That meant an enterprise could suggest feature values, but it could not actually pin them. This change closes that gap and makes enterprise feature requirements behave like the other constrained settings. The effective feature set now stays consistent with enterprise requirements during config load, when config writes are validated, and when runtime code mutates feature flags later in the session.

It also tightens the runtime API for managed features. `ManagedFeatures` now follows the same constraint-oriented shape as `Constrained<T>` instead of exposing panic-prone mutation helpers, and production code can no longer construct it through an unconstrained `From<Features>` path.

The PR also hardens the `compact_resume_fork` integration coverage on Windows. After the feature-management changes, `compact_resume_after_second_compaction_preserves_history` was overflowing the libtest/Tokio thread stacks on Windows, so the test now uses an explicit larger-stack harness as a pragmatic mitigation. That may not be the ideal root-cause fix, and it merits a parallel investigation into whether part of the async future chain should be boxed to reduce stack pressure instead.

## What Changed

Enterprises can now pin feature values in `requirements.toml` with the requirements-side `features` table:

```toml
[features]
personality = true
unified_exec = false
```

Only canonical feature keys are allowed in the requirements `features` table; omitted keys remain unconstrained.

- Added a requirements-side pinned feature map to `ConfigRequirementsToml`, threaded it through source-preserving requirements merge and normalization in `codex-config`, and made the TOML surface use `[features]` (while still accepting legacy `[feature_requirements]` for compatibility).
- Exposed `featureRequirements` from `configRequirements/read`, regenerated the JSON/TypeScript schema artifacts, and updated the app-server README.
- Wrapped the effective feature set in `ManagedFeatures`, backed by `ConstrainedWithSource<Features>`, and changed its API to mirror `Constrained<T>`: `can_set(...)`, `set(...) -> ConstraintResult<()>`, and result-returning `enable` / `disable` / `set_enabled` helpers.
- Removed the legacy-usage and bulk-map passthroughs from `ManagedFeatures`; callers that need those behaviors now mutate a plain `Features` value and reapply it through `set(...)`, so the constrained wrapper remains the enforcement boundary.
- Removed the production loophole for constructing unconstrained `ManagedFeatures`. Non-test code now creates it through the configured feature-loading path, and `impl From<Features> for ManagedFeatures` is restricted to `#[cfg(test)]`.
- Rejected legacy feature aliases in enterprise feature requirements, and returned a load error when a pinned combination cannot survive dependency normalization.
- Validated config writes against enterprise feature requirements before persisting changes, including explicit conflicting writes and profile-specific feature states that normalize into invalid combinations.
- Updated runtime and TUI feature-toggle paths to use the constrained setter API and to persist or apply the effective post-constraint value rather than the requested value.
- Updated the `core_test_support` Bazel target to include the bundled core model-catalog fixtures in its runtime data, so helper code that resolves `core/models.json` through runfiles works in remote Bazel test environments.
- Renamed the core config test coverage to emphasize that effective feature values are normalized at runtime, while conflicting persisted config writes are rejected.
- Ran `compact_resume_after_second_compaction_preserves_history` inside an explicit 8 MiB test thread and Tokio runtime worker stack, following the existing larger-stack integration-test pattern, to keep the Windows `compact_resume_fork` test slice from aborting while a parallel investigation continues into whether some of the underlying async futures should be boxed.

## Verification

- `cargo test -p codex-config`
- `cargo test -p codex-core feature_requirements_ -- --nocapture`
- `cargo test -p codex-core load_requirements_toml_produces_expected_constraints -- --nocapture`
- `cargo test -p codex-core compact_resume_after_second_compaction_preserves_history -- --nocapture`
- `cargo test -p codex-core compact_resume_fork -- --nocapture`
- Re-ran the built `codex-core` `tests/all` binary with `RUST_MIN_STACK=262144` for `compact_resume_after_second_compaction_preserves_history` to confirm the explicit-stack harness fixes the deterministic low-stack repro.
- `cargo test -p codex-core` — this still fails locally in unrelated integration areas that expect the `codex` / `test_stdio_server` binaries or hit existing `search_tool` wiremock mismatches.

## Docs

`developers.openai.com/codex` should document the requirements-side `[features]` table for enterprise and MDM-managed configuration, including that it only accepts canonical feature keys and that conflicting config writes are rejected.
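The pinning semantics above can be modeled independently of the Rust implementation. This is an illustrative Python sketch (all names here are hypothetical; the real enforcement lives in the `ManagedFeatures` / `Constrained<T>` types described above) showing the two enforcement points: writes that conflict with a pin are rejected, and the effective set always reflects the pins.

```python
# Illustrative model of pinned enterprise feature requirements.
# All names are hypothetical; the real enforcement lives in the Rust
# ManagedFeatures / Constrained<T> types described in this PR.

def can_set(pins: dict, key: str, value: bool) -> bool:
    """A feature write is allowed unless it conflicts with a pinned value."""
    return key not in pins or pins[key] == value

def apply_pins(requested: dict, pins: dict) -> dict:
    """The effective feature set always reflects the pins, whatever was requested."""
    return {**requested, **pins}

pins = {"personality": True, "unified_exec": False}
assert can_set(pins, "personality", True)       # matches the pin: allowed
assert not can_set(pins, "unified_exec", True)  # conflicts with the pin: rejected
assert can_set(pins, "web_search", False)       # unpinned key: unconstrained
assert apply_pins({"unified_exec": True}, pins) == {
    "unified_exec": False,
    "personality": True,
}
```

The real code additionally normalizes feature dependencies after pinning and fails the load when a pinned combination cannot survive that normalization; this sketch omits that step.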
# codex-app-server

`codex app-server` is the interface Codex uses to power rich clients such as the Codex VS Code extension.

## Table of Contents
- Protocol
- Message Schema
- Core Primitives
- Lifecycle Overview
- Initialization
- API Overview
- Events
- Approvals
- Skills
- Apps
- Auth endpoints
- Experimental API Opt-in
## Protocol

Similar to MCP, codex app-server supports bidirectional communication using JSON-RPC 2.0 messages (with the `"jsonrpc": "2.0"` header omitted on the wire).

Supported transports:

- stdio (`--listen stdio://`, default): newline-delimited JSON (JSONL)
- websocket (`--listen ws://IP:PORT`): one JSON-RPC message per websocket text frame (experimental / unsupported)
Websocket transport is currently experimental and unsupported. Do not rely on it for production workloads.
Tracing/log output:

- `RUST_LOG` controls log filtering/verbosity.
- Set `LOG_FORMAT=json` to emit app-server tracing logs to `stderr` as JSON (one event per line).
Backpressure behavior:
- The server uses bounded queues between transport ingress, request processing, and outbound writes.
- When request ingress is saturated, new requests are rejected with JSON-RPC error code `-32001` and message `"Server overloaded; retry later."`.
- Clients should treat this as retryable and use exponential backoff with jitter.
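A client can follow that retry guidance with a loop like the sketch below. The `send_request` callable is a placeholder for your transport; the attempt count and delays are arbitrary choices, not values the server mandates.

```python
import random
import time

OVERLOADED = -32001  # JSON-RPC error code returned when ingress is saturated

def send_with_backoff(send_request, request, max_attempts=5, base_delay=0.1):
    """Retry a JSON-RPC request on -32001 using exponential backoff with jitter.

    `send_request` is a placeholder: any callable that sends one request and
    returns the parsed JSON-RPC response dict.
    """
    response = None
    for attempt in range(max_attempts):
        response = send_request(request)
        error = response.get("error")
        if error is None or error.get("code") != OVERLOADED:
            return response
        # Full jitter: sleep a random fraction of the exponential ceiling.
        time.sleep(random.uniform(0, base_delay * 2 ** attempt))
    return response  # still overloaded after max_attempts
```

Any non-overloaded response (success or a different error) is returned immediately; only `-32001` triggers a retry.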
## Message Schema

Currently, you can dump a TypeScript version of the schema using `codex app-server generate-ts`, or a JSON Schema bundle via `codex app-server generate-json-schema`. Each output is specific to the version of Codex you used to run the command, so the generated artifacts are guaranteed to match that version.

```shell
codex app-server generate-ts --out DIR
codex app-server generate-json-schema --out DIR
```
## Core Primitives

The API exposes three top-level primitives representing an interaction between a user and Codex:
- Thread: A conversation between a user and the Codex agent. Each thread contains multiple turns.
- Turn: One turn of the conversation, typically starting with a user message and finishing with an agent message. Each turn contains multiple items.
- Item: Represents user inputs and agent outputs as part of the turn, persisted and used as the context for future conversations. Example items include user message, agent reasoning, agent message, shell command, file edit, etc.
Use the thread APIs to create, list, or archive conversations. Drive a conversation with turn APIs and stream progress via turn notifications.
## Lifecycle Overview

- Initialize once per connection: Immediately after opening a transport connection, send an `initialize` request with your client metadata, then emit an `initialized` notification. Any other request on that connection before this handshake gets rejected.
- Start (or resume) a thread: Call `thread/start` to open a fresh conversation. The response returns the thread object and you’ll also get a `thread/started` notification. If you’re continuing an existing conversation, call `thread/resume` with its ID instead. If you want to branch from an existing conversation, call `thread/fork` to create a new thread id with copied history. The returned `thread.ephemeral` flag tells you whether the session is intentionally in-memory only; when it is `true`, `thread.path` is `null`.
- Begin a turn: To send user input, call `turn/start` with the target `threadId` and the user's input. Optional fields let you override model, cwd, sandbox policy, etc. This immediately returns the new turn object. The app-server emits `turn/started` when that turn actually begins running.
- Stream events: After `turn/start`, keep reading JSON-RPC notifications on stdout. You’ll see `item/started`, `item/completed`, deltas like `item/agentMessage/delta`, tool progress, etc. These represent streaming model output plus any side effects (commands, tool calls, reasoning notes).
- Finish the turn: When the model is done (or the turn is interrupted via the `turn/interrupt` call), the server sends `turn/completed` with the final turn state and token usage.
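Over the default stdio transport each message is one JSON object per line, so the handshake can be framed with nothing more than `json` and newlines. A minimal sketch (the frames would be written to the server process's stdin in practice; the `clientInfo` values are placeholders):

```python
import json

def frame(message) -> bytes:
    """Serialize one JSON-RPC message as a single JSONL line (stdio transport)."""
    return (json.dumps(message) + "\n").encode("utf-8")

def handshake_frames(client_info) -> list:
    """The two messages every connection must send before any other request:
    an `initialize` request, then an `initialized` notification."""
    return [
        frame({"method": "initialize", "id": 0,
               "params": {"clientInfo": client_info}}),
        # Notifications carry no "id"; the server sends no response for them.
        frame({"method": "initialized"}),
    ]

frames = handshake_frames(
    {"name": "my_client", "title": "My Client", "version": "0.1.0"}
)
assert all(f.endswith(b"\n") for f in frames)
assert json.loads(frames[0])["method"] == "initialize"
```

Only after both frames are written may the client issue `thread/start`, `turn/start`, and the rest of the API on that connection.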
## Initialization

Clients must send a single `initialize` request per transport connection before invoking any other method on that connection, then acknowledge with an `initialized` notification. The server returns the user agent string it will present to upstream services; subsequent requests issued before initialization receive a "Not initialized" error, and repeated `initialize` calls on the same connection receive an "Already initialized" error.

`initialize.params.capabilities` also supports per-connection notification opt-out via `optOutNotificationMethods`, which is a list of exact method names to suppress for that connection. Matching is exact (no wildcards/prefixes). Unknown method names are accepted and ignored.
Applications building on top of codex app-server should identify themselves via the clientInfo parameter.
Important: clientInfo.name is used to identify the client for the OpenAI Compliance Logs Platform. If
you are developing a new Codex integration that is intended for enterprise use, please contact us to get it
added to a known clients list. For more context: https://chatgpt.com/admin/api-reference#tag/Logs:-Codex
Example (from OpenAI's official VSCode extension):
{
"method": "initialize",
"id": 0,
"params": {
"clientInfo": {
"name": "codex_vscode",
"title": "Codex VS Code Extension",
"version": "0.1.0"
}
}
}
Example with notification opt-out:
{
"method": "initialize",
"id": 1,
"params": {
"clientInfo": {
"name": "my_client",
"title": "My Client",
"version": "0.1.0"
},
"capabilities": {
"experimentalApi": true,
"optOutNotificationMethods": [
"codex/event/session_configured",
"item/agentMessage/delta"
]
}
}
}
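Because matching is exact, a client-side mirror of the suppression rule is plain set membership; prefixes and wildcards never match. A small sketch:

```python
def should_deliver(method: str, opted_out: set) -> bool:
    """Mirror of optOutNotificationMethods: exact method-name matching only."""
    return method not in opted_out

opted_out = {"codex/event/session_configured", "item/agentMessage/delta"}
assert not should_deliver("item/agentMessage/delta", opted_out)  # exact match
assert should_deliver("item/agentMessage", opted_out)            # prefix: no match
assert should_deliver("item/completed", opted_out)               # unrelated method
```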
## API Overview

- `thread/start` — create a new thread; emits `thread/started` (including the current `thread.status`) and auto-subscribes you to turn/item events for that thread.
- `thread/resume` — reopen an existing thread by id so subsequent `turn/start` calls append to it.
- `thread/fork` — fork an existing thread into a new thread id by copying the stored history; emits `thread/started` (including the current `thread.status`) and auto-subscribes you to turn/item events for the new thread.
- `thread/list` — page through stored rollouts; supports cursor-based pagination and optional `modelProviders`, `sourceKinds`, `archived`, `cwd`, and `searchTerm` filters. Each returned `thread` includes `status` (`ThreadStatus`), defaulting to `notLoaded` when the thread is not currently loaded.
- `thread/loaded/list` — list the thread ids currently loaded in memory.
- `thread/read` — read a stored thread by id without resuming it; optionally include turns via `includeTurns`. The returned `thread` includes `status` (`ThreadStatus`), defaulting to `notLoaded` when the thread is not currently loaded.
- `thread/metadata/update` — patch stored thread metadata in sqlite; currently supports updating persisted `gitInfo` fields and returns the refreshed `thread`.
- `thread/status/changed` — notification emitted when a loaded thread’s status changes (`threadId` + new `status`).
- `thread/archive` — move a thread’s rollout file into the archived directory; returns `{}` on success and emits `thread/archived`.
- `thread/unsubscribe` — unsubscribe this connection from thread turn/item events. If this was the last subscriber, the server shuts down and unloads the thread, then emits `thread/closed`.
- `thread/name/set` — set or update a thread’s user-facing name for either a loaded thread or a persisted rollout; returns `{}` on success. Thread names are not required to be unique; name lookups resolve to the most recently updated thread.
- `thread/unarchive` — move an archived rollout file back into the sessions directory; returns the restored `thread` on success and emits `thread/unarchived`.
- `thread/compact/start` — trigger conversation history compaction for a thread; returns `{}` immediately while progress streams through standard turn/item notifications.
- `thread/backgroundTerminals/clean` — terminate all running background terminals for a thread (experimental; requires `capabilities.experimentalApi`); returns `{}` when the cleanup request is accepted.
- `thread/rollback` — drop the last N turns from the agent’s in-memory context and persist a rollback marker in the rollout so future resumes see the pruned history; returns the updated `thread` (with `turns` populated) on success.
- `turn/start` — add user input to a thread and begin Codex generation; responds with the initial `turn` object and streams `turn/started`, `item/*`, and `turn/completed` notifications. For `collaborationMode`, `settings.developer_instructions: null` means "use built-in instructions for the selected mode".
- `turn/steer` — add user input to an already in-flight turn without starting a new turn; returns the active `turnId` that accepted the input.
- `turn/interrupt` — request cancellation of an in-flight turn by `(thread_id, turn_id)`; success is an empty `{}` response and the turn finishes with `status: "interrupted"`.
- `thread/realtime/start` — start a thread-scoped realtime session (experimental); returns `{}` and streams `thread/realtime/*` notifications.
- `thread/realtime/appendAudio` — append an input audio chunk to the active realtime session (experimental); returns `{}`.
- `thread/realtime/appendText` — append text input to the active realtime session (experimental); returns `{}`.
- `thread/realtime/stop` — stop the active realtime session for the thread (experimental); returns `{}`.
- `review/start` — kick off Codex’s automated reviewer for a thread; responds like `turn/start` and emits `item/started` / `item/completed` notifications with `enteredReviewMode` and `exitedReviewMode` items, plus a final assistant `agentMessage` containing the review.
- `command/exec` — run a single command under the server sandbox without starting a thread/turn (handy for utilities and validation).
- `model/list` — list available models (set `includeHidden: true` to include entries with `hidden: true`), with reasoning effort options, optional legacy `upgrade` model ids, optional `upgradeInfo` metadata (`model`, `upgradeCopy`, `modelLink`, `migrationMarkdown`), and optional `availabilityNux` metadata.
- `experimentalFeature/list` — list feature flags with stage metadata (`beta`, `underDevelopment`, `stable`, etc.), enabled/default-enabled state, and cursor pagination. For non-beta flags, `displayName` / `description` / `announcement` are `null`.
- `collaborationMode/list` — list available collaboration mode presets (experimental, no pagination). This response omits built-in developer instructions; clients should either pass `settings.developer_instructions: null` when setting a mode to use Codex's built-in instructions, or provide their own instructions explicitly.
- `skills/list` — list skills for one or more `cwd` values (optional `forceReload`).
- `skills/changed` — notification emitted when watched local skill files change.
- `skills/remote/list` — list public remote skills (under development; do not call from production clients yet).
- `skills/remote/export` — download a remote skill by `hazelnutId` into `skills` under `codex_home` (under development; do not call from production clients yet).
- `app/list` — list available apps.
- `skills/config/write` — write user-level skill config by path.
- `mcpServer/oauth/login` — start an OAuth login for a configured MCP server; returns an `authorization_url` and later emits `mcpServer/oauthLogin/completed` once the browser flow finishes.
- `tool/requestUserInput` — prompt the user with 1–3 short questions for a tool call and return their answers (experimental).
- `config/mcpServer/reload` — reload MCP server config from disk and queue a refresh for loaded threads (applied on each thread's next active turn); returns `{}`. Use this after editing `config.toml` without restarting the server.
- `mcpServerStatus/list` — enumerate configured MCP servers with their tools, resources, resource templates, and auth status; supports cursor+limit pagination.
- `windowsSandbox/setupStart` — start Windows sandbox setup for the selected mode (`elevated` or `unelevated`); returns `{ started: true }` immediately and later emits `windowsSandbox/setupCompleted`.
- `feedback/upload` — submit a feedback report (classification + optional reason/logs, `conversation_id`, and optional `extraLogFiles` attachments array); returns the tracking thread id.
- `config/read` — fetch the effective config on disk after resolving config layering.
- `externalAgentConfig/detect` — detect migratable external-agent artifacts with `includeHome` and optional `cwds`; each detected item includes `cwd` (`null` for home).
- `externalAgentConfig/import` — apply selected external-agent migration items by passing explicit `migrationItems` with `cwd` (`null` for home).
- `config/value/write` — write a single config key/value to the user's `config.toml` on disk.
- `config/batchWrite` — apply multiple config edits atomically to the user's `config.toml` on disk.
- `configRequirements/read` — fetch loaded requirements constraints from `requirements.toml` and/or MDM (or `null` if none are configured), including allow-lists (`allowedApprovalPolicies`, `allowedSandboxModes`, `allowedWebSearchModes`), pinned feature values (`featureRequirements`), `enforceResidency`, and `network` constraints.
### Example: Start or resume a thread
Start a fresh thread when you need a new Codex conversation.
{ "method": "thread/start", "id": 10, "params": {
// Optionally set config settings. If not specified, will use the user's
// current config settings.
"model": "gpt-5.1-codex",
"cwd": "/Users/me/project",
"approvalPolicy": "never",
"sandbox": "workspaceWrite",
"personality": "friendly",
"serviceName": "my_app_server_client", // optional metrics tag (`service_name`)
// Experimental: requires opt-in
"dynamicTools": [
{
"name": "lookup_ticket",
"description": "Fetch a ticket by id",
"inputSchema": {
"type": "object",
"properties": {
"id": { "type": "string" }
},
"required": ["id"]
}
}
],
} }
{ "id": 10, "result": {
"thread": {
"id": "thr_123",
"preview": "",
"modelProvider": "openai",
"createdAt": 1730910000
}
} }
{ "method": "thread/started", "params": { "thread": { … } } }
Valid personality values are "friendly", "pragmatic", and "none". When "none" is selected, the personality placeholder is replaced with an empty string.
To continue a stored session, call thread/resume with the thread.id you previously recorded. The response shape matches thread/start, and no additional notifications are emitted. You can also pass the same configuration overrides supported by thread/start, such as personality:
{ "method": "thread/resume", "id": 11, "params": {
"threadId": "thr_123",
"personality": "friendly"
} }
{ "id": 11, "result": { "thread": { "id": "thr_123", … } } }
To branch from a stored session, call thread/fork with the thread.id. This creates a new thread id and emits a thread/started notification for it:
{ "method": "thread/fork", "id": 12, "params": { "threadId": "thr_123" } }
{ "id": 12, "result": { "thread": { "id": "thr_456", … } } }
{ "method": "thread/started", "params": { "thread": { … } } }
Experimental API: thread/start, thread/resume, and thread/fork accept persistExtendedHistory: true to persist a richer subset of ThreadItems for non-lossy history when calling thread/read, thread/resume, and thread/fork later. This does not backfill events that were not persisted previously.
### Example: List threads (with pagination & filters)

`thread/list` lets you render a history UI. Results default to `createdAt` descending (newest first). Pass any combination of:

- `cursor` — opaque string from a prior response; omit for the first page.
- `limit` — server defaults to a reasonable page size if unset.
- `sortKey` — `created_at` (default) or `updated_at`.
- `modelProviders` — restrict results to specific providers; unset, null, or an empty array will include all providers.
- `sourceKinds` — restrict results to specific sources; omit or pass `[]` for interactive sessions only (`cli`, `vscode`).
- `archived` — when `true`, list archived threads only. When `false` or `null`, list non-archived threads (default).
- `cwd` — restrict results to threads whose session cwd exactly matches this path.
- `searchTerm` — restrict results to threads whose extracted title contains this substring (case-sensitive).

Responses include `agentNickname` and `agentRole` for AgentControl-spawned thread sub-agents when available.
Example:
{ "method": "thread/list", "id": 20, "params": {
"cursor": null,
"limit": 25,
"sortKey": "created_at"
} }
{ "id": 20, "result": {
"data": [
{ "id": "thr_a", "preview": "Create a TUI", "modelProvider": "openai", "createdAt": 1730831111, "updatedAt": 1730831111, "status": { "type": "notLoaded" }, "agentNickname": "Atlas", "agentRole": "explorer" },
{ "id": "thr_b", "preview": "Fix tests", "modelProvider": "openai", "createdAt": 1730750000, "updatedAt": 1730750000, "status": { "type": "notLoaded" } }
],
"nextCursor": "opaque-token-or-null"
} }
When nextCursor is null, you’ve reached the final page.
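The cursor contract lends itself to a simple drain loop. A sketch assuming a `call` function that performs one `thread/list` request and returns the parsed `result` object:

```python
def list_all_threads(call, limit=25):
    """Page through thread/list until nextCursor comes back null (None).

    `call` is a placeholder: any callable that sends one request dict and
    returns the parsed JSON-RPC `result` object.
    """
    threads, cursor = [], None
    while True:
        result = call({"method": "thread/list",
                       "params": {"cursor": cursor, "limit": limit}})
        threads.extend(result["data"])
        cursor = result["nextCursor"]
        if cursor is None:  # null cursor marks the final page
            return threads

# Exercise the loop against canned pages instead of a live server.
pages = {
    None: {"data": [{"id": "thr_a"}], "nextCursor": "p2"},
    "p2": {"data": [{"id": "thr_b"}], "nextCursor": None},
}
all_threads = list_all_threads(lambda req: pages[req["params"]["cursor"]])
assert [t["id"] for t in all_threads] == ["thr_a", "thr_b"]
```

Treat the cursor as opaque: pass it back exactly as received rather than parsing or constructing it.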
### Example: List loaded threads
thread/loaded/list returns thread ids currently loaded in memory. This is useful when you want to check which sessions are active without scanning rollouts on disk.
{ "method": "thread/loaded/list", "id": 21 }
{ "id": 21, "result": {
"data": ["thr_123", "thr_456"]
} }
### Example: Track thread status changes
thread/status/changed is emitted whenever a loaded thread's status changes after it has already been introduced to the client:
- Includes `threadId` and the new `status`.
- Status can be `notLoaded`, `idle`, `systemError`, or `active` (with `activeFlags`; `active` implies running).
- `thread/start`, `thread/fork`, and detached review threads do not emit a separate initial `thread/status/changed`; their `thread/started` notification already carries the current `thread.status`.
{ "method": "thread/status/changed", "params": {
"threadId": "thr_123",
"status": { "type": "active", "activeFlags": [] }
} }
### Example: Unsubscribe from a loaded thread
thread/unsubscribe removes the current connection's subscription to a thread. The response status is one of:
- `unsubscribed` when the connection was subscribed and is now removed.
- `notSubscribed` when the connection was not subscribed to that thread.
- `notLoaded` when the thread is not loaded.
If this was the last subscriber, the server unloads the thread and emits thread/closed and a thread/status/changed transition to notLoaded.
{ "method": "thread/unsubscribe", "id": 22, "params": { "threadId": "thr_123" } }
{ "id": 22, "result": { "status": "unsubscribed" } }
{ "method": "thread/status/changed", "params": {
"threadId": "thr_123",
"status": { "type": "notLoaded" }
} }
{ "method": "thread/closed", "params": { "threadId": "thr_123" } }
### Example: Read a thread
Use thread/read to fetch a stored thread by id without resuming it. Pass includeTurns when you want the rollout history loaded into thread.turns. The returned thread includes agentNickname and agentRole for AgentControl-spawned thread sub-agents when available.
{ "method": "thread/read", "id": 22, "params": { "threadId": "thr_123" } }
{ "id": 22, "result": {
"thread": { "id": "thr_123", "status": { "type": "notLoaded" }, "turns": [] }
} }
{ "method": "thread/read", "id": 23, "params": { "threadId": "thr_123", "includeTurns": true } }
{ "id": 23, "result": {
"thread": { "id": "thr_123", "status": { "type": "notLoaded" }, "turns": [ ... ] }
} }
### Example: Update stored thread metadata
Use thread/metadata/update to patch sqlite-backed metadata for a thread without resuming it. Today this supports persisted gitInfo; omitted fields are left unchanged, while explicit null clears a stored value.
{ "method": "thread/metadata/update", "id": 24, "params": {
"threadId": "thr_123",
"gitInfo": { "branch": "feature/sidebar-pr" }
} }
{ "id": 24, "result": {
"thread": {
"id": "thr_123",
"gitInfo": { "sha": null, "branch": "feature/sidebar-pr", "originUrl": null }
}
} }
{ "method": "thread/metadata/update", "id": 25, "params": {
"threadId": "thr_123",
"gitInfo": { "branch": null }
} }
{ "id": 25, "result": {
"thread": {
"id": "thr_123",
"gitInfo": null
}
} }
### Example: Archive a thread
Use thread/archive to move the persisted rollout (stored as a JSONL file on disk) into the archived sessions directory.
{ "method": "thread/archive", "id": 21, "params": { "threadId": "thr_b" } }
{ "id": 21, "result": {} }
{ "method": "thread/archived", "params": { "threadId": "thr_b" } }
An archived thread will not appear in thread/list unless archived is set to true.
### Example: Unarchive a thread
Use thread/unarchive to move an archived rollout back into the sessions directory.
{ "method": "thread/unarchive", "id": 24, "params": { "threadId": "thr_b" } }
{ "id": 24, "result": { "thread": { "id": "thr_b" } } }
{ "method": "thread/unarchived", "params": { "threadId": "thr_b" } }
### Example: Trigger thread compaction
Use thread/compact/start to trigger manual history compaction for a thread. The request returns immediately with {}.
Progress is emitted as standard turn/* and item/* notifications on the same threadId. Clients should expect a single compaction item:
- `item/started` with `item: { "type": "contextCompaction", ... }`
- `item/completed` with the same `contextCompaction` item id
While compaction is running, the thread is effectively in a turn so clients should surface progress UI based on the notifications.
{ "method": "thread/compact/start", "id": 25, "params": { "threadId": "thr_b" } }
{ "id": 25, "result": {} }
### Example: Start a turn (send user input)
Turns attach user input (text or images) to a thread and trigger Codex generation. The input field is a list of discriminated unions:
- `{"type":"text","text":"Explain this diff"}`
- `{"type":"image","url":"https://…png"}`
- `{"type":"localImage","path":"/tmp/screenshot.png"}`
You can optionally specify config overrides on the new turn. If specified, these settings become the default for subsequent turns on the same thread. outputSchema applies only to the current turn.
{ "method": "turn/start", "id": 30, "params": {
"threadId": "thr_123",
"input": [ { "type": "text", "text": "Run tests" } ],
// Below are optional config overrides
"cwd": "/Users/me/project",
"approvalPolicy": "unlessTrusted",
"sandboxPolicy": {
"type": "workspaceWrite",
"writableRoots": ["/Users/me/project"],
"networkAccess": true
},
"model": "gpt-5.1-codex",
"effort": "medium",
"summary": "concise",
"personality": "friendly",
// Optional JSON Schema to constrain the final assistant message for this turn.
"outputSchema": {
"type": "object",
"properties": { "answer": { "type": "string" } },
"required": ["answer"],
"additionalProperties": false
}
} }
{ "id": 30, "result": { "turn": {
"id": "turn_456",
"status": "inProgress",
"items": [],
"error": null
} } }
### Example: Start a turn (invoke a skill)
Invoke a skill explicitly by including $<skill-name> in the text input and adding a skill input item alongside it.
{ "method": "turn/start", "id": 33, "params": {
"threadId": "thr_123",
"input": [
{ "type": "text", "text": "$skill-creator Add a new skill for triaging flaky CI and include step-by-step usage." },
{ "type": "skill", "name": "skill-creator", "path": "/Users/me/.codex/skills/skill-creator/SKILL.md" }
]
} }
{ "id": 33, "result": { "turn": {
"id": "turn_457",
"status": "inProgress",
"items": [],
"error": null
} } }
### Example: Start a turn (invoke an app)
Invoke an app by including $<app-slug> in the text input and adding a mention input item with the app id in app://<connector-id> form.
{ "method": "turn/start", "id": 34, "params": {
"threadId": "thr_123",
"input": [
{ "type": "text", "text": "$demo-app Summarize the latest updates." },
{ "type": "mention", "name": "Demo App", "path": "app://demo-app" }
]
} }
{ "id": 34, "result": { "turn": {
"id": "turn_458",
"status": "inProgress",
"items": [],
"error": null
} } }
### Example: Interrupt an active turn
You can cancel a running Turn with turn/interrupt.
{ "method": "turn/interrupt", "id": 31, "params": {
"threadId": "thr_123",
"turnId": "turn_456"
} }
{ "id": 31, "result": {} }
The server requests cancellations for running subprocesses, then emits a `turn/completed` event with `status: "interrupted"`. Rely on the `turn/completed` notification to know when Codex-side cleanup is done.
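Because cleanup is only finished once `turn/completed` arrives, clients typically keep draining notifications after sending `turn/interrupt`. A sketch over an iterable of already-parsed messages (how you parse the JSONL stream is up to your transport code):

```python
def wait_for_completion(messages, turn_id):
    """Drain notifications until turn/completed arrives for the given turn.

    `messages` is a placeholder for your notification stream: any iterable of
    parsed JSON-RPC message dicts.
    """
    for msg in messages:
        if (msg.get("method") == "turn/completed"
                and msg["params"]["turn"]["id"] == turn_id):
            return msg["params"]["turn"]
    return None  # stream ended without a completion for this turn

# Canned stream: an unrelated item event, then the completion we care about.
stream = [
    {"method": "item/completed", "params": {"item": {"id": "item_1"}}},
    {"method": "turn/completed",
     "params": {"turn": {"id": "turn_456", "status": "interrupted"}}},
]
turn = wait_for_completion(stream, "turn_456")
assert turn["status"] == "interrupted"
```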
### Example: Clean background terminals
Use thread/backgroundTerminals/clean to terminate all running background terminals associated with a thread. This method is experimental and requires capabilities.experimentalApi = true.
{ "method": "thread/backgroundTerminals/clean", "id": 35, "params": {
"threadId": "thr_123"
} }
{ "id": 35, "result": {} }
### Example: Steer an active turn
Use turn/steer to append additional user input to the currently active turn. This does not emit
turn/started and does not accept turn context overrides.
{ "method": "turn/steer", "id": 32, "params": {
"threadId": "thr_123",
"input": [ { "type": "text", "text": "Actually focus on failing tests first." } ],
"expectedTurnId": "turn_456"
} }
{ "id": 32, "result": { "turnId": "turn_456" } }
expectedTurnId is required. If there is no active turn (or expectedTurnId does not match the active turn), the request fails with an invalid request error.
### Example: Request a code review
Use review/start to run Codex’s reviewer on the currently checked-out project. The request takes the thread id plus a target describing what should be reviewed:
- `{"type":"uncommittedChanges"}` — staged, unstaged, and untracked files.
- `{"type":"baseBranch","branch":"main"}` — diff against the provided branch’s upstream (see prompt for the exact `git merge-base` / `git diff` instructions Codex will run).
- `{"type":"commit","sha":"abc1234","title":"Optional subject"}` — review a specific commit.
- `{"type":"custom","instructions":"Free-form reviewer instructions"}` — fallback prompt equivalent to the legacy manual review request.
- `delivery` (`"inline"` or `"detached"`, default `"inline"`) — where the review runs:
  - `"inline"`: run the review as a new turn on the existing thread. The response’s `reviewThreadId` equals the original `threadId`, and no new `thread/started` notification is emitted.
  - `"detached"`: fork a new review thread from the parent conversation and run the review there. The response’s `reviewThreadId` is the id of this new review thread, and the server emits a `thread/started` notification for it before streaming review items.
Example request/response:
{ "method": "review/start", "id": 40, "params": {
"threadId": "thr_123",
"delivery": "inline",
"target": { "type": "commit", "sha": "1234567deadbeef", "title": "Polish tui colors" }
} }
{ "id": 40, "result": {
"turn": {
"id": "turn_900",
"status": "inProgress",
"items": [
{ "type": "userMessage", "id": "turn_900", "content": [ { "type": "text", "text": "Review commit 1234567: Polish tui colors" } ] }
],
"error": null
},
"reviewThreadId": "thr_123"
} }
For a detached review, use "delivery": "detached". The response is the same shape, but reviewThreadId will be the id of the new review thread (different from the original threadId). The server also emits a thread/started notification for that new thread before streaming the review turn.
Codex streams the usual turn/started notification followed by an item/started
with an enteredReviewMode item so clients can show progress:
{
"method": "item/started",
"params": {
"item": {
"type": "enteredReviewMode",
"id": "turn_900",
"review": "current changes"
}
}
}
When the reviewer finishes, the server emits item/started and item/completed
containing an exitedReviewMode item with the final review text:
```json
{
  "method": "item/completed",
  "params": {
    "item": {
      "type": "exitedReviewMode",
      "id": "turn_900",
      "review": "Looks solid overall...\n\n- Prefer Stylize helpers — app.rs:10-20\n ..."
    }
  }
}
```
The review string is plain text that already bundles the overall explanation plus a bullet list for each structured finding (matching `ThreadItem::ExitedReviewMode` in the generated schema). Use this notification to render the reviewer output in your client.
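Since the review string is plain text, a client that wants structured findings has to split it heuristically. A minimal TypeScript sketch, assuming findings are lines prefixed with `- ` as in the example payload above (the parser is invented, not part of any official SDK):

```typescript
// Hypothetical helper: split an exitedReviewMode `review` string into the
// overall summary and individual bullet findings. The "- " bullet prefix is
// an assumption based on the example payloads in this document.
interface ParsedReview {
  summary: string;
  findings: string[];
}

function parseReview(review: string): ParsedReview {
  const findings: string[] = [];
  const summaryLines: string[] = [];
  for (const line of review.split("\n")) {
    if (line.trimStart().startsWith("- ")) {
      findings.push(line.trimStart().slice(2));
    } else if (line.trim().length > 0) {
      summaryLines.push(line.trim());
    }
  }
  return { summary: summaryLines.join(" "), findings };
}
```

Clients that only need to display the review can of course render the string as-is.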
Example: One-off command execution
Run a standalone command (argv vector) in the server’s sandbox without creating a thread or turn:
{ "method": "command/exec", "id": 32, "params": {
"command": ["ls", "-la"],
"cwd": "/Users/me/project", // optional; defaults to server cwd
"sandboxPolicy": { "type": "workspaceWrite" }, // optional; defaults to user config
"timeoutMs": 10000 // optional; ms timeout; defaults to server timeout
} }
{ "id": 32, "result": { "exitCode": 0, "stdout": "...", "stderr": "" } }
- For clients that are already sandboxed externally, set `sandboxPolicy` to `{"type":"externalSandbox","networkAccess":"enabled"}` (or omit `networkAccess` to keep it restricted). Codex will not enforce its own sandbox in this mode; it tells the model it has full file-system access and passes the `networkAccess` state through `environment_context`.
Notes:
- Empty `command` arrays are rejected.
- `sandboxPolicy` accepts the same shape used by `turn/start` (e.g., `dangerFullAccess`, `readOnly`, `workspaceWrite` with flags, `externalSandbox` with `networkAccess` `restricted|enabled`).
- When omitted, `timeoutMs` falls back to the server default.
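A client-side sketch of assembling a `command/exec` request under these rules; the envelope helper and its validation are assumptions, not part of an official SDK:

```typescript
// Hypothetical builder for a `command/exec` JSON-RPC request. The method
// name and parameter shapes come from this document; the helper itself and
// the client-side empty-command check are illustrative assumptions.
interface ExecParams {
  command: string[];
  cwd?: string;
  sandboxPolicy?: { type: string; [key: string]: unknown };
  timeoutMs?: number;
}

function buildExecRequest(id: number, params: ExecParams): string {
  if (params.command.length === 0) {
    // The server rejects empty command arrays; fail fast locally too.
    throw new Error("command/exec rejects empty command arrays");
  }
  return JSON.stringify({ method: "command/exec", id, params });
}
```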
Events
Event notifications are the server-initiated event stream for thread lifecycles, turn lifecycles, and the items within them. After you start or resume a thread, keep reading stdout for thread/started, thread/archived, thread/unarchived, thread/closed, turn/*, and item/* notifications.
Thread realtime uses a separate thread-scoped notification surface. thread/realtime/* notifications are ephemeral transport events, not ThreadItems, and are not returned by thread/read, thread/resume, or thread/fork.
Notification opt-out
Clients can suppress specific notifications per connection by sending exact method names in initialize.params.capabilities.optOutNotificationMethods.
- Exact-match only: `item/agentMessage/delta` suppresses only that method.
- Unknown method names are ignored.
- Applies to both legacy (`codex/event/*`) and v2 (`thread/*`, `turn/*`, `item/*`, etc.) notifications.
- Does not apply to requests/responses/errors.
Examples:
- Opt out of legacy session setup event: `codex/event/session_configured`
- Opt out of streamed agent text deltas: `item/agentMessage/delta`
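Putting the opt-out together, a hedged sketch of building the `initialize` payload (the helper function is invented; the field names follow this document):

```typescript
// Hypothetical sketch of an `initialize` request that opts out of specific
// notification methods via optOutNotificationMethods.
function buildInitialize(id: number, optOut: string[]) {
  return {
    method: "initialize",
    id,
    params: {
      clientInfo: { name: "my_client", title: "My Client", version: "0.1.0" },
      capabilities: { optOutNotificationMethods: optOut },
    },
  };
}
```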
Fuzzy file search events (experimental)
The fuzzy file search session API emits per-query notifications:
- `fuzzyFileSearch/sessionUpdated` — `{ sessionId, query, files }` with the current matching files for the active query.
- `fuzzyFileSearch/sessionCompleted` — `{ sessionId, query }` once indexing/matching for that query has completed.
Thread realtime events (experimental)
The thread realtime API emits thread-scoped notifications for session lifecycle and streaming media:
- `thread/realtime/started` — `{ threadId, sessionId }` once realtime starts for the thread (experimental).
- `thread/realtime/itemAdded` — `{ threadId, item }` for non-audio realtime items (experimental). `item` is forwarded as raw JSON while the upstream websocket item schema remains unstable.
- `thread/realtime/outputAudio/delta` — `{ threadId, audio }` for streamed output audio chunks (experimental). `audio` uses camelCase fields (`data`, `sampleRate`, `numChannels`, `samplesPerChannel`).
- `thread/realtime/error` — `{ threadId, message }` when realtime encounters a transport or backend error (experimental).
- `thread/realtime/closed` — `{ threadId, reason }` when the realtime transport closes (experimental).

Because audio is intentionally separate from `ThreadItem`, clients can opt out of `thread/realtime/outputAudio/delta` independently with `optOutNotificationMethods`.
Windows sandbox setup events
- `windowsSandbox/setupCompleted` — `{ mode, success, error }` after a `windowsSandbox/setupStart` request finishes.
Turn events
The app-server streams JSON-RPC notifications while a turn is running. Each turn emits turn/started when it begins running and ends with turn/completed (final turn status). Token usage events stream separately via thread/tokenUsage/updated. Clients subscribe to the events they care about, rendering each item incrementally as updates arrive. The per-item lifecycle is always: item/started → zero or more item-specific deltas → item/completed.
- `turn/started` — `{ turn }` with the turn id, empty `items`, and `status: "inProgress"`.
- `turn/completed` — `{ turn }` where `turn.status` is `completed`, `interrupted`, or `failed`; failures carry `{ error: { message, codexErrorInfo?, additionalDetails? } }`.
- `turn/diff/updated` — `{ threadId, turnId, diff }` represents the up-to-date snapshot of the turn-level unified diff, emitted after every FileChange item. `diff` is the latest aggregated unified diff across every file change in the turn. UIs can render this to show the full "what changed" view without stitching individual `fileChange` items.
- `turn/plan/updated` — `{ turnId, explanation?, plan }` whenever the agent shares or changes its plan; each `plan` entry is `{ step, status }` with `status` in `pending`, `inProgress`, or `completed`.
- `model/rerouted` — `{ threadId, turnId, fromModel, toModel, reason }` when the backend reroutes a request to a different model (for example, due to high-risk cyber safety checks).

Today both notifications carry an empty `items` array even when item events were streamed; rely on `item/*` notifications for the canonical item list until this is fixed.
Items
ThreadItem is the tagged union carried in turn responses and item/* notifications. Currently we support events for the following items:
- `userMessage` — `{id, content}` where `content` is a list of user inputs (`text`, `image`, or `localImage`).
- `agentMessage` — `{id, text}` containing the accumulated agent reply.
- `plan` — `{id, text}` emitted for plan-mode turns; plan text can stream via `item/plan/delta` (experimental).
- `reasoning` — `{id, summary, content}` where `summary` holds streamed reasoning summaries (applicable for most OpenAI models) and `content` holds raw reasoning blocks (applicable for e.g. open source models).
- `commandExecution` — `{id, command, cwd, status, commandActions, aggregatedOutput?, exitCode?, durationMs?}` for sandboxed commands; `status` is `inProgress`, `completed`, `failed`, or `declined`.
- `fileChange` — `{id, changes, status}` describing proposed edits; `changes` list `{path, kind, diff}` and `status` is `inProgress`, `completed`, `failed`, or `declined`.
- `mcpToolCall` — `{id, server, tool, status, arguments, result?, error?}` describing MCP calls; `status` is `inProgress`, `completed`, or `failed`.
- `collabToolCall` — `{id, tool, status, senderThreadId, receiverThreadId?, newThreadId?, prompt?, agentStatus?}` describing collab tool calls (`spawn_agent`, `send_input`, `resume_agent`, `wait`, `close_agent`); `status` is `inProgress`, `completed`, or `failed`.
- `webSearch` — `{id, query, action?}` for a web search request issued by the agent; `action` mirrors the Responses API web_search action payload (`search`, `open_page`, `find_in_page`) and may be omitted until completion.
- `imageView` — `{id, path}` emitted when the agent invokes the image viewer tool.
- `enteredReviewMode` — `{id, review}` sent when the reviewer starts; `review` is a short user-facing label such as `"current changes"` or the requested target description.
- `exitedReviewMode` — `{id, review}` emitted when the reviewer finishes; `review` is the full plain-text review (usually overall notes plus bullet-point findings).
- `contextCompaction` — `{id}` emitted when codex compacts the conversation history. This can happen automatically.
- `compacted` — `{threadId, turnId}` when codex compacts the conversation history. This can happen automatically. Deprecated: use `contextCompaction` instead.
All items emit two shared lifecycle events:
- `item/started` — emits the full `item` when a new unit of work begins so the UI can render it immediately; the `item.id` in this payload matches the `itemId` used by deltas.
- `item/completed` — sends the final `item` once that work finishes (e.g., after a tool call or message completes); treat this as the authoritative state.
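The started/completed lifecycle can be mirrored with a small client-side store. This is a sketch with simplified item types, not an official client API:

```typescript
// Hypothetical item store illustrating the shared lifecycle:
// item/started registers an in-progress item, and item/completed overwrites
// it with the authoritative final state.
interface Item {
  id: string;
  type: string;
  [key: string]: unknown;
}

class ItemStore {
  private items = new Map<string, Item>();

  onStarted(item: Item): void {
    this.items.set(item.id, item);
  }

  onCompleted(item: Item): void {
    // item/completed is authoritative: replace whatever streamed so far.
    this.items.set(item.id, item);
  }

  get(id: string): Item | undefined {
    return this.items.get(id);
  }
}
```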
There are additional item-specific events:
agentMessage
- `item/agentMessage/delta` — appends streamed text for the agent message; concatenate `delta` values for the same `itemId` in order to reconstruct the full reply.
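A minimal sketch of the delta-concatenation rule above (the field names follow this document; the buffer helper is invented):

```typescript
// Accumulate streamed agentMessage deltas per itemId so the full reply can
// be reconstructed in order.
const buffers = new Map<string, string>();

function onAgentMessageDelta(itemId: string, delta: string): void {
  buffers.set(itemId, (buffers.get(itemId) ?? "") + delta);
}

function fullReply(itemId: string): string {
  return buffers.get(itemId) ?? "";
}
```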
plan
- `item/plan/delta` — streams proposed plan content for plan items (experimental); concatenate `delta` values for the same plan `itemId`. These deltas correspond to the `<proposed_plan>` block.
reasoning
- `item/reasoning/summaryTextDelta` — streams readable reasoning summaries; `summaryIndex` increments when a new summary section opens.
- `item/reasoning/summaryPartAdded` — marks the boundary between reasoning summary sections for an `itemId`; subsequent `summaryTextDelta` entries share the same `summaryIndex`.
- `item/reasoning/textDelta` — streams raw reasoning text (only applicable for e.g. open source models); use `contentIndex` to group deltas that belong together before showing them in the UI.
commandExecution
- `item/commandExecution/outputDelta` — streams stdout/stderr for the command; append deltas in order to render live output alongside `aggregatedOutput` in the final item. Final `commandExecution` items include parsed `commandActions`, `status`, `exitCode`, and `durationMs` so the UI can summarize what ran and whether it succeeded.
fileChange
- `item/fileChange/outputDelta` — contains the tool call response of the underlying `apply_patch` tool call.
Errors
An `error` event is emitted whenever the server hits an error mid-turn (for example, upstream model errors or quota limits). It carries the same `{ error: { message, codexErrorInfo?, additionalDetails? } }` payload as `turn.status: "failed"` and may precede that terminal notification.
codexErrorInfo maps to the CodexErrorInfo enum. Common values:
- `ContextWindowExceeded`
- `UsageLimitExceeded`
- `HttpConnectionFailed { httpStatusCode? }`: upstream HTTP failures including 4xx/5xx
- `ResponseStreamConnectionFailed { httpStatusCode? }`: failure to connect to the response SSE stream
- `ResponseStreamDisconnected { httpStatusCode? }`: disconnect of the response SSE stream in the middle of a turn before completion
- `ResponseTooManyFailedAttempts { httpStatusCode? }`
- `BadRequest`
- `Unauthorized`
- `SandboxError`
- `InternalServerError`
- `Other`: all unclassified errors
When an upstream HTTP status is available (for example, from the Responses API or a provider), it is forwarded in `httpStatusCode` on the relevant `codexErrorInfo` variant.
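A client might map these variants to user-facing labels. This sketch assumes a tagged `{ type, httpStatusCode? }` wire shape, which is an assumption about the serialization, and the label wording is invented:

```typescript
// Hypothetical mapping from codexErrorInfo variants to short UI labels.
// Variant names follow the CodexErrorInfo list above; the tagged-union
// representation and the wording are illustrative assumptions.
type CodexErrorInfo =
  | { type: "ContextWindowExceeded" }
  | { type: "UsageLimitExceeded" }
  | { type: "HttpConnectionFailed"; httpStatusCode?: number }
  | { type: "Other" };

function errorLabel(info: CodexErrorInfo): string {
  switch (info.type) {
    case "ContextWindowExceeded":
      return "Context window exceeded";
    case "UsageLimitExceeded":
      return "Usage limit exceeded";
    case "HttpConnectionFailed":
      return info.httpStatusCode !== undefined
        ? `HTTP connection failed (${info.httpStatusCode})`
        : "HTTP connection failed";
    default:
      return "Unexpected error";
  }
}
```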
Approvals
Certain actions (shell commands or modifying files) may require explicit user approval depending on the user's config. When turn/start is used, the app-server drives an approval flow by sending a server-initiated JSON-RPC request to the client. The client must respond to tell Codex whether to proceed. UIs should present these requests inline with the active turn so users can review the proposed command or diff before choosing.
- Requests include `threadId` and `turnId` — use them to scope UI state to the active conversation.
- Respond with a single `{ "decision": ... }` payload. Command approvals support `accept`, `acceptForSession`, `acceptWithExecpolicyAmendment`, `applyNetworkPolicyAmendment`, `decline`, or `cancel`. The server resumes or declines the work and ends the item with `item/completed`.
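A hedged sketch of forming the JSON-RPC response to an approval request, assuming the `{ "decision": ... }` payload is carried in the response `result` (decision strings and amendment shapes follow the examples in this document; the helper is invented):

```typescript
// Hypothetical approval-response builder. The decision variants mirror the
// payloads documented for command approvals.
type Decision =
  | "accept"
  | "acceptForSession"
  | "decline"
  | "cancel"
  | { acceptWithExecpolicyAmendment: { execpolicy_amendment: unknown[] } }
  | {
      applyNetworkPolicyAmendment: {
        network_policy_amendment: { host: string; action: string };
      };
    };

function approvalResponse(requestId: number, decision: Decision) {
  return { id: requestId, result: { decision } };
}
```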
Command execution approvals
Order of messages:
- `item/started` — shows the pending `commandExecution` item with `command`, `cwd`, and other fields so you can render the proposed action.
- `item/commandExecution/requestApproval` (request) — carries the same `itemId`, `threadId`, `turnId`, optionally `approvalId` (for subcommand callbacks), and `reason`. For normal command approvals, it also includes `command`, `cwd`, and `commandActions` for friendly display. When `initialize.params.capabilities.experimentalApi = true`, it may also include experimental `additionalPermissions` describing requested per-command sandbox access; any filesystem paths in that payload are absolute on the wire. For network-only approvals, those command fields may be omitted and `networkApprovalContext` is provided instead. Optional persistence hints may also be included via `proposedExecpolicyAmendment` and `proposedNetworkPolicyAmendments`. Clients can prefer `availableDecisions` when present to render the exact set of choices the server wants to expose, while still falling back to the older heuristics if it is omitted.
- Client response — for example `{ "decision": "accept" }`, `{ "decision": "acceptForSession" }`, `{ "decision": { "acceptWithExecpolicyAmendment": { "execpolicy_amendment": [...] } } }`, `{ "decision": { "applyNetworkPolicyAmendment": { "network_policy_amendment": { "host": "example.com", "action": "allow" } } } }`, `{ "decision": "decline" }`, or `{ "decision": "cancel" }`.
- `serverRequest/resolved` — `{ threadId, requestId }` confirms the pending request has been resolved or cleared, including lifecycle cleanup on turn start/complete/interrupt.
- `item/completed` — final `commandExecution` item with `status: "completed" | "failed" | "declined"` and execution output. Render this as the authoritative result.
File change approvals
Order of messages:
- `item/started` — emits a `fileChange` item with `changes` (diff chunk summaries) and `status: "inProgress"`. Show the proposed edits and paths to the user.
- `item/fileChange/requestApproval` (request) — includes `itemId`, `threadId`, `turnId`, and an optional `reason`.
- Client response — `{ "decision": "accept" }` or `{ "decision": "decline" }`.
- `serverRequest/resolved` — `{ threadId, requestId }` confirms the pending request has been resolved or cleared, including lifecycle cleanup on turn start/complete/interrupt.
- `item/completed` — returns the same `fileChange` item with `status` updated to `completed`, `failed`, or `declined` after the patch attempt. Rely on this to show success/failure and finalize the diff state in your UI.
UI guidance for IDEs: surface an approval dialog as soon as the request arrives. The turn will proceed after the server receives a response to the approval request. The terminal item/completed notification will be sent with the appropriate status.
request_user_input
When the client responds to item/tool/requestUserInput, the server emits serverRequest/resolved with { threadId, requestId }. If the pending request is cleared by turn start, turn completion, or turn interruption before the client answers, the server emits the same notification for that cleanup.
Dynamic tool calls (experimental)
dynamicTools on thread/start and the corresponding item/tool/call request/response flow are experimental APIs. To enable them, set initialize.params.capabilities.experimentalApi = true.
When a dynamic tool is invoked during a turn, the server sends an item/tool/call JSON-RPC request to the client:
```json
{
  "method": "item/tool/call",
  "id": 60,
  "params": {
    "threadId": "thr_123",
    "turnId": "turn_123",
    "callId": "call_123",
    "tool": "lookup_ticket",
    "arguments": { "id": "ABC-123" }
  }
}
```
The server also emits item lifecycle notifications around the request:
- `item/started` with `item.type = "dynamicToolCall"`, `status = "inProgress"`, plus `tool` and `arguments`.
- `item/tool/call` request.
- Client response.
- `item/completed` with `item.type = "dynamicToolCall"`, final `status`, and the returned `contentItems` / `success`.
The client must respond with content items. Use `inputText` for text and `inputImage` for image URLs/data URLs:
```json
{
  "id": 60,
  "result": {
    "contentItems": [
      { "type": "inputText", "text": "Ticket ABC-123 is open." },
      { "type": "inputImage", "imageUrl": "data:image/png;base64,AAA" }
    ],
    "success": true
  }
}
```
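A sketch of a client-side handler for `item/tool/call`; the `lookup_ticket` dispatch and its reply text are invented for illustration:

```typescript
// Hypothetical dynamic-tool handler: look up the requested tool and build
// the contentItems/success result the server expects back.
type ContentItem =
  | { type: "inputText"; text: string }
  | { type: "inputImage"; imageUrl: string };

function handleToolCall(tool: string, args: Record<string, unknown>) {
  let contentItems: ContentItem[];
  let success = true;
  if (tool === "lookup_ticket") {
    // Invented example tool; a real client would dispatch to its own logic.
    contentItems = [{ type: "inputText", text: `Ticket ${String(args.id)} is open.` }];
  } else {
    contentItems = [{ type: "inputText", text: `Unknown tool: ${tool}` }];
    success = false;
  }
  return { contentItems, success };
}
```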
Skills
Invoke a skill by including $<skill-name> in the text input. Add a skill input item (recommended) so the backend injects full skill instructions instead of relying on the model to resolve the name.
```json
{
  "method": "turn/start",
  "id": 101,
  "params": {
    "threadId": "thread-1",
    "input": [
      {
        "type": "text",
        "text": "$skill-creator Add a new skill for triaging flaky CI."
      },
      {
        "type": "skill",
        "name": "skill-creator",
        "path": "/Users/me/.codex/skills/skill-creator/SKILL.md"
      }
    ]
  }
}
```
If you omit the skill item, the model will still parse the $<skill-name> marker and try to locate the skill, which can add latency.
Example:
$skill-creator Add a new skill for triaging flaky CI and include step-by-step usage.
Use skills/list to fetch the available skills (optionally scoped by cwds, with forceReload).
You can also add perCwdExtraUserRoots to scan additional absolute paths as user scope for specific cwd entries.
Entries whose cwd is not present in cwds are ignored.
skills/list might reuse a cached skills result per cwd; setting forceReload to true refreshes the result from disk.
The server also emits skills/changed notifications when watched local skill files change. Treat this as an invalidation signal and re-run skills/list with your current params when needed.
{ "method": "skills/list", "id": 25, "params": {
"cwds": ["/Users/me/project", "/Users/me/other-project"],
"forceReload": true,
"perCwdExtraUserRoots": [
{
"cwd": "/Users/me/project",
"extraUserRoots": ["/Users/me/shared-skills"]
}
]
} }
{ "id": 25, "result": {
"data": [{
"cwd": "/Users/me/project",
"skills": [
{
"name": "skill-creator",
"description": "Create or update a Codex skill",
"enabled": true,
"interface": {
"displayName": "Skill Creator",
"shortDescription": "Create or update a Codex skill",
"iconSmall": "icon.svg",
"iconLarge": "icon-large.svg",
"brandColor": "#111111",
"defaultPrompt": "Add a new skill for triaging flaky CI."
}
}
],
"errors": []
}]
} }
```json
{
  "method": "skills/changed",
  "params": {}
}
```
To enable or disable a skill by path:
```json
{
  "method": "skills/config/write",
  "id": 26,
  "params": {
    "path": "/Users/me/.codex/skills/skill-creator/SKILL.md",
    "enabled": false
  }
}
```
Apps
Use app/list to fetch available apps (connectors). Each entry includes metadata like the app id, display name, installUrl, branding, appMetadata, labels, whether it is currently accessible, and whether it is enabled in config.
{ "method": "app/list", "id": 50, "params": {
"cursor": null,
"limit": 50,
"threadId": "thr_123",
"forceRefetch": false
} }
{ "id": 50, "result": {
"data": [
{
"id": "demo-app",
"name": "Demo App",
"description": "Example connector for documentation.",
"logoUrl": "https://example.com/demo-app.png",
"logoUrlDark": null,
"distributionChannel": null,
"branding": null,
"appMetadata": null,
"labels": null,
"installUrl": "https://chatgpt.com/apps/demo-app/demo-app",
"isAccessible": true,
"isEnabled": true
}
],
"nextCursor": null
} }
When threadId is provided, app feature gating (Feature::Apps) is evaluated using that thread's config snapshot. When omitted, the latest global config is used.
app/list returns after both accessible apps and directory apps are loaded. Set forceRefetch: true to bypass app caches and fetch fresh data from sources. Cache entries are only replaced when those refetches succeed.
The server also emits app/list/updated notifications whenever either source (accessible apps or directory apps) finishes loading. Each notification includes the latest merged app list.
```json
{
  "method": "app/list/updated",
  "params": {
    "data": [
      {
        "id": "demo-app",
        "name": "Demo App",
        "description": "Example connector for documentation.",
        "logoUrl": "https://example.com/demo-app.png",
        "logoUrlDark": null,
        "distributionChannel": null,
        "branding": null,
        "appMetadata": null,
        "labels": null,
        "installUrl": "https://chatgpt.com/apps/demo-app/demo-app",
        "isAccessible": true,
        "isEnabled": true
      }
    ]
  }
}
```
Invoke an app by inserting $<app-slug> in the text input. The slug is derived from the app name and lowercased with non-alphanumeric characters replaced by - (for example, "Demo App" becomes $demo-app). Add a mention input item (recommended) so the server uses the exact app://<connector-id> path rather than guessing by name.
Example:
$demo-app Pull the latest updates from the team.
```json
{
  "method": "turn/start",
  "id": 51,
  "params": {
    "threadId": "thread-1",
    "input": [
      {
        "type": "text",
        "text": "$demo-app Pull the latest updates from the team."
      },
      { "type": "mention", "name": "Demo App", "path": "app://demo-app" }
    ]
  }
}
```
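The slug derivation described above can be sketched as follows; the exact handling of leading/trailing separators is an assumption:

```typescript
// Hypothetical sketch of app slug derivation: lowercase the app name and
// replace runs of non-alphanumeric characters with "-", so "Demo App"
// becomes "demo-app". Trimming edge dashes is an illustrative assumption.
function appSlug(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}
```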
Auth endpoints
The JSON-RPC auth/account surface exposes request/response methods plus server-initiated notifications (no id). Use these to determine auth state, start or cancel logins, logout, and inspect ChatGPT rate limits.
Authentication modes
Codex supports these authentication modes. The current mode is surfaced in account/updated (authMode), which also includes the current ChatGPT planType when available, and can be inferred from account/read.
- API key (`apiKey`): Caller supplies an OpenAI API key via `account/login/start` with `type: "apiKey"`. The API key is saved and used for API requests.
- ChatGPT managed (`chatgpt`) (recommended): Codex owns the ChatGPT OAuth flow and refresh tokens. Start via `account/login/start` with `type: "chatgpt"`; Codex persists tokens to disk and refreshes them automatically.
API Overview
- `account/read` — fetch current account info; optionally refresh tokens.
- `account/login/start` — begin login (`apiKey`, `chatgpt`).
- `account/login/completed` (notify) — emitted when a login attempt finishes (success or error).
- `account/login/cancel` — cancel a pending ChatGPT login by `loginId`.
- `account/logout` — sign out; triggers `account/updated`.
- `account/updated` (notify) — emitted whenever auth mode changes (`authMode`: `apikey`, `chatgpt`, or `null`) and includes the current ChatGPT `planType` when available.
- `account/rateLimits/read` — fetch ChatGPT rate limits; updates arrive via `account/rateLimits/updated` (notify).
- `account/rateLimits/updated` (notify) — emitted whenever a user's ChatGPT rate limits change.
- `mcpServer/oauthLogin/completed` (notify) — emitted after a `mcpServer/oauth/login` flow finishes for a server; payload includes `{ name, success, error? }`.
1) Check auth state
Request:
{ "method": "account/read", "id": 1, "params": { "refreshToken": false } }
Response examples:
{ "id": 1, "result": { "account": null, "requiresOpenaiAuth": false } } // No OpenAI auth needed (e.g., OSS/local models)
{ "id": 1, "result": { "account": null, "requiresOpenaiAuth": true } } // OpenAI auth required (typical for OpenAI-hosted models)
{ "id": 1, "result": { "account": { "type": "apiKey" }, "requiresOpenaiAuth": true } }
{ "id": 1, "result": { "account": { "type": "chatgpt", "email": "user@example.com", "planType": "pro" }, "requiresOpenaiAuth": true } }
Field notes:
- `refreshToken` (bool): set `true` to force a token refresh.
- `requiresOpenaiAuth` reflects the active provider; when `false`, Codex can run without OpenAI credentials.
2) Log in with an API key
- Send: `{ "method": "account/login/start", "id": 2, "params": { "type": "apiKey", "apiKey": "sk-…" } }`
- Expect: `{ "id": 2, "result": { "type": "apiKey" } }`
- Notifications: `{ "method": "account/login/completed", "params": { "loginId": null, "success": true, "error": null } }` and `{ "method": "account/updated", "params": { "authMode": "apikey", "planType": null } }`
3) Log in with ChatGPT (browser flow)
- Start: `{ "method": "account/login/start", "id": 3, "params": { "type": "chatgpt" } }`; expect `{ "id": 3, "result": { "type": "chatgpt", "loginId": "<uuid>", "authUrl": "https://chatgpt.com/…&redirect_uri=http%3A%2F%2Flocalhost%3A<port>%2Fauth%2Fcallback" } }`
- Open `authUrl` in a browser; the app-server hosts the local callback.
- Wait for notifications: `{ "method": "account/login/completed", "params": { "loginId": "<uuid>", "success": true, "error": null } }` and `{ "method": "account/updated", "params": { "authMode": "chatgpt", "planType": "plus" } }`
4) Cancel a ChatGPT login
{ "method": "account/login/cancel", "id": 4, "params": { "loginId": "<uuid>" } }
{ "method": "account/login/completed", "params": { "loginId": "<uuid>", "success": false, "error": "…" } }
5) Logout
{ "method": "account/logout", "id": 5 }
{ "id": 5, "result": {} }
{ "method": "account/updated", "params": { "authMode": null, "planType": null } }
6) Rate limits (ChatGPT)
{ "method": "account/rateLimits/read", "id": 6 }
{ "id": 6, "result": { "rateLimits": { "primary": { "usedPercent": 25, "windowDurationMins": 15, "resetsAt": 1730947200 }, "secondary": null } } }
{ "method": "account/rateLimits/updated", "params": { "rateLimits": { … } } }
Field notes:
- `usedPercent` is current usage within the OpenAI quota window.
- `windowDurationMins` is the quota window length.
- `resetsAt` is a Unix timestamp (seconds) for the next reset.
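A small sketch interpreting these fields (the formatting helper is invented; the field semantics follow the notes above):

```typescript
// Hypothetical helper interpreting account/rateLimits fields: `resetsAt` is
// Unix seconds and `usedPercent` is usage within the current quota window.
interface RateLimitWindow {
  usedPercent: number;
  windowDurationMins: number;
  resetsAt: number;
}

function describeWindow(w: RateLimitWindow): string {
  // Convert Unix seconds to milliseconds for the Date constructor.
  const resetIso = new Date(w.resetsAt * 1000).toISOString();
  return `${w.usedPercent}% used of a ${w.windowDurationMins}-minute window; resets at ${resetIso}`;
}
```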
Experimental API Opt-in
Some app-server methods and fields are intentionally gated behind an experimental capability with no backwards-compatible guarantees. This lets clients choose between:
- Stable surface only (default): no opt-in, no experimental methods/fields exposed.
- Experimental surface: opt in during `initialize`.
Generating stable vs experimental client schemas
`codex app-server` schema generation defaults to the stable API surface (experimental fields and methods filtered out). Pass `--experimental` to include experimental methods/fields in generated TypeScript or JSON schema:
```shell
# Stable-only output (default)
codex app-server generate-ts --out DIR
codex app-server generate-json-schema --out DIR

# Include experimental API surface
codex app-server generate-ts --out DIR --experimental
codex app-server generate-json-schema --out DIR --experimental
```
How clients opt in at runtime
Set `capabilities.experimentalApi` to `true` in your single `initialize` request:
```json
{
  "method": "initialize",
  "id": 1,
  "params": {
    "clientInfo": {
      "name": "my_client",
      "title": "My Client",
      "version": "0.1.0"
    },
    "capabilities": {
      "experimentalApi": true
    }
  }
}
```
Then send the standard initialized notification and proceed normally.
Notes:
- If `capabilities` is omitted, `experimentalApi` is treated as `false`.
- This setting is negotiated once at initialization time for the process lifetime (re-initializing is rejected with `"Already initialized"`).
What happens without opt-in
If a request uses an experimental method or sets an experimental field without opting in, app-server rejects it with a JSON-RPC error. The message is:
<descriptor> requires experimentalApi capability
Examples of descriptor strings:
- `mock/experimentalMethod` (method-level gate)
- `thread/start.mockExperimentalField` (field-level gate)
For maintainers: Adding experimental fields and methods
Use this checklist when introducing a field/method that should only be available when the client opts into experimental APIs.
At runtime, clients must send initialize with capabilities.experimentalApi = true to use experimental methods or fields.
- Annotate the field in the protocol type (usually `app-server-protocol/src/protocol/v2.rs`) with:

  ```rust
  #[experimental("thread/start.myField")]
  pub my_field: Option<String>,
  ```

- Ensure the params type derives `ExperimentalApi` so field-level gating can be detected at runtime.

- In `app-server-protocol/src/protocol/common.rs`, keep the method stable and use `inspect_params: true` when only some fields are experimental (like `thread/start`). If the entire method is experimental, annotate the method variant with `#[experimental("method/name")]`.

  For server-initiated request payloads, annotate the field the same way so schema generation treats it as experimental, and make sure app-server omits that field when the client did not opt into `experimentalApi`.

- Regenerate protocol fixtures:

  ```shell
  just write-app-server-schema
  # Include experimental API fields/methods in fixtures.
  just write-app-server-schema --experimental
  ```

- Verify the protocol crate:

  ```shell
  cargo test -p codex-app-server-protocol
  ```