- Local-shell tool responses were always tagged as
`ExecCommandSource::UserShell` because the handler called
`run_exec_like` with `is_user_shell_cmd` set to true.
- Treat `ToolPayload::LocalShell` the same as other model-generated
shell tool calls by deleting `is_user_shell_cmd` from `run_exec_like`
(actual user shell commands follow a separate code path).
## Summary
Enables `shell_command` for Windows users and starts adding some basic
command parsing, at minimum to strip PowerShell prefixes. We'll follow
this up with more thorough command parsing, but I wanted to land this
change separately with some basic UX.
**NOTE**: This implementation parses Bash and PowerShell on both
platforms. In theory both can appear on either platform, since you can
use Git Bash on Windows or PowerShell on Linux. In practice, this may
not be worth the complexity of supporting, so I don't feel strongly
about the current approach vs. platform-specific branching.
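To make the intended UX concrete, here's a minimal sketch of the kind of prefix stripping described above; the launcher list and flag handling are illustrative, not the parser that actually landed in this PR:
```rust
/// Hypothetical sketch of the prefix stripping described above; not the
/// actual parser landed in this PR.
fn strip_powershell_prefix(command: &str) -> &str {
    let mut rest = command.trim_start();
    for launcher in ["pwsh.exe", "pwsh", "powershell.exe", "powershell"] {
        if let Some(stripped) = rest.strip_prefix(launcher) {
            rest = stripped.trim_start();
            // Skip common flags until we reach the -Command payload.
            for flag in ["-NoLogo", "-NoProfile", "-Command"] {
                if let Some(s) = rest.strip_prefix(flag) {
                    rest = s.trim_start();
                }
            }
            return rest;
        }
    }
    command
}

fn main() {
    // The inner command is what we want to surface in the UX.
    assert_eq!(
        strip_powershell_prefix("pwsh.exe -NoLogo -Command ls -Name"),
        "ls -Name"
    );
}
```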
## Testing
- [x] Added a bunch of tests
- [x] Ran on both Windows and macOS
## Summary
- update documentation, example configs, and automation defaults to
reference gpt-5.1 / gpt-5.1-codex
- bump the CLI and core configuration defaults, model presets, and error
messaging to the new models while keeping the model-family/tool coverage
for legacy slugs
- refresh tests, fixtures, and TUI snapshots so they expect the upgraded
defaults
## Testing
- `cargo test -p codex-core
config::tests::test_precedence_fixture_with_gpt5_profile`
------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6916c5b3c2b08321ace04ee38604fc6b)
This PR adds the API V2 version of the command-execution approval flow
for the shell tool.
This PR wires the new RPC (`item/commandExecution/requestApproval`, V2
only) and related events (`item/started`, `item/completed`, and
`item/commandExecution/delta`, which are emitted in both V1 and V2)
through the app-server
protocol. The new approval RPC is only sent when the user initiates a
turn with the new `turn/start` API, so we don't break backwards
compatibility with VSCE.
The approach I took was to make as few changes to the Codex core as
possible, leveraging existing `EventMsg` core events, and translating
those in app-server. I did have to add additional fields to
`EventMsg::ExecCommandEndEvent` to capture the command's input so that
app-server can statelessly transform these events to a
`ThreadItem::CommandExecution` item for the `item/completed` event.
Once we stabilize the API and it's complete enough for our partners, we
can work on migrating the core to be aware of command execution items as
a first-class concept.
**Note**: We'll need follow-up work to make sure these APIs work for the
unified exec tool, but we'll wait until that's stable and landed before
doing a pass on app-server.
Example payloads below:
```json
{
  "method": "item/started",
  "params": {
    "item": {
      "aggregatedOutput": null,
      "command": "/bin/zsh -lc 'touch /tmp/should-trigger-approval'",
      "cwd": "/Users/owen/repos/codex/codex-rs",
      "durationMs": null,
      "exitCode": null,
      "id": "call_lNWWsbXl1e47qNaYjFRs0dyU",
      "parsedCmd": [
        {
          "cmd": "touch /tmp/should-trigger-approval",
          "type": "unknown"
        }
      ],
      "status": "inProgress",
      "type": "commandExecution"
    }
  }
}
```
```json
{
  "id": 0,
  "method": "item/commandExecution/requestApproval",
  "params": {
    "itemId": "call_lNWWsbXl1e47qNaYjFRs0dyU",
    "parsedCmd": [
      {
        "cmd": "touch /tmp/should-trigger-approval",
        "type": "unknown"
      }
    ],
    "reason": "Need to create file in /tmp which is outside workspace sandbox",
    "risk": null,
    "threadId": "019a93e8-0a52-7fe3-9808-b6bc40c0989a",
    "turnId": "1"
  }
}
```
```json
{
  "id": 0,
  "result": {
    "acceptSettings": {
      "forSession": false
    },
    "decision": "accept"
  }
}
```
```json
{
  "params": {
    "item": {
      "aggregatedOutput": null,
      "command": "/bin/zsh -lc 'touch /tmp/should-trigger-approval'",
      "cwd": "/Users/owen/repos/codex/codex-rs",
      "durationMs": 224,
      "exitCode": 0,
      "id": "call_lNWWsbXl1e47qNaYjFRs0dyU",
      "parsedCmd": [
        {
          "cmd": "touch /tmp/should-trigger-approval",
          "type": "unknown"
        }
      ],
      "status": "completed",
      "type": "commandExecution"
    }
  }
}
```
The `cap_sid` file contains the IDs of the two custom SIDs that the
Windows sandbox creates/manages to implement read-only and
workspace-write sandbox policies.
It previously lived in `<cwd>/.codex`, which meant the sandbox could
write to it and potentially degrade its own efficacy. This change moves
the file to `~/.codex/` (or wherever `CODEX_HOME` points) so that it
sits outside the workspace.
`--disable shell_tool` disables the built-in shell tool. This is useful
for MCP-only operation.
---------
Co-authored-by: Michael Bolin <mbolin@openai.com>
## Overview
Adds LM Studio OSS support. Closes #1883
### Changes
This PR enhances the behavior of the `--oss` flag to support LM Studio
as a provider. Additionally, it introduces a new flag, `--local-provider`,
which can take `lmstudio` or `ollama` as values if the user wants to
explicitly choose which one to use.
If no provider is specified, `codex --oss` will auto-select the provider
based on whichever server is running.
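For illustration, a minimal sketch of the auto-selection idea, assuming the providers' default local ports (Ollama on 11434, LM Studio on 1234); the real detection logic may differ:
```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

/// Sketch (not the actual implementation): probe each provider's default
/// local port and pick whichever server is accepting connections.
fn detect_oss_provider() -> Option<&'static str> {
    let candidates = [("ollama", 11434u16), ("lmstudio", 1234u16)];
    for (name, port) in candidates {
        let addr: SocketAddr = ([127, 0, 0, 1], port).into();
        if TcpStream::connect_timeout(&addr, Duration::from_millis(200)).is_ok() {
            return Some(name);
        }
    }
    None
}
```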
#### Additional enhancements
The default can be set via `oss_provider` in the config:
```toml
oss_provider = "lmstudio"
```
Non-interactive users will need to either pass the provider as an arg or
set it in their `config.toml`.
### Notes
For best performance, [set the default context
length](https://lmstudio.ai/docs/app/advanced/per-model) for gpt-oss to
the maximum your machine can support
---------
Co-authored-by: Matt Clayton <matt@lmstudio.ai>
Co-authored-by: Eric Traut <etraut@openai.com>
## Summary
Fixes a streaming issue where Claude models return only 1-4 characters
instead of full responses when used through certain API
providers/proxies.
## Environment
- **OS**: Windows
- **Models affected**: Claude models (e.g., claude-haiku-4-5-20251001)
- **API Provider**: AAAI API proxy (https://api.aaai.vip/v1)
- **Working models**: GLM, Google models work correctly
## Problem
When using Claude models in both TUI and exec modes, only 1-4 characters
are displayed despite the backend receiving the full response. Debug
logs revealed that some API providers send SSE chunks with an
empty-string `finish_reason` during active streaming, rather than `null`
or omitting the field entirely.
The current code treats any non-null `finish_reason` as a termination
signal, so these empty-string chunks cause the stream to exit
prematurely after the first chunk.
## Solution
Fix empty `finish_reason` handling in `chat_completions.rs` by adding a
check that only processes non-empty `finish_reason` values. This ensures
empty strings are ignored and streaming continues normally.
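A minimal sketch of the guard, with illustrative names (the actual change lives in `chat_completions.rs`):
```rust
/// Treat an empty-string finish_reason like a missing one, so the stream
/// only terminates on a real value such as "stop" or "length".
fn is_terminal_finish_reason(finish_reason: Option<&str>) -> bool {
    matches!(finish_reason, Some(reason) if !reason.is_empty())
}
```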
## Testing
- Tested on Windows with Claude Haiku model via AAAI API proxy
- Full responses now received and displayed correctly in both TUI and
exec modes
- Other models (GLM, Google) continue to work as expected
- No regression in existing functionality
## Impact
- Improves compatibility with API providers that send empty
finish_reason during streaming
- Enables Claude models to work correctly in Windows environment
- No breaking changes to existing functionality
## Related Issues
This fix resolves the issue where Claude models appeared to return
incomplete responses. The root cause was identified as a compatibility
issue in parsing SSE responses from certain API providers/proxies,
rather than a model-specific problem. This change improves overall
robustness when working with various API endpoints.
---------
Co-authored-by: Eric Traut <etraut@openai.com>
This PR does the following:
- Add a compact prefix to the summary
- Change the compaction prompt
- Allow multiple compactions for long-running tasks
- Filter out summary messages on subsequent compactions (see the sketch
after the considerations below)
Considerations:
- Filtering out the summary message isn't the cleanest approach
- Theoretically, we could end up in an infinite compaction loop if the
user messages exceed the compaction limit. However, that's not possible
in today's code because we have a hard cap on user messages.
- We need to address having multiple user messages, because it confuses
the model.
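A sketch of the summary-filtering step, assuming a simplified string-based history and a made-up `[compacted]` prefix (the real item types and prefix differ):
```rust
/// Illustrative sketch: drop earlier summaries from the history before
/// compacting again. The string-based history and "[compacted]" prefix
/// are assumptions, not the actual codex-core representation.
const COMPACT_PREFIX: &str = "[compacted]";

fn drop_prior_summaries(history: Vec<String>) -> Vec<String> {
    history
        .into_iter()
        .filter(|msg| !msg.starts_with(COMPACT_PREFIX))
        .collect()
}
```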
Testing:
- Verified that after compaction we always end up with one user message
(the task) and one summary, even across multiple compactions.
Fixes #4940. Fixes #4892.
When selecting "No, ask me to approve edits and commands" during
onboarding, the code wasn't applying the correct approval policy,
causing Codex to block all write operations instead of requesting
approval.
This PR fixes the issue by persisting the "DontTrust" decision in
config.toml as `trust_level = "untrusted"` and handling it in the
sandbox and approval policy logic, so Codex correctly asks for approval
before making changes.
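A minimal sketch of the decision-to-config mapping, with made-up type names (`TrustDecision` is illustrative, not the actual codex-core type):
```rust
/// Made-up types sketching the fix: persist the onboarding decision as a
/// trust_level and derive behavior from it, instead of silently blocking
/// writes.
enum TrustDecision {
    Trust,
    DontTrust,
}

/// Returns the line written to config.toml for each decision.
fn trust_level_entry(decision: TrustDecision) -> &'static str {
    match decision {
        TrustDecision::Trust => r#"trust_level = "trusted""#,
        // "No, ask me to approve edits and commands": the sandbox and
        // approval logic treat this as "ask before making changes".
        TrustDecision::DontTrust => r#"trust_level = "untrusted""#,
    }
}
```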
## Before (bug)
<img width="709" height="500" alt="bef"
src="https://github.com/user-attachments/assets/5aced26d-d810-4754-879a-89d9e4e0073b"
/>
## After (fixed)
<img width="713" height="359" alt="aft"
src="https://github.com/user-attachments/assets/9887bbcb-a9a5-4e54-8e76-9125a782226b"
/>
---------
Co-authored-by: Eric Traut <etraut@openai.com>
For better caching performance, all output items should be rendered, in
the order they were produced, before all new input items (for example,
all `function_call` items before all `function_call_output` items).
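A sketch of the reordering rule with a made-up item type; `partition` keeps each group's original order, which is the property we want:
```rust
/// Made-up item type sketching the ordering rule: all previously produced
/// output items first (original order preserved), then the new input items.
#[derive(Debug, PartialEq)]
enum Item {
    FunctionCall(&'static str),       // output item produced by the model
    FunctionCallOutput(&'static str), // new input item we send back
}

fn order_for_caching(items: Vec<Item>) -> Vec<Item> {
    // partition() keeps each group's relative order, so output items stay
    // in the order they were produced.
    let (outputs, inputs): (Vec<_>, Vec<_>) = items
        .into_iter()
        .partition(|item| matches!(item, Item::FunctionCall(_)));
    outputs.into_iter().chain(inputs).collect()
}
```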
## Summary
- default the `tui.notifications` setting to enabled so desktop
notifications work out of the box
- update configuration tests and documentation to reflect the new
default
## Testing
- `cargo test -p codex-core` *(fails:
`exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`
is flaky in this sandbox because the spawned grandchild process stays
alive)*
- `cargo test -p codex-core
exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`
*(fails: same sandbox limitation as above)*
------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_69166f811144832c9e8aaf8ee2642373)
Core event to app-server event mapping:
1. `codex/event/reasoning_content_delta` -> `item/reasoning/summaryTextDelta`
2. `codex/event/reasoning_raw_content_delta` -> `item/reasoning/textDelta`
3. `codex/event/agent_message_content_delta` -> `item/agentMessage/delta`
4. `codex/event/agent_reasoning_section_break` -> `item/reasoning/summaryPartAdded`
Also added a change in core to pass down the content index, summary
index, and item ID from events.
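The mapping above as a stateless lookup; a sketch, not the actual app-server translation code:
```rust
/// The table above as a stateless lookup (sketch only).
fn map_core_event(core: &str) -> Option<&'static str> {
    Some(match core {
        "codex/event/reasoning_content_delta" => "item/reasoning/summaryTextDelta",
        "codex/event/reasoning_raw_content_delta" => "item/reasoning/textDelta",
        "codex/event/agent_message_content_delta" => "item/agentMessage/delta",
        "codex/event/agent_reasoning_section_break" => "item/reasoning/summaryPartAdded",
        _ => return None,
    })
}
```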
Tested with `git checkout owen/app_server_test_client && cargo run
-p codex-app-server-test-client -- send-message-v2 "hello"` and verified
that the new events are emitted correctly.
We've received many reports of Codex hanging when calling certain tools;
[here](https://github.com/openai/codex/issues/3204) is one example. This
is likely a major cause. The problem occurs when
`consume_truncated_output` waits for `stdout` and `stderr` to be closed
once the child process terminates. This normally works fine, but it
doesn't handle the case where the child has spawned grandchild processes
that inherit `stdout` and `stderr`.
The fix was originally written by @md-oai in [this
PR](https://github.com/openai/codex/pull/1852), which has gone stale.
I've copied the original fix (which looks sound to me) and added an
integration test to prevent future regressions.
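The idea behind the fix, as a hedged sketch (names illustrative): once the child exits, drain whatever output is readable within a grace period instead of blocking until the pipes reach EOF:
```rust
use std::time::Duration;
use tokio::io::{AsyncRead, AsyncReadExt};
use tokio::time::timeout;

/// Sketch: after the child exits, a grandchild holding the inherited pipe
/// can keep it open forever, so drain whatever is readable within a grace
/// period instead of waiting for EOF. Partial reads are kept because
/// read_to_end appends to buf before the timeout cancels it.
async fn drain_after_exit<R: AsyncRead + Unpin>(mut pipe: R) -> Vec<u8> {
    let mut buf = Vec::new();
    let _ = timeout(Duration::from_millis(500), pipe.read_to_end(&mut buf)).await;
    buf
}
```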
- Introduces a screen to inform users of model changes.
- The config name is passed in so this component can be reused for
future model changes.
This PR addresses https://github.com/openai/codex/issues/6360. The root
problem is that the TUI was directly loading the `auth.json` file to
access the auth information. It should instead be using the AuthManager,
which records the current auth information. The `auth.json` file can be
overwritten at any time by other instances of the CLI or extension, so
its information can be out of sync with the current instance. The
`/status` command should always report the auth information associated
with the current instance.
An alternative fix for this bug was submitted by @chojs23 in [this
PR](https://github.com/openai/codex/pull/6495). That approach was only a
partial fix.
This adds support for a new variant of the shell tool behind a flag. To
test, run `codex` with `--enable shell_command_tool`, which will
register the tool with Codex under the name `shell_command` that accepts
the following shape:
```python
{
command: str,
workdir: str | None,
timeout_ms: int | None,
with_escalated_permissions: bool | None,
justification: str | None,
}
```
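For reference, the same shape as a Rust struct; a sketch of what the PR's `ShellCommandToolCallParams` might look like, not necessarily its exact definition:
```rust
use serde::Deserialize;

/// The shape above as a Rust struct (sketch only).
#[derive(Debug, Deserialize)]
struct ShellCommandToolCallParams {
    command: String,
    workdir: Option<String>,
    timeout_ms: Option<u64>,
    with_escalated_permissions: Option<bool>,
    justification: Option<String>,
}
```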
This is comparable to the existing tool registered under
`shell`/`container.exec`. The primary difference is that it accepts
`command` as a `str` instead of a `str[]`. The `shell_command` tool
executes by running `execvp(["bash", "-lc", command])`, though the exact
arguments to `execvp(3)` depend on the user's default shell.
The hypothesis is that this will simplify things for the model. For
example, on Windows, instead of generating:
```json
{"command": ["pwsh.exe", "-NoLogo", "-Command", "ls -Name"]}
```
The model could simply generate:
```json
{"command": "ls -Name"}
```
As part of this change, I extracted some logic out of `user_shell.rs` as
`Shell::derive_exec_args()` so that it can be reused in
`codex-rs/core/src/tools/handlers/shell.rs`. Note the original code
generated exec arg lists like:
```javascript
["bash", "-lc", command]
["zsh", "-lc", command]
["pwsh.exe", "-NoProfile", "-Command", command]
```
Using `-l` for Bash and Zsh but then specifying `-NoProfile` for
PowerShell seemed inconsistent to me, so I changed this in the new
implementation while also adding a `use_login_shell: bool` option to
make the behavior explicit. If we decide to add a `login: bool` to
`ShellCommandToolCallParams`, like we have for unified exec:
807e2c27f0/codex-rs/core/src/tools/handlers/unified_exec.rs (L33-L34)
then this option should make it straightforward to support.
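To make the extraction concrete, here's a sketch of what `Shell::derive_exec_args()` might look like given the description above; the `Shell` enum and PowerShell handling are illustrative:
```rust
/// Illustrative sketch of `Shell::derive_exec_args()`; the real `Shell`
/// type and its PowerShell handling live in codex-rs and may differ.
enum Shell {
    Bash,
    Zsh,
    PowerShell,
}

impl Shell {
    fn derive_exec_args(&self, command: &str, use_login_shell: bool) -> Vec<String> {
        match self {
            Shell::Bash | Shell::Zsh => {
                let name = if matches!(self, Shell::Bash) { "bash" } else { "zsh" };
                let flag = if use_login_shell { "-lc" } else { "-c" };
                vec![name.into(), flag.into(), command.into()]
            }
            Shell::PowerShell => {
                let mut args = vec!["pwsh.exe".to_string()];
                if !use_login_shell {
                    // Rough analogue of a non-login shell: skip profile scripts.
                    args.push("-NoProfile".into());
                }
                args.extend(["-Command".into(), command.into()]);
                args
            }
        }
    }
}
```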
Unified exec isn't working on Linux because we don't provide the correct
arg0. The library we use for pty management doesn't allow setting arg0
separately from the executable, so this uses the same aliasing strategy
we use for `apply_patch` for `codex-linux-sandbox`, with a `#[ctor]`
hack to dispatch `codex-linux-sandbox` calls.
Addresses https://github.com/openai/codex/issues/6450
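A Linux-only sketch of the `#[ctor]` dispatch (entry-point name is a placeholder); reading `/proc/self/cmdline` avoids relying on `std::env::args` before `main`:
```rust
use ctor::ctor;

/// Linux-only sketch: before main runs, check arg0 via /proc/self/cmdline
/// and, if the process was exec'd under the codex-linux-sandbox alias,
/// run the sandbox entry point instead of the normal CLI.
#[ctor]
fn dispatch_linux_sandbox_by_arg0() {
    let Ok(cmdline) = std::fs::read("/proc/self/cmdline") else { return };
    let Some(arg0) = cmdline.split(|b| *b == 0).next() else { return };
    if String::from_utf8_lossy(arg0).ends_with("codex-linux-sandbox") {
        // run_linux_sandbox_main is a placeholder for the real entry point.
        std::process::exit(run_linux_sandbox_main());
    }
}

fn run_linux_sandbox_main() -> i32 {
    0 // stand-in for the real sandbox main
}
```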
## Summary
- add a `hide_rate_limit_model_nudge` notice flag plus config edit
plumbing so the rate limit reminder preference is persisted and
documented
- extend the chat widget prompt with a "never show again" option, and
wire new app events so selecting it hides future nudges immediately and
writes the config
- add unit coverage and refresh the snapshot for the three-option prompt
## Testing
- `just fmt`
- `just fix -p codex-tui`
- `just fix -p codex-core`
- `cargo test -p codex-tui`
- `cargo test -p codex-core` *(fails at
`exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`:
grandchild process still alive)*
------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6910d7f407748321b2661fc355416994)