Compare commits

..

34 Commits

Author SHA1 Message Date
Ahmed Ibrahim
ffc4f74598 d 2025-09-29 10:03:12 -07:00
Ahmed Ibrahim
886fd9a7e9 Merge branch 'main' into worktree-core 2025-09-29 09:50:57 -07:00
Ahmed Ibrahim
d4750bc6fd core/tui: complete worktree integration (services) and update TUI test for new SessionConfiguredEvent field 2025-09-29 09:02:41 -07:00
Ahmed Ibrahim
103265bac6 core: add experimental per-session git worktree mode
Introduce WorktreeHandle to create/reuse/remove linked checkouts under <cwd>/codex/<conversation>; add config flag enable_git_worktree (and trust logic for worktrees); plumb protocol: RemoveWorktree, WorktreeRemoved, SessionConfigured.worktree_path; update exec/TUI integration; docs: experimental config section.
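
A rough sketch of the mechanics, for illustration (only the `<cwd>/codex/<conversation>` layout comes from this message; the helper name and the exact `git worktree` invocation are assumptions):

```rust
use std::path::{Path, PathBuf};
use std::process::Command;

// Hypothetical helper; creates (or reuses) a linked checkout for one conversation.
fn create_or_reuse_worktree(repo_root: &Path, conversation_id: &str) -> std::io::Result<PathBuf> {
    let worktree_path = repo_root.join("codex").join(conversation_id);
    if !worktree_path.exists() {
        // A linked checkout shares the object store with the primary checkout,
        // so each conversation gets an isolated working directory cheaply.
        let status = Command::new("git")
            .current_dir(repo_root)
            .arg("worktree")
            .arg("add")
            .arg(&worktree_path)
            .status()?;
        if !status.success() {
            return Err(std::io::Error::other("git worktree add failed"));
        }
    }
    Ok(worktree_path)
}
```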

# Conflicts:
#	codex-rs/core/src/codex.rs
#	codex-rs/tui/src/chatwidget.rs
2025-09-29 08:58:37 -07:00
Ed Bayes
c4120a265b [CODEX-3595] Remove period when copying highlighted text in iTerm (#4419) 2025-09-28 20:50:04 -07:00
Michael Bolin
618a42adf5 feat: introduce npm module for codex-responses-api-proxy (#4417)
This PR expands `.github/workflows/rust-release.yml` so that it also
builds and publishes the `npm` module for
`@openai/codex-responses-api-proxy` in addition to `@openai/codex`. Note
both `npm` modules are similar, in that they each contain a single `.js`
file that is a thin launcher around the appropriate native executable.
(Since we have a minimal dependency on Node.js, I also lowered the
minimum version from 20 to 16 and verified that works on my machine.)

As part of this change, we tighten up some of the docs around
`codex-responses-api-proxy` and ensure the details regarding protecting
the `OPENAI_API_KEY` in memory match the implementation.

To test the `npm` build process, I ran:

```
./codex-cli/scripts/build_npm_package.py --package codex-responses-api-proxy --version 0.43.0-alpha.3
```

which stages the `npm` module for `@openai/codex-responses-api-proxy` in
a temp directory, using the binary artifacts from
https://github.com/openai/codex/releases/tag/rust-v0.43.0-alpha.3.
2025-09-28 19:34:06 -07:00
dedrisian-oai
a9d54b9e92 Add /review to main commands (#4416)
<img width="517" height="249" alt="Screenshot 2025-09-28 at 4 33 26 PM"
src="https://github.com/user-attachments/assets/6d34734e-fe3c-4b88-8239-a6436bcb9fa5"
/>
2025-09-28 16:59:39 -07:00
Michael Bolin
79e51dd607 fix: clean up some minor issues with .github/workflows/ci.yml (#4408) 2025-09-28 15:31:17 -07:00
Michael Bolin
ff6dbff0b6 feat: build codex-responses-api-proxy for all platforms as part of the GitHub Release (#4406)
This should make the `codex-responses-api-proxy` binaries available for
all platforms in a GitHub Release as well as a corresponding DotSlash
file.

Making `codex-responses-api-proxy` available as an `npm` module will be
done in a follow-up PR.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/4404).
* __->__ #4406
* #4404
* #4403
2025-09-28 15:25:15 -07:00
Michael Bolin
99841332e2 chore: remove responses-api-proxy from the multitool (#4404)
This removes the `codex responses-api-proxy` subcommand in favor of
running it as a standalone CLI.

As part of this change, we:

- remove the dependency on `tokio`/`async/await` as well as `codex_arg0`
- introduce the use of `pre_main_hardening()` so `CODEX_SECURE_MODE=1`
is not required

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/4404).
* #4406
* __->__ #4404
* #4403
2025-09-28 15:22:27 -07:00
Michael Bolin
7407469791 chore: lower logging level from error to info for MCP startup (#4412) 2025-09-28 15:13:44 -07:00
Michael Bolin
43615becf0 chore: move pre_main_hardening() utility into its own crate (#4403) 2025-09-28 14:35:14 -07:00
dedrisian-oai
9ee6e6f342 Improve update nudge (#4405)
Makes the update nudge larger and adds a link to see the latest release:

<img width="542" height="337" alt="Screenshot 2025-09-28 at 11 19 05 AM"
src="https://github.com/user-attachments/assets/1facce96-72f0-4a97-910a-df8b5b8b07af"
/>
2025-09-28 11:46:15 -07:00
Thibault Sottiaux
d7286e9829 chore: remove model upgrade popup (#4332) 2025-09-27 13:25:09 -07:00
Fouad Matin
bcf2bc0aa5 fix(tui): make ? work again (#4362)
Revert #4330 #4316
2025-09-27 12:18:33 -07:00
Michael Bolin
68765214b3 fix: remove default timeout of 30s in the proxy (#4336)
This is likely the reason that I saw some conversations "freeze up" when
using the proxy.

Note the client in `core` does not specify a timeout when making
requests to the Responses API, so the proxy should not, either.
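
A minimal sketch of the resulting setup, assuming a hypothetical `build_proxy_client` helper (`reqwest` applies no overall request timeout unless one is set explicitly):

```rust
use reqwest::Client;

// No `.timeout(...)` on the builder: requests may run for as long as the
// upstream Responses API keeps streaming, matching the client in `core`.
fn build_proxy_client() -> reqwest::Result<Client> {
    Client::builder().build()
}
```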
2025-09-27 07:54:32 -07:00
Ahmed Ibrahim
5c67dc3af1 Edit the spacing in shortcuts (#4330)
<img width="739" height="132" alt="image"
src="https://github.com/user-attachments/assets/e8d40abb-ac41-49a2-abc4-ddc6decef989"
/>
2025-09-26 22:54:38 -07:00
Jeremy Rose
c0960c0f49 tui: separator above final agent message (#4324)
Adds a separator line before the final agent message

<img width="1011" height="884" alt="Screenshot 2025-09-26 at 4 55 01 PM"
src="https://github.com/user-attachments/assets/7c91adbf-6035-4578-8b88-a6921f11bcbc"
/>
2025-09-26 22:49:59 -07:00
Thibault Sottiaux
90c3a5650c fix: set gpt-5-codex medium preset reasoning (#4335)
Otherwise it shows up as none, see
https://github.com/openai/codex/issues/4321
2025-09-26 22:31:39 -07:00
Thibault Sottiaux
a3254696c8 docs: refresh README under codex-rs (#4333) 2025-09-26 21:45:46 -07:00
Ahmed Ibrahim
2719fdd12a Add "? for shortcuts" (#4316)
https://github.com/user-attachments/assets/9e61b197-024b-4cbc-b40d-c446b448e759
2025-09-26 18:24:26 -07:00
Gabriel Peal
3a1be084f9 [MCP] Add experimental support for streamable HTTP MCP servers (#4317)
This PR adds support for streamable HTTP MCP servers when the
`experimental_use_rmcp_client` flag is enabled.

To set one up, simply add a new MCP server config with the URL:
```
[mcp_servers.figma]
url = "http://127.0.0.1:3845/mcp"
```

It also supports an optional `bearer_token`, which will be provided in an
Authorization header. The full OAuth flow is not supported yet.

The config parsing will throw if it detects that the user mixed and
matched config fields (like command + bearer token or url + env).
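
A minimal sketch of that mutual-exclusion check (the field names come from this description; the `Transport` variants and `validate` helper are illustrative, not the actual implementation):

```rust
use std::collections::HashMap;

struct RawMcpServerConfig {
    command: Option<String>,
    env: Option<HashMap<String, String>>,
    url: Option<String>,
    bearer_token: Option<String>,
}

enum Transport {
    Stdio { command: String, env: Option<HashMap<String, String>> },
    StreamableHttp { url: String, bearer_token: Option<String> },
}

fn validate(raw: RawMcpServerConfig) -> Result<Transport, String> {
    match (&raw.command, &raw.url) {
        // Mixing stdio and HTTP fields is rejected outright.
        (Some(_), _) if raw.bearer_token.is_some() => {
            Err("`bearer_token` is only valid together with `url`".into())
        }
        (None, Some(_)) if raw.env.is_some() => {
            Err("`env` is only valid together with `command`".into())
        }
        (Some(_), None) => Ok(Transport::Stdio {
            command: raw.command.unwrap(),
            env: raw.env,
        }),
        (None, Some(_)) => Ok(Transport::StreamableHttp {
            url: raw.url.unwrap(),
            bearer_token: raw.bearer_token,
        }),
        _ => Err("exactly one of `command` or `url` must be set".into()),
    }
}
```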

The best way to review this is to start with `core/src` and
`rmcp-client/src/rmcp_client.rs`; the rest is tests and propagating the
`Transport` struct around the codebase.

Example with the Figma MCP:
<img width="5084" height="1614" alt="CleanShot 2025-09-26 at 13 35 40"
src="https://github.com/user-attachments/assets/eaf2771e-df3e-4300-816b-184d7dec5a28"
/>
2025-09-26 21:24:01 -04:00
Jeremy Rose
43b63ccae8 update composer + user message styling (#4240)
Changes:

- the composer and user messages now have a colored background that
stretches the entire width of the terminal.
- the prompt character was changed from a cyan `▌` to a bold `›`.
- the "working" shimmer now follows the "dark gray" color of the
terminal, better matching the terminal's color scheme

| Terminal + Background | Screenshot |
|-----------------------|------------|
| iTerm with dark bg | <img width="810" height="641" alt="Screenshot 2025-09-25 at 11 44 52 AM" src="https://github.com/user-attachments/assets/1317e579-64a9-4785-93e6-98b0258f5d92" /> |
| iTerm with light bg | <img width="845" height="540" alt="Screenshot 2025-09-25 at 11 46 29 AM" src="https://github.com/user-attachments/assets/e671d490-c747-4460-af0b-3f8d7f7a6b8e" /> |
| iTerm with color bg | <img width="825" height="564" alt="Screenshot 2025-09-25 at 11 47 12 AM" src="https://github.com/user-attachments/assets/141cda1b-1164-41d5-87da-3be11e6a3063" /> |
| Terminal.app with dark bg | <img width="577" height="367" alt="Screenshot 2025-09-25 at 11 45 22 AM" src="https://github.com/user-attachments/assets/93fc4781-99f7-4ee7-9c8e-3db3cd854fe5" /> |
| Terminal.app with light bg | <img width="577" height="367" alt="Screenshot 2025-09-25 at 11 46 04 AM" src="https://github.com/user-attachments/assets/19bf6a3c-91e0-447b-9667-b8033f512219" /> |
| Terminal.app with color bg | <img width="577" height="367" alt="Screenshot 2025-09-25 at 11 45 50 AM" src="https://github.com/user-attachments/assets/dd7c4b5b-342e-4028-8140-f4e65752bd0b" /> |
2025-09-26 16:35:56 -07:00
pakrym-oai
cc1b21e47f Add turn started/completed events and correct exit code on error (#4309)
Adds new `turn.started`/`turn.completed` events; `turn.completed` includes
usage. Also ensures we return exit code 1 on failures.
```
{
  "type": "session.created",
  "session_id": "019987a7-93e7-7b20-9e05-e90060e411ea"
}
{
  "type": "turn.started"
}
...
{
  "type": "turn.completed",
  "usage": {
    "input_tokens": 78913,
    "cached_input_tokens": 65280,
    "output_tokens": 1099
  }
}
```
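
A sketch of how a consumer might pull usage out of this stream (the struct shapes mirror the sample events above; the full schema is an assumption):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct Usage {
    input_tokens: u64,
    cached_input_tokens: u64,
    output_tokens: u64,
}

#[derive(Deserialize)]
#[serde(tag = "type")]
enum ExecEvent {
    #[serde(rename = "session.created")]
    SessionCreated { session_id: String },
    #[serde(rename = "turn.started")]
    TurnStarted,
    #[serde(rename = "turn.completed")]
    TurnCompleted { usage: Usage },
    #[serde(other)]
    Other,
}

fn main() -> std::io::Result<()> {
    // Read the JSONL stream from stdin and report usage per completed turn.
    for line in std::io::stdin().lines() {
        if let Ok(ExecEvent::TurnCompleted { usage }) = serde_json::from_str(&line?) {
            println!(
                "turn finished: {} input / {} cached / {} output tokens",
                usage.input_tokens, usage.cached_input_tokens, usage.output_tokens
            );
        }
    }
    Ok(())
}
```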
2025-09-26 16:21:50 -07:00
iceweasel-oai
55801700de reject dangerous commands for AskForApproval::Never (#4307)
If we detect a dangerous command but approval_policy is Never, simply
reject the command.
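
As a rough sketch (only `AskForApproval::Never` comes from the commit; the surrounding names are hypothetical):

```rust
enum AskForApproval {
    Never,
    OnRequest,
}

enum SafetyDecision {
    Reject(&'static str),
    AskUser,
}

fn on_dangerous_command(policy: &AskForApproval) -> SafetyDecision {
    match policy {
        // With `Never` there is no one to prompt, so reject instead of asking.
        AskForApproval::Never => SafetyDecision::Reject("dangerous command rejected by policy"),
        _ => SafetyDecision::AskUser,
    }
}
```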
2025-09-26 14:08:28 -07:00
Ahmed Ibrahim
1fba99ed85 /status followup (#4304)
- Render `send a message to load usage data` at the beginning of the session
- Render `data not available yet` if no rate limits have been received
- nit: casing
- Deleted stale snapshots that were moved to
`codex-rs/tui/src/status/snapshots`
2025-09-26 18:16:54 +00:00
Thibault Sottiaux
d3f6f6629b chore: dead code removal; remove frame count and stateful render helpers (#4310) 2025-09-26 17:52:02 +00:00
Gabriel Peal
e555a36c6a [MCP] Introduce an experimental official rust sdk based mcp client (#4252)
The [official Rust
SDK](57fc428c57)
has come a long way since we first started our MCP client implementation
five months ago, and today it is much more complete than our own
stdio-only implementation.

This PR introduces a new config flag `experimental_use_rmcp_client`
which will use a new MCP client powered by the SDK instead of our own.

To keep this PR simple, I've only implemented the same stdio MCP
functionality that we had but will expand on it with future PRs.

---------

Co-authored-by: pakrym-oai <pakrym@openai.com>
2025-09-26 13:13:37 -04:00
pakrym-oai
ea095e30c1 Add todo-list tool support (#4255)
Adds a one-per-turn todo-list item and an `item.updated` event

```jsonl
{"type":"item.started","item":{"id":"item_6","item_type":"todo_list","items":[{"text":"Record initial two-step plan  now","completed":false},{"text":"Update progress to next step","completed":false}]}}
{"type":"item.updated","item":{"id":"item_6","item_type":"todo_list","items":[{"text":"Record initial two-step plan  now","completed":true},{"text":"Update progress to next step","completed":false}]}}
{"type":"item.completed","item":{"id":"item_6","item_type":"todo_list","items":[{"text":"Record initial two-step plan  now","completed":true},{"text":"Update progress to next step","completed":false}]}}
```
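
For reference, the item payload mirrored as Rust types (shapes inferred from the samples above; the actual schema may differ):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct TodoItem {
    text: String,
    completed: bool,
}

#[derive(Deserialize)]
struct TodoListItem {
    id: String,
    item_type: String, // "todo_list"
    items: Vec<TodoItem>,
}
```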
2025-09-26 09:35:47 -07:00
Michael Bolin
c549481513 feat: introduce responses-api-proxy (#4246)
Details are in `responses-api-proxy/README.md`, but the key contribution
of this PR is a new subcommand, `codex responses-api-proxy`, which reads
the auth token for use with the OpenAI Responses API from `stdin` at
startup and then proxies `POST` requests to `/v1/responses` over to
`https://api.openai.com/v1/responses`, injecting the auth token as part
of the `Authorization` header.

The expectation is that `codex responses-api-proxy` is launched by a
privileged user who has access to the auth token so that it can be used
by unprivileged users of the Codex CLI on the same host.

If the client only has one user account with `sudo`, one option is to:

- run `sudo codex responses-api-proxy --http-shutdown --server-info
/tmp/server-info.json` to start the server
- record the port written to `/tmp/server-info.json`
- relinquish their `sudo` privileges (which is irreversible!) like so:

```
sudo deluser $USER sudo || sudo gpasswd -d $USER sudo || true
```

- use `codex` with the proxy (see `README.md`)
- when done, make a `GET` request to the server using the `PORT` from
`server-info.json` to shut it down:

```shell
curl --fail --silent --show-error "http://127.0.0.1:$PORT/shutdown"
```

To protect the auth token, we do the following (sketched in code below):

- allocate a 1024 byte buffer on the stack and write `"Bearer "` into it
to start
- we then read from `stdin`, copying the contents into the buffer
after the prefix
- after verifying the input looks good, we create a `String` from that
buffer (so the data is now on the heap)
- we zero out the stack-allocated buffer using
https://crates.io/crates/zeroize so it is not optimized away by the
compiler
- we invoke `.leak()` on the `String` so we can treat its contents as a
`&'static str`, as it will live for the rest of the process
- on UNIX, we `mlock(2)` the memory backing the `&'static str`
- when using the `&'static str` when building an HTTP request, we use
`HeaderValue::from_static()` to avoid copying the `&str`
- we also invoke `.set_sensitive(true)` on the `HeaderValue`, which in
theory indicates to other parts of the HTTP stack that the header should
be treated with "special care" to avoid leakage:


439d1c50d7/src/header/value.rs (L346-L376)
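
Putting those steps together, a condensed sketch might look like this (a hypothetical helper, with input validation and error handling trimmed):

```rust
use std::io::Read;

use http::HeaderValue;
use zeroize::Zeroize;

fn read_bearer_header_from_stdin() -> std::io::Result<HeaderValue> {
    const PREFIX: &[u8] = b"Bearer ";
    let mut buf = [0u8; 1024];
    buf[..PREFIX.len()].copy_from_slice(PREFIX);

    // Read the token from stdin into the stack buffer, after the prefix.
    let n = std::io::stdin().read(&mut buf[PREFIX.len()..])?;
    let token: &'static str = String::from_utf8(buf[..PREFIX.len() + n].to_vec())
        .map_err(std::io::Error::other)?
        .leak();

    // Zero the stack copy; `zeroize` guarantees the wipe is not optimized away.
    buf.zeroize();

    // On UNIX, pin the heap copy so it cannot be swapped to disk.
    #[cfg(unix)]
    unsafe {
        libc::mlock(token.as_ptr().cast(), token.len());
    }

    // `from_static` avoids another copy; `set_sensitive` hints that the value
    // deserves special care (e.g. it is skipped in debug output).
    let mut value = HeaderValue::from_static(token.trim_end());
    value.set_sensitive(true);
    Ok(value)
}
```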
2025-09-26 08:19:00 -07:00
jif-oai
8797145678 fix: token usage for compaction (#4281)
Emit a token usage update when draining compaction
2025-09-26 16:24:27 +02:00
Ahmed Ibrahim
a53720e278 Show exec output on success with trimmed display (#4113)
- Refactor Exec Cell into its own module
- update exec command rendering to inline the first command line
- limit continuation lines
- always show trimmed output
2025-09-26 07:13:44 -07:00
Ahmed Ibrahim
41f5d61f24 Move approvals to use ListSelectionView (#4275)
Unify selection menus:
- Move approvals to the vertical menu `ListSelectionView`
- Add header section to `ListSelectionView`

<img width="502" height="214" alt="image"
src="https://github.com/user-attachments/assets/f4b43ddf-3549-403c-ad9e-a523688714e4"
/>

<img width="748" height="214" alt="image"
src="https://github.com/user-attachments/assets/f94ac7b5-dc94-4dc0-a1df-7a8e3ba2453b"
/>

---------

Co-authored-by: pakrym-oai <pakrym@openai.com>
2025-09-26 07:13:29 -07:00
Ahmed Ibrahim
02609184be Refactor the footer logic to a new file (#4259)
This will help us have more control over the footer

---------

Co-authored-by: pakrym-oai <pakrym@openai.com>
2025-09-26 07:13:13 -07:00
225 changed files with 11443 additions and 7179 deletions

View File

@@ -27,6 +27,34 @@
"path": "codex.exe"
}
}
},
"codex-responses-api-proxy": {
"platforms": {
"macos-aarch64": {
"regex": "^codex-responses-api-proxy-aarch64-apple-darwin\\.zst$",
"path": "codex-responses-api-proxy"
},
"macos-x86_64": {
"regex": "^codex-responses-api-proxy-x86_64-apple-darwin\\.zst$",
"path": "codex-responses-api-proxy"
},
"linux-x86_64": {
"regex": "^codex-responses-api-proxy-x86_64-unknown-linux-musl\\.zst$",
"path": "codex-responses-api-proxy"
},
"linux-aarch64": {
"regex": "^codex-responses-api-proxy-aarch64-unknown-linux-musl\\.zst$",
"path": "codex-responses-api-proxy"
},
"windows-x86_64": {
"regex": "^codex-responses-api-proxy-x86_64-pc-windows-msvc\\.exe\\.zst$",
"path": "codex-responses-api-proxy.exe"
},
"windows-aarch64": {
"regex": "^codex-responses-api-proxy-aarch64-pc-windows-msvc\\.exe\\.zst$",
"path": "codex-responses-api-proxy.exe"
}
}
}
}
}

View File

@@ -1,7 +1,7 @@
name: ci
on:
pull_request: { branches: [main] }
pull_request: {}
push: { branches: [main] }
jobs:
@@ -31,6 +31,7 @@ jobs:
- uses: facebook/install-dotslash@v2
- name: Stage npm package
id: stage_npm_package
env:
GH_TOKEN: ${{ github.token }}
run: |
@@ -40,13 +41,13 @@ jobs:
python3 ./codex-cli/scripts/build_npm_package.py \
--release-version "$CODEX_VERSION" \
--pack-output "$PACK_OUTPUT"
echo "PACK_OUTPUT=$PACK_OUTPUT" >> "$GITHUB_ENV"
echo "pack_output=$PACK_OUTPUT" >> "$GITHUB_OUTPUT"
- name: Upload staged npm package artifact
uses: actions/upload-artifact@v4
with:
name: codex-npm-staging
path: ${{ env.PACK_OUTPUT }}
path: ${{ steps.stage_npm_package.outputs.pack_output }}
- name: Ensure root README.md contains only ASCII and certain Unicode code points
run: ./scripts/asciicheck.py README.md

View File

@@ -97,7 +97,7 @@ jobs:
sudo apt install -y musl-tools pkg-config
- name: Cargo build
run: cargo build --target ${{ matrix.target }} --release --bin codex
run: cargo build --target ${{ matrix.target }} --release --bin codex --bin codex-responses-api-proxy
- name: Stage artifacts
shell: bash
@@ -107,8 +107,10 @@ jobs:
if [[ "${{ matrix.runner }}" == windows* ]]; then
cp target/${{ matrix.target }}/release/codex.exe "$dest/codex-${{ matrix.target }}.exe"
cp target/${{ matrix.target }}/release/codex-responses-api-proxy.exe "$dest/codex-responses-api-proxy-${{ matrix.target }}.exe"
else
cp target/${{ matrix.target }}/release/codex "$dest/codex-${{ matrix.target }}"
cp target/${{ matrix.target }}/release/codex-responses-api-proxy "$dest/codex-responses-api-proxy-${{ matrix.target }}"
fi
- if: ${{ matrix.runner == 'windows-11-arm' }}
@@ -216,17 +218,30 @@ jobs:
# build_npm_package.py requires DotSlash when staging releases.
- uses: facebook/install-dotslash@v2
- name: Stage npm package
- name: Stage codex CLI npm package
env:
GH_TOKEN: ${{ github.token }}
run: |
set -euo pipefail
TMP_DIR="${RUNNER_TEMP}/npm-stage"
./codex-cli/scripts/build_npm_package.py \
--package codex \
--release-version "${{ steps.release_name.outputs.name }}" \
--staging-dir "${TMP_DIR}" \
--pack-output "${GITHUB_WORKSPACE}/dist/npm/codex-npm-${{ steps.release_name.outputs.name }}.tgz"
- name: Stage responses API proxy npm package
env:
GH_TOKEN: ${{ github.token }}
run: |
set -euo pipefail
TMP_DIR="${RUNNER_TEMP}/npm-stage-responses"
./codex-cli/scripts/build_npm_package.py \
--package codex-responses-api-proxy \
--release-version "${{ steps.release_name.outputs.name }}" \
--staging-dir "${TMP_DIR}" \
--pack-output "${GITHUB_WORKSPACE}/dist/npm/codex-responses-api-proxy-npm-${{ steps.release_name.outputs.name }}.tgz"
- name: Create GitHub Release
uses: softprops/action-gh-release@v2
with:
@@ -269,7 +284,7 @@ jobs:
- name: Update npm
run: npm install -g npm@latest
- name: Download npm tarball from release
- name: Download npm tarballs from release
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
@@ -281,6 +296,10 @@ jobs:
--repo "${GITHUB_REPOSITORY}" \
--pattern "codex-npm-${version}.tgz" \
--dir dist/npm
gh release download "$tag" \
--repo "${GITHUB_REPOSITORY}" \
--pattern "codex-responses-api-proxy-npm-${version}.tgz" \
--dir dist/npm
# No NODE_AUTH_TOKEN needed because we use OIDC.
- name: Publish to npm
@@ -294,7 +313,14 @@ jobs:
tag_args+=(--tag "${NPM_TAG}")
fi
npm publish "${GITHUB_WORKSPACE}/dist/npm/codex-npm-${VERSION}.tgz" "${tag_args[@]}"
tarballs=(
"codex-npm-${VERSION}.tgz"
"codex-responses-api-proxy-npm-${VERSION}.tgz"
)
for tarball in "${tarballs[@]}"; do
npm publish "${GITHUB_WORKSPACE}/dist/npm/${tarball}" "${tag_args[@]}"
done
update-branch:
name: Update latest-alpha-cli branch

View File

@@ -1,4 +1,3 @@
<h1 align="center">OpenAI Codex CLI</h1>
<p align="center"><code>npm i -g @openai/codex</code><br />or <code>brew install codex</code></p>
@@ -102,4 +101,3 @@ Codex CLI supports a rich set of configuration options, with preferences stored
## License
This repository is licensed under the [Apache-2.0 License](LICENSE).

View File

@@ -208,7 +208,7 @@ The hardening mechanism Codex uses depends on your OS:
| Requirement | Details |
| --------------------------- | --------------------------------------------------------------- |
| Operating systems | macOS 12+, Ubuntu 20.04+/Debian 10+, or Windows 11 **via WSL2** |
| Node.js | **22 or newer** (LTS recommended) |
| Node.js | **16 or newer** (Node 20 LTS recommended) |
| Git (optional, recommended) | 2.23+ for built-in PR helpers |
| RAM | 4-GB minimum (8-GB recommended) |
@@ -513,7 +513,7 @@ Codex runs model-generated commands in a sandbox. If a proposed command or file
<details>
<summary>Does it work on Windows?</summary>
Not directly. It requires [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) - Codex has been tested on macOS and Linux with Node 22.
Not directly. It requires [Windows Subsystem for Linux (WSL2)](https://learn.microsoft.com/en-us/windows/wsl/install) - Codex is regularly tested on macOS and Linux with Node 20+, and also supports Node 16.
</details>

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env node
// Unified entry point for the Codex CLI.
import { spawn } from "node:child_process";
import { existsSync } from "fs";
import path from "path";
import { fileURLToPath } from "url";
@@ -68,7 +69,6 @@ const binaryPath = path.join(archRoot, "codex", codexBinaryName);
// executing. This allows us to forward those signals to the child process
// and guarantees that when either the child terminates or the parent
// receives a fatal signal, both processes exit in a predictable manner.
const { spawn } = await import("child_process");
function getUpdatedPath(newDirs) {
const pathSep = process.platform === "win32" ? ";" : ":";

View File

@@ -11,7 +11,7 @@
"codex": "bin/codex.js"
},
"engines": {
"node": ">=20"
"node": ">=16"
}
}
}

View File

@@ -7,7 +7,7 @@
},
"type": "module",
"engines": {
"node": ">=20"
"node": ">=16"
},
"files": [
"bin",

View File

@@ -13,6 +13,7 @@ from pathlib import Path
SCRIPT_DIR = Path(__file__).resolve().parent
CODEX_CLI_ROOT = SCRIPT_DIR.parent
REPO_ROOT = CODEX_CLI_ROOT.parent
RESPONSES_API_PROXY_NPM_ROOT = REPO_ROOT / "codex-rs" / "responses-api-proxy" / "npm"
GITHUB_REPO = "openai/codex"
# The docs are not clear on what the expected value/format of
@@ -23,6 +24,12 @@ WORKFLOW_NAME = ".github/workflows/rust-release.yml"
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(description="Build or stage the Codex CLI npm package.")
parser.add_argument(
"--package",
choices=("codex", "codex-responses-api-proxy"),
default="codex",
help="Which npm package to stage (default: codex).",
)
parser.add_argument(
"--version",
help="Version number to write to package.json inside the staged package.",
@@ -63,6 +70,7 @@ def parse_args() -> argparse.Namespace:
def main() -> int:
args = parse_args()
package = args.package
version = args.version
release_version = args.release_version
if release_version:
@@ -76,7 +84,7 @@ def main() -> int:
staging_dir, created_temp = prepare_staging_dir(args.staging_dir)
try:
stage_sources(staging_dir, version)
stage_sources(staging_dir, version, package)
workflow_url = args.workflow_url
resolved_head_sha: str | None = None
@@ -100,16 +108,23 @@ def main() -> int:
if not workflow_url:
raise RuntimeError("Unable to determine workflow URL for native binaries.")
install_native_binaries(staging_dir, workflow_url)
install_native_binaries(staging_dir, workflow_url, package)
if release_version:
staging_dir_str = str(staging_dir)
print(
f"Staged version {version} for release in {staging_dir_str}\n\n"
"Verify the CLI:\n"
f" node {staging_dir_str}/bin/codex.js --version\n"
f" node {staging_dir_str}/bin/codex.js --help\n\n"
)
if package == "codex":
print(
f"Staged version {version} for release in {staging_dir_str}\n\n"
"Verify the CLI:\n"
f" node {staging_dir_str}/bin/codex.js --version\n"
f" node {staging_dir_str}/bin/codex.js --help\n\n"
)
else:
print(
f"Staged version {version} for release in {staging_dir_str}\n\n"
"Verify the responses API proxy:\n"
f" node {staging_dir_str}/bin/codex-responses-api-proxy.js --help\n\n"
)
else:
print(f"Staged package in {staging_dir}")
@@ -136,20 +151,34 @@ def prepare_staging_dir(staging_dir: Path | None) -> tuple[Path, bool]:
return temp_dir, True
def stage_sources(staging_dir: Path, version: str) -> None:
def stage_sources(staging_dir: Path, version: str, package: str) -> None:
bin_dir = staging_dir / "bin"
bin_dir.mkdir(parents=True, exist_ok=True)
shutil.copy2(CODEX_CLI_ROOT / "bin" / "codex.js", bin_dir / "codex.js")
rg_manifest = CODEX_CLI_ROOT / "bin" / "rg"
if rg_manifest.exists():
shutil.copy2(rg_manifest, bin_dir / "rg")
if package == "codex":
shutil.copy2(CODEX_CLI_ROOT / "bin" / "codex.js", bin_dir / "codex.js")
rg_manifest = CODEX_CLI_ROOT / "bin" / "rg"
if rg_manifest.exists():
shutil.copy2(rg_manifest, bin_dir / "rg")
readme_src = REPO_ROOT / "README.md"
if readme_src.exists():
shutil.copy2(readme_src, staging_dir / "README.md")
readme_src = REPO_ROOT / "README.md"
if readme_src.exists():
shutil.copy2(readme_src, staging_dir / "README.md")
with open(CODEX_CLI_ROOT / "package.json", "r", encoding="utf-8") as fh:
package_json_path = CODEX_CLI_ROOT / "package.json"
elif package == "codex-responses-api-proxy":
launcher_src = RESPONSES_API_PROXY_NPM_ROOT / "bin" / "codex-responses-api-proxy.js"
shutil.copy2(launcher_src, bin_dir / "codex-responses-api-proxy.js")
readme_src = RESPONSES_API_PROXY_NPM_ROOT / "README.md"
if readme_src.exists():
shutil.copy2(readme_src, staging_dir / "README.md")
package_json_path = RESPONSES_API_PROXY_NPM_ROOT / "package.json"
else:
raise RuntimeError(f"Unknown package '{package}'.")
with open(package_json_path, "r", encoding="utf-8") as fh:
package_json = json.load(fh)
package_json["version"] = version
@@ -158,10 +187,19 @@ def stage_sources(staging_dir: Path, version: str) -> None:
out.write("\n")
def install_native_binaries(staging_dir: Path, workflow_url: str | None) -> None:
cmd = ["./scripts/install_native_deps.py"]
if workflow_url:
cmd.extend(["--workflow-url", workflow_url])
def install_native_binaries(staging_dir: Path, workflow_url: str, package: str) -> None:
package_components = {
"codex": ["codex", "rg"],
"codex-responses-api-proxy": ["codex-responses-api-proxy"],
}
components = package_components.get(package)
if components is None:
raise RuntimeError(f"Unknown package '{package}'.")
cmd = ["./scripts/install_native_deps.py", "--workflow-url", workflow_url]
for component in components:
cmd.extend(["--component", component])
cmd.append(str(staging_dir))
subprocess.check_call(cmd, cwd=CODEX_CLI_ROOT)

View File

@@ -9,6 +9,7 @@ import subprocess
import tarfile
import tempfile
import zipfile
from dataclasses import dataclass
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path
from typing import Iterable, Sequence
@@ -20,7 +21,7 @@ CODEX_CLI_ROOT = SCRIPT_DIR.parent
DEFAULT_WORKFLOW_URL = "https://github.com/openai/codex/actions/runs/17952349351" # rust-v0.40.0
VENDOR_DIR_NAME = "vendor"
RG_MANIFEST = CODEX_CLI_ROOT / "bin" / "rg"
CODEX_TARGETS = (
BINARY_TARGETS = (
"x86_64-unknown-linux-musl",
"aarch64-unknown-linux-musl",
"x86_64-apple-darwin",
@@ -29,6 +30,27 @@ CODEX_TARGETS = (
"aarch64-pc-windows-msvc",
)
@dataclass(frozen=True)
class BinaryComponent:
artifact_prefix: str # matches the artifact filename prefix (e.g. codex-<target>.zst)
dest_dir: str # directory under vendor/<target>/ where the binary is installed
binary_basename: str # executable name inside dest_dir (before optional .exe)
BINARY_COMPONENTS = {
"codex": BinaryComponent(
artifact_prefix="codex",
dest_dir="codex",
binary_basename="codex",
),
"codex-responses-api-proxy": BinaryComponent(
artifact_prefix="codex-responses-api-proxy",
dest_dir="codex-responses-api-proxy",
binary_basename="codex-responses-api-proxy",
),
}
RG_TARGET_PLATFORM_PAIRS: list[tuple[str, str]] = [
("x86_64-unknown-linux-musl", "linux-x86_64"),
("aarch64-unknown-linux-musl", "linux-aarch64"),
@@ -50,6 +72,16 @@ def parse_args() -> argparse.Namespace:
"known good run when omitted."
),
)
parser.add_argument(
"--component",
dest="components",
action="append",
choices=tuple(list(BINARY_COMPONENTS) + ["rg"]),
help=(
"Limit installation to the specified components."
" May be repeated. Defaults to 'codex' and 'rg'."
),
)
parser.add_argument(
"root",
nargs="?",
@@ -69,18 +101,28 @@ def main() -> int:
vendor_dir = codex_cli_root / VENDOR_DIR_NAME
vendor_dir.mkdir(parents=True, exist_ok=True)
components = args.components or ["codex", "rg"]
workflow_url = (args.workflow_url or DEFAULT_WORKFLOW_URL).strip()
if not workflow_url:
workflow_url = DEFAULT_WORKFLOW_URL
workflow_id = workflow_url.rstrip("/").split("/")[-1]
print(f"Downloading native artifacts from workflow {workflow_id}...")
with tempfile.TemporaryDirectory(prefix="codex-native-artifacts-") as artifacts_dir_str:
artifacts_dir = Path(artifacts_dir_str)
_download_artifacts(workflow_id, artifacts_dir)
install_codex_binaries(artifacts_dir, vendor_dir, CODEX_TARGETS)
install_binary_components(
artifacts_dir,
vendor_dir,
BINARY_TARGETS,
[name for name in components if name in BINARY_COMPONENTS],
)
fetch_rg(vendor_dir, DEFAULT_RG_TARGETS, manifest_path=RG_MANIFEST)
if "rg" in components:
print("Fetching ripgrep binaries...")
fetch_rg(vendor_dir, DEFAULT_RG_TARGETS, manifest_path=RG_MANIFEST)
print(f"Installed native dependencies into {vendor_dir}")
return 0
@@ -124,6 +166,8 @@ def fetch_rg(
results: dict[str, Path] = {}
max_workers = min(len(task_configs), max(1, (os.cpu_count() or 1)))
print("Installing ripgrep binaries for targets: " + ", ".join(targets))
with ThreadPoolExecutor(max_workers=max_workers) as executor:
future_map = {
executor.submit(
@@ -140,6 +184,7 @@ def fetch_rg(
for future in as_completed(future_map):
target = future_map[future]
results[target] = future.result()
print(f" installed ripgrep for {target}")
return [results[target] for target in targets]
@@ -158,40 +203,60 @@ def _download_artifacts(workflow_id: str, dest_dir: Path) -> None:
subprocess.check_call(cmd)
def install_codex_binaries(
artifacts_dir: Path, vendor_dir: Path, targets: Iterable[str]
) -> list[Path]:
def install_binary_components(
artifacts_dir: Path,
vendor_dir: Path,
targets: Iterable[str],
component_names: Sequence[str],
) -> None:
selected_components = [BINARY_COMPONENTS[name] for name in component_names if name in BINARY_COMPONENTS]
if not selected_components:
return
targets = list(targets)
if not targets:
return []
return
results: dict[str, Path] = {}
max_workers = min(len(targets), max(1, (os.cpu_count() or 1)))
with ThreadPoolExecutor(max_workers=max_workers) as executor:
future_map = {
executor.submit(_install_single_codex_binary, artifacts_dir, vendor_dir, target): target
for target in targets
}
for future in as_completed(future_map):
target = future_map[future]
results[target] = future.result()
return [results[target] for target in targets]
for component in selected_components:
print(
f"Installing {component.binary_basename} binaries for targets: "
+ ", ".join(targets)
)
max_workers = min(len(targets), max(1, (os.cpu_count() or 1)))
with ThreadPoolExecutor(max_workers=max_workers) as executor:
futures = {
executor.submit(
_install_single_binary,
artifacts_dir,
vendor_dir,
target,
component,
): target
for target in targets
}
for future in as_completed(futures):
installed_path = future.result()
print(f" installed {installed_path}")
def _install_single_codex_binary(artifacts_dir: Path, vendor_dir: Path, target: str) -> Path:
def _install_single_binary(
artifacts_dir: Path,
vendor_dir: Path,
target: str,
component: BinaryComponent,
) -> Path:
artifact_subdir = artifacts_dir / target
archive_name = _archive_name_for_target(target)
archive_name = _archive_name_for_target(component.artifact_prefix, target)
archive_path = artifact_subdir / archive_name
if not archive_path.exists():
raise FileNotFoundError(f"Expected artifact not found: {archive_path}")
dest_dir = vendor_dir / target / "codex"
dest_dir = vendor_dir / target / component.dest_dir
dest_dir.mkdir(parents=True, exist_ok=True)
binary_name = "codex.exe" if "windows" in target else "codex"
binary_name = (
f"{component.binary_basename}.exe" if "windows" in target else component.binary_basename
)
dest = dest_dir / binary_name
dest.unlink(missing_ok=True)
extract_archive(archive_path, "zst", None, dest)
@@ -200,10 +265,10 @@ def _install_single_codex_binary(artifacts_dir: Path, vendor_dir: Path, target:
return dest
def _archive_name_for_target(target: str) -> str:
def _archive_name_for_target(artifact_prefix: str, target: str) -> str:
if "windows" in target:
return f"codex-{target}.exe.zst"
return f"codex-{target}.zst"
return f"{artifact_prefix}-{target}.exe.zst"
return f"{artifact_prefix}-{target}.zst"
def _fetch_single_rg(

codex-rs/Cargo.lock generated
View File

@@ -333,6 +333,54 @@ version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c08606f8c3cbf4ce6ec8e28fb0014a2c086708fe954eaa885384a6165172e7e8"
[[package]]
name = "axum"
version = "0.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "021e862c184ae977658b36c4500f7feac3221ca5da43e3f25bd04ab6c79a29b5"
dependencies = [
"axum-core",
"bytes",
"futures-util",
"http",
"http-body",
"http-body-util",
"hyper",
"hyper-util",
"itoa",
"matchit",
"memchr",
"mime",
"percent-encoding",
"pin-project-lite",
"rustversion",
"serde",
"sync_wrapper",
"tokio",
"tower",
"tower-layer",
"tower-service",
]
[[package]]
name = "axum-core"
version = "0.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "68464cd0412f486726fb3373129ef5d2993f90c34bc2bc1c1e9943b2f4fc7ca6"
dependencies = [
"bytes",
"futures-core",
"http",
"http-body",
"http-body-util",
"mime",
"pin-project-lite",
"rustversion",
"sync_wrapper",
"tower-layer",
"tower-service",
]
[[package]]
name = "backtrace"
version = "0.3.75"
@@ -488,6 +536,12 @@ version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd16c4719339c4530435d38e511904438d07cce7950afa3718a84ac36c10e89e"
[[package]]
name = "cfg_aliases"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"
[[package]]
name = "chrono"
version = "0.4.42"
@@ -573,38 +627,6 @@ version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e9b18233253483ce2f65329a24072ec414db782531bdbb7d0bbc4bd2ce6b7e21"
[[package]]
name = "codex-agent"
version = "0.0.0"
dependencies = [
"anyhow",
"async-trait",
"base64",
"codex-apply-patch",
"codex-file-search",
"codex-protocol",
"core_test_support",
"libc",
"mcp-types",
"portable-pty",
"pretty_assertions",
"serde",
"serde_json",
"sha1",
"shlex",
"similar",
"tempfile",
"thiserror 2.0.16",
"time",
"tokio",
"tracing",
"tree-sitter",
"tree-sitter-bash",
"uuid",
"which",
"wildmatch",
]
[[package]]
name = "codex-ansi-escape"
version = "0.0.0"
@@ -670,11 +692,11 @@ dependencies = [
"codex-exec",
"codex-login",
"codex-mcp-server",
"codex-process-hardening",
"codex-protocol",
"codex-protocol-ts",
"codex-tui",
"ctor 0.5.0",
"libc",
"owo-colors",
"predicates",
"pretty_assertions",
@@ -709,14 +731,15 @@ dependencies = [
"base64",
"bytes",
"chrono",
"codex-agent",
"codex-apply-patch",
"codex-file-search",
"codex-mcp-client",
"codex-protocol",
"codex-rmcp-client",
"core_test_support",
"dirs",
"env-flags",
"escargot",
"eventsource-stream",
"futures",
"indexmap 2.10.0",
@@ -748,6 +771,8 @@ dependencies = [
"toml",
"toml_edit",
"tracing",
"tree-sitter",
"tree-sitter-bash",
"uuid",
"walkdir",
"which",
@@ -923,6 +948,13 @@ dependencies = [
"wiremock",
]
[[package]]
name = "codex-process-hardening"
version = "0.0.0"
dependencies = [
"libc",
]
[[package]]
name = "codex-protocol"
version = "0.0.0"
@@ -957,6 +989,39 @@ dependencies = [
"ts-rs",
]
[[package]]
name = "codex-responses-api-proxy"
version = "0.0.0"
dependencies = [
"anyhow",
"clap",
"codex-process-hardening",
"ctor 0.5.0",
"libc",
"reqwest",
"serde",
"serde_json",
"tiny_http",
"zeroize",
]
[[package]]
name = "codex-rmcp-client"
version = "0.0.0"
dependencies = [
"anyhow",
"axum",
"futures",
"mcp-types",
"pretty_assertions",
"reqwest",
"rmcp",
"serde",
"serde_json",
"tokio",
"tracing",
]
[[package]]
name = "codex-tui"
version = "0.0.0"
@@ -1274,8 +1339,18 @@ version = "0.20.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc7f46116c46ff9ab3eb1597a45688b6715c6e628b5c133e288e709a29bcb4ee"
dependencies = [
"darling_core",
"darling_macro",
"darling_core 0.20.11",
"darling_macro 0.20.11",
]
[[package]]
name = "darling"
version = "0.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9cdf337090841a411e2a7f3deb9187445851f91b309c0c0a29e05f74a00a48c0"
dependencies = [
"darling_core 0.21.3",
"darling_macro 0.21.3",
]
[[package]]
@@ -1292,13 +1367,38 @@ dependencies = [
"syn 2.0.104",
]
[[package]]
name = "darling_core"
version = "0.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1247195ecd7e3c85f83c8d2a366e4210d588e802133e1e355180a9870b517ea4"
dependencies = [
"fnv",
"ident_case",
"proc-macro2",
"quote",
"strsim 0.11.1",
"syn 2.0.104",
]
[[package]]
name = "darling_macro"
version = "0.20.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc34b93ccb385b40dc71c6fceac4b2ad23662c7eeb248cf10d529b7e055b6ead"
dependencies = [
"darling_core",
"darling_core 0.20.11",
"quote",
"syn 2.0.104",
]
[[package]]
name = "darling_macro"
version = "0.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d38308df82d1080de0afee5d069fa14b0326a88c14f15c5ccda35b4a6c414c81"
dependencies = [
"darling_core 0.21.3",
"quote",
"syn 2.0.104",
]
@@ -1675,6 +1775,17 @@ version = "3.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dea2df4cf52843e0452895c455a1a2cfbb842a1e7329671acf418fdc53ed4c59"
[[package]]
name = "escargot"
version = "0.5.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "11c3aea32bc97b500c9ca6a72b768a26e558264303d101d3409cf6d57a9ed0cf"
dependencies = [
"log",
"serde",
"serde_json",
]
[[package]]
name = "event-listener"
version = "5.4.0"
@@ -1980,8 +2091,10 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "335ff9f135e4384c8150d6f27c6daed433577f86b4750418338c01a1a2528592"
dependencies = [
"cfg-if",
"js-sys",
"libc",
"wasi 0.11.1+wasi-snapshot-preview1",
"wasm-bindgen",
]
[[package]]
@@ -1991,9 +2104,11 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26145e563e54f2cadc477553f1ec5ee650b00862f0a58bcd12cbdc5f0ea2d2f4"
dependencies = [
"cfg-if",
"js-sys",
"libc",
"r-efi",
"wasi 0.14.2+wasi-0.2.4",
"wasm-bindgen",
]
[[package]]
@@ -2190,6 +2305,7 @@ dependencies = [
"tokio",
"tokio-rustls",
"tower-service",
"webpki-roots",
]
[[package]]
@@ -2499,7 +2615,7 @@ version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "435d80800b936787d62688c927b6490e887c7ef5ff9ce922c6c6050fca75eb9a"
dependencies = [
"darling",
"darling 0.20.11",
"indoc",
"proc-macro2",
"quote",
@@ -2790,6 +2906,12 @@ dependencies = [
"hashbrown 0.15.4",
]
[[package]]
name = "lru-slab"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "112b39cec0b298b6c1999fee3e31427f74f676e4cb9879ed1a121b43661a4154"
[[package]]
name = "lsp-types"
version = "0.94.1"
@@ -2818,6 +2940,12 @@ dependencies = [
"regex-automata",
]
[[package]]
name = "matchit"
version = "0.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "47e1ffaa40ddd1f3ed91f717a33c8c0ee23fff369e3aa8772b9605cc1d22f4c3"
[[package]]
name = "mcp-types"
version = "0.0.0"
@@ -2969,7 +3097,19 @@ checksum = "ab2156c4fce2f8df6c499cc1c763e4394b7482525bf2a9701c9d79d215f519e4"
dependencies = [
"bitflags 2.9.1",
"cfg-if",
"cfg_aliases",
"cfg_aliases 0.1.1",
"libc",
]
[[package]]
name = "nix"
version = "0.30.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "74523f3a35e05aba87a1d978330aef40f67b0304ac79c1c00b294c9830543db6"
dependencies = [
"bitflags 2.9.1",
"cfg-if",
"cfg_aliases 0.2.1",
"libc",
]
@@ -3395,7 +3535,7 @@ dependencies = [
"lazy_static",
"libc",
"log",
"nix",
"nix 0.28.0",
"serial2",
"shared_library",
"shell-words",
@@ -3483,6 +3623,20 @@ dependencies = [
"unicode-ident",
]
[[package]]
name = "process-wrap"
version = "8.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a3ef4f2f0422f23a82ec9f628ea2acd12871c81a9362b02c43c1aa86acfc3ba1"
dependencies = [
"futures",
"indexmap 2.10.0",
"nix 0.30.1",
"tokio",
"tracing",
"windows",
]
[[package]]
name = "pulldown-cmark"
version = "0.10.3"
@@ -3526,6 +3680,61 @@ dependencies = [
"memchr",
]
[[package]]
name = "quinn"
version = "0.11.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b9e20a958963c291dc322d98411f541009df2ced7b5a4f2bd52337638cfccf20"
dependencies = [
"bytes",
"cfg_aliases 0.2.1",
"pin-project-lite",
"quinn-proto",
"quinn-udp",
"rustc-hash",
"rustls",
"socket2",
"thiserror 2.0.16",
"tokio",
"tracing",
"web-time",
]
[[package]]
name = "quinn-proto"
version = "0.11.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f1906b49b0c3bc04b5fe5d86a77925ae6524a19b816ae38ce1e426255f1d8a31"
dependencies = [
"bytes",
"getrandom 0.3.3",
"lru-slab",
"rand",
"ring",
"rustc-hash",
"rustls",
"rustls-pki-types",
"slab",
"thiserror 2.0.16",
"tinyvec",
"tracing",
"web-time",
]
[[package]]
name = "quinn-udp"
version = "0.5.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "addec6a0dcad8a8d96a771f815f0eaf55f9d1805756410b39f5fa81332574cbd"
dependencies = [
"cfg_aliases 0.2.1",
"libc",
"once_cell",
"socket2",
"tracing",
"windows-sys 0.60.2",
]
[[package]]
name = "quote"
version = "1.0.40"
@@ -3718,6 +3927,8 @@ dependencies = [
"native-tls",
"percent-encoding",
"pin-project-lite",
"quinn",
"rustls",
"rustls-pki-types",
"serde",
"serde_json",
@@ -3725,6 +3936,7 @@ dependencies = [
"sync_wrapper",
"tokio",
"tokio-native-tls",
"tokio-rustls",
"tokio-util",
"tower",
"tower-http",
@@ -3734,6 +3946,7 @@ dependencies = [
"wasm-bindgen-futures",
"wasm-streams",
"web-sys",
"webpki-roots",
]
[[package]]
@@ -3750,12 +3963,63 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "rmcp"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "534fd1cd0601e798ac30545ff2b7f4a62c6f14edd4aaed1cc5eb1e85f69f09af"
dependencies = [
"base64",
"bytes",
"chrono",
"futures",
"http",
"http-body",
"http-body-util",
"paste",
"pin-project-lite",
"process-wrap",
"rand",
"reqwest",
"rmcp-macros",
"schemars 1.0.4",
"serde",
"serde_json",
"sse-stream",
"thiserror 2.0.16",
"tokio",
"tokio-stream",
"tokio-util",
"tower-service",
"tracing",
"uuid",
]
[[package]]
name = "rmcp-macros"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ba777eb0e5f53a757e36f0e287441da0ab766564ba7201600eeb92a4753022e"
dependencies = [
"darling 0.21.3",
"proc-macro2",
"quote",
"serde_json",
"syn 2.0.104",
]
[[package]]
name = "rustc-demangle"
version = "0.1.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "989e6739f80c4ad5b13e0fd7fe89531180375b18520cc8c82080e4dc4035b84f"
[[package]]
name = "rustc-hash"
version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "357703d41365b4b27c590e3ed91eabb1b663f07c4c084095e60cbed4362dff0d"
[[package]]
name = "rustix"
version = "0.38.44"
@@ -3789,6 +4053,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2491382039b29b9b11ff08b76ff6c97cf287671dbb74f0be44bda389fffe9bd1"
dependencies = [
"once_cell",
"ring",
"rustls-pki-types",
"rustls-webpki",
"subtle",
@@ -3801,6 +4066,7 @@ version = "1.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "229a4a4c221013e7e1f1a043678c5cc39fe5171437c88fb47151a21e6f5b5c79"
dependencies = [
"web-time",
"zeroize",
]
@@ -3835,7 +4101,7 @@ dependencies = [
"libc",
"log",
"memchr",
"nix",
"nix 0.28.0",
"radix_trie",
"unicode-segmentation",
"unicode-width 0.1.14",
@@ -3916,7 +4182,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3fbf2ae1b8bc8e02df939598064d22402220cd5bbcca1c76f7d6a310974d5615"
dependencies = [
"dyn-clone",
"schemars_derive",
"schemars_derive 0.8.22",
"serde",
"serde_json",
]
@@ -3939,8 +4205,10 @@ version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "82d20c4491bc164fa2f6c5d44565947a52ad80b9505d8e36f8d54c27c739fcd0"
dependencies = [
"chrono",
"dyn-clone",
"ref-cast",
"schemars_derive 1.0.4",
"serde",
"serde_json",
]
@@ -3957,6 +4225,18 @@ dependencies = [
"syn 2.0.104",
]
[[package]]
name = "schemars_derive"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "33d020396d1d138dc19f1165df7545479dcd58d93810dc5d646a16e55abefa80"
dependencies = [
"proc-macro2",
"quote",
"serde_derive_internals",
"syn 2.0.104",
]
[[package]]
name = "scopeguard"
version = "1.2.0"
@@ -4108,7 +4388,7 @@ version = "3.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "de90945e6565ce0d9a25098082ed4ee4002e047cb59892c318d66821e14bb30f"
dependencies = [
"darling",
"darling 0.20.11",
"proc-macro2",
"quote",
"syn 2.0.104",
@@ -4260,6 +4540,19 @@ dependencies = [
"windows-sys 0.59.0",
]
[[package]]
name = "sse-stream"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eb4dc4d33c68ec1f27d386b5610a351922656e1fdf5c05bbaad930cd1519479a"
dependencies = [
"bytes",
"futures-util",
"http-body",
"http-body-util",
"pin-project-lite",
]
[[package]]
name = "stable_deref_trait"
version = "1.2.0"
@@ -4715,6 +5008,21 @@ dependencies = [
"zerovec",
]
[[package]]
name = "tinyvec"
version = "1.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bfa5fdc3bce6191a1dbc8c02d5c8bffcf557bafa17c124c5264a458f1b0613fa"
dependencies = [
"tinyvec_macros",
]
[[package]]
name = "tinyvec_macros"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
[[package]]
name = "tokio"
version = "1.47.1"
@@ -5327,6 +5635,16 @@ dependencies = [
"wasm-bindgen",
]
[[package]]
name = "web-time"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a6580f308b1fad9207618087a65c04e7a10bc77e02c8e84e9b00dd4b12fa0bb"
dependencies = [
"js-sys",
"wasm-bindgen",
]
[[package]]
name = "webbrowser"
version = "1.0.5"
@@ -5343,6 +5661,15 @@ dependencies = [
"web-sys",
]
[[package]]
name = "webpki-roots"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7e8983c3ab33d6fb807cfcdad2491c4ea8cbc8ed839181c7dfd9c67c83e261b2"
dependencies = [
"rustls-pki-types",
]
[[package]]
name = "weezl"
version = "0.1.10"
@@ -5398,6 +5725,28 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
[[package]]
name = "windows"
version = "0.61.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9babd3a767a4c1aef6900409f85f5d53ce2544ccdfaa86dad48c91782c6d6893"
dependencies = [
"windows-collections",
"windows-core",
"windows-future",
"windows-link 0.1.3",
"windows-numerics",
]
[[package]]
name = "windows-collections"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3beeceb5e5cfd9eb1d76b381630e82c4241ccd0d27f1a39ed41b2760b255c5e8"
dependencies = [
"windows-core",
]
[[package]]
name = "windows-core"
version = "0.61.2"
@@ -5411,6 +5760,17 @@ dependencies = [
"windows-strings",
]
[[package]]
name = "windows-future"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc6a41e98427b19fe4b73c550f060b59fa592d7d686537eebf9385621bfbad8e"
dependencies = [
"windows-core",
"windows-link 0.1.3",
"windows-threading",
]
[[package]]
name = "windows-implement"
version = "0.60.0"
@@ -5445,6 +5805,16 @@ version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "45e46c0661abb7180e7b9c281db115305d49ca1709ab8242adf09666d2173c65"
[[package]]
name = "windows-numerics"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9150af68066c4c5c07ddc0ce30421554771e528bde427614c61038bc2c92c2b1"
dependencies = [
"windows-core",
"windows-link 0.1.3",
]
[[package]]
name = "windows-registry"
version = "0.5.3"
@@ -5572,6 +5942,15 @@ dependencies = [
"windows_x86_64_msvc 0.53.0",
]
[[package]]
name = "windows-threading"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b66463ad2e0ea3bbf808b7f1d371311c80e115c0b71d60efc142cafbcfb057a6"
dependencies = [
"windows-link 0.1.3",
]
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.42.2"

View File

@@ -1,6 +1,5 @@
[workspace]
members = [
"agent",
"ansi-escape",
"apply-patch",
"arg0",
@@ -17,8 +16,11 @@ members = [
"mcp-server",
"mcp-types",
"ollama",
"process-hardening",
"protocol",
"protocol-ts",
"rmcp-client",
"responses-api-proxy",
"tui",
"utils/readiness",
]
@@ -34,7 +36,6 @@ edition = "2024"
[workspace.dependencies]
# Internal
codex-agent = { path = "agent" }
codex-ansi-escape = { path = "ansi-escape" }
codex-apply-patch = { path = "apply-patch" }
codex-arg0 = { path = "arg0" }
@@ -49,8 +50,10 @@ codex-login = { path = "login" }
codex-mcp-client = { path = "mcp-client" }
codex-mcp-server = { path = "mcp-server" }
codex-ollama = { path = "ollama" }
codex-process-hardening = { path = "process-hardening" }
codex-protocol = { path = "protocol" }
codex-protocol-ts = { path = "protocol-ts" }
codex-rmcp-client = { path = "rmcp-client" }
codex-tui = { path = "tui" }
codex-utils-readiness = { path = "utils/readiness" }
core_test_support = { path = "core/tests/common" }
@@ -81,6 +84,7 @@ dirs = "6"
dotenvy = "0.15.7"
env-flags = "0.1.1"
env_logger = "0.11.5"
escargot = "0.5"
eventsource-stream = "0.2.3"
futures = "0.3"
icu_decimal = "2.0.0"
@@ -154,6 +158,7 @@ webbrowser = "1.0"
which = "6"
wildmatch = "2.5.0"
wiremock = "0.6"
zeroize = "1.8.1"
[workspace.lints]
rust = {}

View File

@@ -4,18 +4,18 @@ We provide Codex CLI as a standalone, native executable to ensure a zero-depende
## Installing Codex
Today, the easiest way to install Codex is via `npm`, though we plan to publish Codex to other package managers soon.
Today, the easiest way to install Codex is via `npm`:
```shell
npm i -g @openai/codex@native
npm i -g @openai/codex
codex
```
You can also download a platform-specific release directly from our [GitHub Releases](https://github.com/openai/codex/releases).
You can also install via Homebrew (`brew install codex`) or download a platform-specific release directly from our [GitHub Releases](https://github.com/openai/codex/releases).
## What's new in the Rust CLI
While we are [working to close the gap between the TypeScript and Rust implementations of Codex CLI](https://github.com/openai/codex/issues/1262), note that the Rust CLI has a number of features that the TypeScript CLI does not!
The Rust implementation is now the maintained Codex CLI and serves as the default experience. It includes a number of features that the legacy TypeScript CLI never supported.
### Config
@@ -97,7 +97,6 @@ The same setting can be persisted in `~/.codex/config.toml` via the top-level `s
This folder is the root of a Cargo workspace. It contains quite a bit of experimental code, but here are the key crates:
- [`core/`](./core) contains the business logic for Codex. Ultimately, we hope this to be a library crate that is generally useful for building other Rust/native applications that use Codex.
- [`docs/agent_runtime_baseline.md`](./docs/agent_runtime_baseline.md) documents the current agent runtime interfaces (`Codex`, `Session`, `SessionTask`) and links to the ongoing refactor plan in `agent_refactor.md`.
- [`exec/`](./exec) "headless" CLI for use in automation.
- [`tui/`](./tui) CLI that launches a fullscreen TUI built with [Ratatui](https://ratatui.rs/).
- [`cli/`](./cli) CLI multitool that provides the aforementioned CLIs via subcommands.

View File

@@ -1,37 +0,0 @@
[package]
name = "codex-agent"
version.workspace = true
edition.workspace = true
[dependencies]
anyhow = { workspace = true }
async-trait = { workspace = true }
codex-protocol = { workspace = true }
codex-apply-patch = { workspace = true }
mcp-types = { workspace = true }
base64 = { workspace = true }
serde_json = { workspace = true }
libc = { workspace = true }
portable-pty = { workspace = true }
serde = { workspace = true, features = ["derive"] }
sha1 = { workspace = true }
shlex = { workspace = true }
similar = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true, features = ["macros", "process", "rt-multi-thread", "sync", "time"] }
uuid = { workspace = true, features = ["serde", "v4"] }
which = { workspace = true }
wildmatch = { workspace = true }
codex-file-search = { workspace = true }
time = { workspace = true, features = ["formatting", "parsing", "local-offset", "macros"] }
tracing = { workspace = true }
tree-sitter = { workspace = true }
tree-sitter-bash = { workspace = true }
[dev-dependencies]
core_test_support = { workspace = true }
tempfile = { workspace = true }
pretty_assertions = { workspace = true }
[lints]
workspace = true

View File

@@ -1,305 +0,0 @@
//! Shared configuration data structures for Codex runtime and hosts.
//
// This module intentionally focuses on simple data containers without
// business logic so they can be reused across crates.
use std::collections::HashMap;
use std::path::PathBuf;
use std::time::Duration;
use wildmatch::WildMatchPattern;
use serde::Deserialize;
use serde::Deserializer;
use serde::Serialize;
use serde::de::Error as SerdeError;
#[derive(Serialize, Debug, Clone, PartialEq)]
pub struct McpServerConfig {
pub command: String,
#[serde(default)]
pub args: Vec<String>,
#[serde(default)]
pub env: Option<HashMap<String, String>>,
/// Startup timeout in seconds for initializing MCP server & initially listing tools.
#[serde(
default,
with = "option_duration_secs",
skip_serializing_if = "Option::is_none"
)]
pub startup_timeout_sec: Option<Duration>,
/// Default timeout for MCP tool calls initiated via this server.
#[serde(default, with = "option_duration_secs")]
pub tool_timeout_sec: Option<Duration>,
}
impl<'de> Deserialize<'de> for McpServerConfig {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
#[derive(Deserialize)]
struct RawMcpServerConfig {
command: String,
#[serde(default)]
args: Vec<String>,
#[serde(default)]
env: Option<HashMap<String, String>>,
#[serde(default)]
startup_timeout_sec: Option<f64>,
#[serde(default)]
startup_timeout_ms: Option<u64>,
#[serde(default, with = "option_duration_secs")]
tool_timeout_sec: Option<Duration>,
}
let raw = RawMcpServerConfig::deserialize(deserializer)?;
let startup_timeout_sec = match (raw.startup_timeout_sec, raw.startup_timeout_ms) {
(Some(sec), _) => {
let duration = Duration::try_from_secs_f64(sec).map_err(SerdeError::custom)?;
Some(duration)
}
(None, Some(ms)) => Some(Duration::from_millis(ms)),
(None, None) => None,
};
Ok(Self {
command: raw.command,
args: raw.args,
env: raw.env,
startup_timeout_sec,
tool_timeout_sec: raw.tool_timeout_sec,
})
}
}
mod option_duration_secs {
use serde::Deserialize;
use serde::Deserializer;
use serde::Serializer;
use std::time::Duration;
pub fn serialize<S>(value: &Option<Duration>, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match value {
Some(duration) => serializer.serialize_some(&duration.as_secs_f64()),
None => serializer.serialize_none(),
}
}
pub fn deserialize<'de, D>(deserializer: D) -> Result<Option<Duration>, D::Error>
where
D: Deserializer<'de>,
{
let secs = Option::<f64>::deserialize(deserializer)?;
secs.map(|secs| Duration::try_from_secs_f64(secs).map_err(serde::de::Error::custom))
.transpose()
}
}
#[derive(Deserialize, Debug, Copy, Clone, PartialEq)]
pub enum UriBasedFileOpener {
#[serde(rename = "vscode")]
VsCode,
#[serde(rename = "vscode-insiders")]
VsCodeInsiders,
#[serde(rename = "windsurf")]
Windsurf,
#[serde(rename = "cursor")]
Cursor,
/// Option to disable the URI-based file opener.
#[serde(rename = "none")]
None,
}
impl UriBasedFileOpener {
pub fn get_scheme(&self) -> Option<&str> {
match self {
UriBasedFileOpener::VsCode => Some("vscode"),
UriBasedFileOpener::VsCodeInsiders => Some("vscode-insiders"),
UriBasedFileOpener::Windsurf => Some("windsurf"),
UriBasedFileOpener::Cursor => Some("cursor"),
UriBasedFileOpener::None => None,
}
}
}
/// Settings that govern if and what will be written to `~/.codex/history.jsonl`.
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
pub struct History {
/// If true, history entries will not be written to disk.
pub persistence: HistoryPersistence,
/// If set, the maximum size of the history file in bytes.
/// TODO(mbolin): Not currently honored.
pub max_bytes: Option<usize>,
}
#[derive(Deserialize, Debug, Copy, Clone, PartialEq, Default)]
#[serde(rename_all = "kebab-case")]
pub enum HistoryPersistence {
/// Save all history entries to disk.
#[default]
SaveAll,
/// Do not write history to disk.
None,
}
#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
#[serde(untagged)]
pub enum Notifications {
Enabled(bool),
Custom(Vec<String>),
}
impl Default for Notifications {
fn default() -> Self {
Self::Enabled(false)
}
}
/// Collection of settings that are specific to the TUI.
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
pub struct Tui {
/// Enable desktop notifications from the TUI when the terminal is unfocused.
/// Defaults to `false`.
#[serde(default)]
pub notifications: Notifications,
}
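// Illustrative sketch (not part of the original file): because `Notifications`
// is an untagged enum, the `tui.notifications` key accepts either a bool or a
// list of custom notification argv entries. Assumes a `toml` dev-dependency.
#[cfg(test)]
mod tui_notifications_sketch {
use super::*;
#[derive(Deserialize)]
struct Root {
tui: Tui,
}
#[test]
fn accepts_bool_or_custom_list() {
let root: Root = toml::from_str("[tui]\nnotifications = true").unwrap();
assert_eq!(root.tui.notifications, Notifications::Enabled(true));
let root: Root = toml::from_str("[tui]\nnotifications = [\"notify-send\"]").unwrap();
assert_eq!(
root.tui.notifications,
Notifications::Custom(vec!["notify-send".to_string()])
);
}
}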
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
pub struct SandboxWorkspaceWrite {
#[serde(default)]
pub writable_roots: Vec<PathBuf>,
#[serde(default)]
pub network_access: bool,
#[serde(default)]
pub exclude_tmpdir_env_var: bool,
#[serde(default)]
pub exclude_slash_tmp: bool,
}
impl From<SandboxWorkspaceWrite> for codex_protocol::mcp_protocol::SandboxSettings {
fn from(sandbox_workspace_write: SandboxWorkspaceWrite) -> Self {
Self {
writable_roots: sandbox_workspace_write.writable_roots,
network_access: Some(sandbox_workspace_write.network_access),
exclude_tmpdir_env_var: Some(sandbox_workspace_write.exclude_tmpdir_env_var),
exclude_slash_tmp: Some(sandbox_workspace_write.exclude_slash_tmp),
}
}
}
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
#[serde(rename_all = "kebab-case")]
pub enum ShellEnvironmentPolicyInherit {
/// "Core" environment variables for the platform. On UNIX, this would
/// include HOME, LOGNAME, PATH, SHELL, and USER, among others.
Core,
/// Inherits the full environment from the parent process.
#[default]
All,
/// Do not inherit any environment variables from the parent process.
None,
}
/// Policy for building the `env` when spawning a process via either the
/// `shell` or `local_shell` tool.
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
pub struct ShellEnvironmentPolicyToml {
pub inherit: Option<ShellEnvironmentPolicyInherit>,
pub ignore_default_excludes: Option<bool>,
/// List of wildcard patterns (supporting `*` and `?`).
pub exclude: Option<Vec<String>>,
pub r#set: Option<HashMap<String, String>>,
/// List of wildcard patterns (supporting `*` and `?`).
pub include_only: Option<Vec<String>>,
pub experimental_use_profile: Option<bool>,
}
pub type EnvironmentVariablePattern = WildMatchPattern<'*', '?'>;
/// Deriving the `env` based on this policy works as follows:
/// 1. Create an initial map based on the `inherit` policy.
/// 2. If `ignore_default_excludes` is false, filter the map using the default
/// exclude pattern(s), which are: `"*KEY*"` and `"*TOKEN*"`.
/// 3. If `exclude` is not empty, filter the map using the provided patterns.
/// 4. Insert any entries from `r#set` into the map.
/// 5. If non-empty, filter the map using the `include_only` patterns.
#[derive(Debug, Clone, PartialEq, Default)]
pub struct ShellEnvironmentPolicy {
/// Starting point when building the environment.
pub inherit: ShellEnvironmentPolicyInherit,
/// True to skip the default filtering that excludes environment variables
/// whose names contain "KEY" or "TOKEN".
pub ignore_default_excludes: bool,
/// Environment variable names to exclude from the environment.
pub exclude: Vec<EnvironmentVariablePattern>,
/// (key, value) pairs to insert in the environment.
pub r#set: HashMap<String, String>,
/// Environment variable names to retain in the environment.
pub include_only: Vec<EnvironmentVariablePattern>,
/// If true, the shell profile will be used to run the command.
pub use_profile: bool,
}
impl From<ShellEnvironmentPolicyToml> for ShellEnvironmentPolicy {
fn from(toml: ShellEnvironmentPolicyToml) -> Self {
// Default to inheriting the full environment when not specified.
let inherit = toml.inherit.unwrap_or(ShellEnvironmentPolicyInherit::All);
let ignore_default_excludes = toml.ignore_default_excludes.unwrap_or(false);
let exclude = toml
.exclude
.unwrap_or_default()
.into_iter()
.map(|s| EnvironmentVariablePattern::new_case_insensitive(&s))
.collect();
let r#set = toml.r#set.unwrap_or_default();
let include_only = toml
.include_only
.unwrap_or_default()
.into_iter()
.map(|s| EnvironmentVariablePattern::new_case_insensitive(&s))
.collect();
let use_profile = toml.experimental_use_profile.unwrap_or(false);
Self {
inherit,
ignore_default_excludes,
exclude,
r#set,
include_only,
use_profile,
}
}
}
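// Illustrative sketch (not part of the original file): exclude/include
// patterns are compiled case-insensitively, so the default `"*KEY*"` and
// `"*TOKEN*"` excludes catch both upper- and lower-case variable names.
// Assumes wildmatch's `matches` method.
#[cfg(test)]
mod env_pattern_sketch {
use super::*;
#[test]
fn default_exclude_patterns_are_case_insensitive() {
let pattern = EnvironmentVariablePattern::new_case_insensitive("*KEY*");
assert!(pattern.matches("OPENAI_API_KEY"));
assert!(pattern.matches("openai_api_key"));
assert!(!pattern.matches("PATH"));
}
}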
#[derive(Deserialize, Debug, Clone, PartialEq, Eq, Default, Hash)]
#[serde(rename_all = "kebab-case")]
pub enum ReasoningSummaryFormat {
#[default]
None,
Experimental,
}

@@ -1,117 +0,0 @@
use codex_protocol::models::ResponseItem;
/// Transcript of conversation history shared across agent hosts.
#[derive(Debug, Clone, Default)]
pub struct ConversationHistory {
/// Oldest items appear at the start of the vector.
items: Vec<ResponseItem>,
}
impl ConversationHistory {
pub fn new() -> Self {
Self { items: Vec::new() }
}
/// Returns a clone of the stored transcript.
pub fn contents(&self) -> Vec<ResponseItem> {
self.items.clone()
}
/// Records additional response items, filtering out non-API messages.
pub fn record_items<I>(&mut self, items: I)
where
I: IntoIterator,
I::Item: std::ops::Deref<Target = ResponseItem>,
{
for item in items {
if !is_api_message(&item) {
continue;
}
self.items.push(item.clone());
}
}
pub fn replace(&mut self, items: Vec<ResponseItem>) {
self.items = items;
}
}
/// Returns whether the given message is an API message that should be persisted to history.
fn is_api_message(message: &ResponseItem) -> bool {
match message {
ResponseItem::Message { role, .. } => role.as_str() != "system",
ResponseItem::FunctionCallOutput { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::Reasoning { .. }
| ResponseItem::WebSearchCall { .. } => true,
ResponseItem::Other => false,
}
}
#[cfg(test)]
mod tests {
use super::*;
use codex_protocol::models::ContentItem;
fn assistant_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
}
}
fn user_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
}
}
#[test]
fn filters_non_api_messages() {
let mut h = ConversationHistory::default();
let system = ResponseItem::Message {
id: None,
role: "system".to_string(),
content: vec![ContentItem::OutputText {
text: "ignored".to_string(),
}],
};
h.record_items([&system, &ResponseItem::Other]);
let u = user_msg("hi");
let a = assistant_msg("hello");
h.record_items([&u, &a]);
let items = h.contents();
assert_eq!(
items,
vec![
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::OutputText {
text: "hi".to_string()
}]
},
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "hello".to_string()
}]
}
]
);
}
}

@@ -1,11 +0,0 @@
mod exec_command_params;
mod exec_command_session;
mod session_id;
mod session_manager;
pub use exec_command_params::ExecCommandParams;
pub use exec_command_params::WriteStdinParams;
pub use exec_command_session::ExecCommandSession;
pub use session_id::SessionId;
pub use session_manager::ExecCommandOutput;
pub use session_manager::SessionManager as ExecSessionManager;

@@ -1,7 +0,0 @@
use thiserror::Error;
#[derive(Debug, Error, PartialEq)]
pub enum FunctionCallError {
#[error("{0}")]
RespondToModel(String),
}

@@ -1,48 +0,0 @@
pub mod apply_patch;
pub mod bash;
pub mod command_safety;
pub mod config_types;
pub mod conversation_history;
pub mod exec_command;
pub mod function_tool;
pub mod model_family;
pub mod model_provider;
pub mod notifications;
pub mod rollout;
pub mod runtime;
pub mod runtime_config;
pub mod safety;
pub mod sandbox;
pub mod services;
pub mod session_services;
pub mod session_state;
pub mod shell;
pub mod token_data;
pub mod tooling;
pub mod truncate;
pub mod turn_diff_tracker;
pub mod unified_exec;
pub use apply_patch::*;
pub use bash::*;
pub use command_safety::*;
pub use config_types::*;
pub use conversation_history::*;
pub use function_tool::*;
pub use model_family::*;
pub use model_provider::*;
pub use notifications::*;
pub use rollout::*;
pub use runtime::*;
pub use runtime_config::*;
pub use safety::*;
pub use sandbox::*;
pub use services::*;
pub use session_services::*;
pub use session_state::*;
pub use shell::*;
pub use token_data::*;
pub use tooling::*;
pub use truncate::*;
pub use turn_diff_tracker::*;
pub use unified_exec::*;

@@ -1,15 +0,0 @@
use crate::config_types::ReasoningSummaryFormat;
use crate::tooling::ApplyPatchToolType;
/// Metadata describing consistent behaviour across a family of models.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ModelFamily {
pub slug: String,
pub family: String,
pub needs_special_apply_patch_instructions: bool,
pub supports_reasoning_summaries: bool,
pub reasoning_summary_format: ReasoningSummaryFormat,
pub uses_local_shell_tool: bool,
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
pub base_instructions: String,
}

@@ -1,54 +0,0 @@
use std::collections::HashMap;
use codex_protocol::mcp_protocol::AuthMode;
use serde::Deserialize;
use serde::Serialize;
/// Wire protocol variants supported by model providers.
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum WireApi {
Responses,
#[default]
Chat,
}
/// Serializable representation of a provider definition shared across hosts.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
pub struct ModelProviderInfo {
pub name: String,
pub base_url: Option<String>,
pub env_key: Option<String>,
pub env_key_instructions: Option<String>,
#[serde(default)]
pub wire_api: WireApi,
pub query_params: Option<HashMap<String, String>>,
pub http_headers: Option<HashMap<String, String>>,
pub env_http_headers: Option<HashMap<String, String>>,
pub request_max_retries: Option<u64>,
pub stream_max_retries: Option<u64>,
pub stream_idle_timeout_ms: Option<u64>,
#[serde(default)]
pub requires_openai_auth: bool,
}
impl ModelProviderInfo {
pub fn wire_api(&self) -> WireApi {
self.wire_api
}
pub fn requires_auth(&self) -> bool {
self.requires_openai_auth
}
pub fn base_url(&self, auth_mode: AuthMode) -> String {
let fallback = if auth_mode == AuthMode::ChatGPT {
"https://chatgpt.com/backend-api/codex"
} else {
"https://api.openai.com/v1"
};
self.base_url
.clone()
.unwrap_or_else(|| fallback.to_string())
}
}
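// Illustrative sketch (not part of the original file): when `base_url` is not
// configured, the fallback endpoint depends on the auth mode. Assumes a
// `serde_json` dev-dependency and the `AuthMode::ApiKey`/`AuthMode::ChatGPT`
// variants; serde fills the absent Option fields with None.
#[cfg(test)]
mod base_url_fallback_sketch {
use super::*;
#[test]
fn falls_back_per_auth_mode() {
let provider: ModelProviderInfo = serde_json::from_str(r#"{ "name": "openai" }"#).unwrap();
assert_eq!(
provider.base_url(AuthMode::ChatGPT),
"https://chatgpt.com/backend-api/codex"
);
assert_eq!(provider.base_url(AuthMode::ApiKey), "https://api.openai.com/v1");
}
}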

@@ -1,15 +0,0 @@
use serde::Serialize;
/// Cross-host notification payloads emitted by the agent runtime.
#[derive(Debug, Clone, PartialEq, Serialize)]
#[serde(tag = "type", rename_all = "kebab-case")]
pub enum UserNotification {
#[serde(rename_all = "kebab-case")]
AgentTurnComplete {
turn_id: String,
/// Messages submitted by the user to start the turn.
input_messages: Vec<String>,
/// Final assistant message emitted at turn completion.
last_assistant_message: Option<String>,
},
}

@@ -1,11 +0,0 @@
pub const SESSIONS_SUBDIR: &str = "sessions";
pub const ARCHIVED_SESSIONS_SUBDIR: &str = "archived_sessions";
pub mod list;
pub mod policy;
pub mod recorder;
pub use recorder::GitInfoCollector;
pub use recorder::RolloutConfig;
pub use recorder::RolloutRecorder;
pub use recorder::RolloutRecorderParams;

@@ -1,16 +0,0 @@
use async_trait::async_trait;
use codex_protocol::protocol::Event;
use codex_protocol::protocol::Op;
use codex_protocol::protocol::Submission;
/// Minimal async interface for interacting with an agent runtime.
#[async_trait]
pub trait AgentRuntime: Send + Sync {
type Error: std::error::Error + Send + Sync + 'static;
async fn submit(&self, op: Op) -> Result<String, Self::Error>;
async fn submit_with_id(&self, submission: Submission) -> Result<(), Self::Error>;
async fn next_event(&self) -> Result<Event, Self::Error>;
}

@@ -1,46 +0,0 @@
use std::collections::HashMap;
use std::path::PathBuf;
use crate::config_types::History;
use crate::config_types::McpServerConfig;
use crate::config_types::ShellEnvironmentPolicy;
use crate::model_family::ModelFamily;
use crate::model_provider::ModelProviderInfo;
use codex_protocol::config_types::ReasoningEffort;
use codex_protocol::config_types::ReasoningSummary;
use codex_protocol::config_types::Verbosity;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::SandboxPolicy;
/// Configuration surface consumed by the agent runtime regardless of host.
#[derive(Debug, Clone, PartialEq)]
pub struct AgentConfig {
pub model: String,
pub review_model: String,
pub model_family: ModelFamily,
pub model_context_window: Option<u64>,
pub model_auto_compact_token_limit: Option<i64>,
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: ReasoningSummary,
pub model_verbosity: Option<Verbosity>,
pub model_provider: ModelProviderInfo,
pub approval_policy: AskForApproval,
pub sandbox_policy: SandboxPolicy,
pub shell_environment_policy: ShellEnvironmentPolicy,
pub user_instructions: Option<String>,
pub base_instructions: Option<String>,
pub notify: Option<Vec<String>>,
pub cwd: PathBuf,
pub codex_home: PathBuf,
pub history: History,
pub mcp_servers: HashMap<String, McpServerConfig>,
pub include_plan_tool: bool,
pub include_apply_patch_tool: bool,
pub include_view_image_tool: bool,
pub tools_web_search_request: bool,
pub use_experimental_streamable_shell_tool: bool,
pub use_experimental_unified_exec_tool: bool,
pub show_raw_agent_reasoning: bool,
pub codex_linux_sandbox_exe: Option<PathBuf>,
pub project_doc_max_bytes: usize,
}

@@ -1,3 +0,0 @@
pub mod types;
pub use types::SandboxType;

@@ -1,10 +0,0 @@
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum SandboxType {
None,
/// Only available on macOS.
MacosSeatbelt,
/// Only available on Linux.
LinuxSeccomp,
}

@@ -1,138 +0,0 @@
use std::collections::HashMap;
use std::path::PathBuf;
use async_trait::async_trait;
use codex_apply_patch::ApplyPatchAction;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::RolloutItem;
use mcp_types::Tool;
use serde_json::Value;
use crate::exec_command::ExecCommandOutput;
use crate::exec_command::ExecCommandParams;
use crate::exec_command::WriteStdinParams;
use crate::notifications::UserNotification;
use crate::rollout::RolloutRecorder;
use crate::token_data::PlanType;
use crate::unified_exec::UnifiedExecError;
use crate::unified_exec::UnifiedExecRequest;
use crate::unified_exec::UnifiedExecResult;
/// Authentication context made available to the provider layer.
#[async_trait]
pub trait ProviderAuth: Send + Sync {
fn mode(&self) -> AuthMode;
async fn access_token(&self) -> std::io::Result<String>;
fn account_id(&self) -> Option<String>;
fn plan_type(&self) -> Option<PlanType>;
}
/// Provides access to credentials required when talking to model providers.
#[async_trait]
pub trait CredentialsProvider: Send + Sync {
fn auth(&self) -> Option<std::sync::Arc<dyn ProviderAuth>>;
async fn refresh_token(&self) -> std::io::Result<Option<String>>;
}
/// Emits user-facing notifications for turn completion or other events.
pub trait Notifier: Send + Sync {
fn notify(&self, notification: &UserNotification);
}
/// Runtime callbacks for user approval workflows.
#[async_trait]
pub trait ApprovalCoordinator: Send + Sync {
async fn request_patch_approval(
&self,
sub_id: String,
call_id: String,
action: &ApplyPatchAction,
reason: Option<String>,
grant_root: Option<PathBuf>,
) -> ReviewDecision;
async fn request_command_approval(
&self,
sub_id: String,
call_id: String,
command: Vec<String>,
cwd: PathBuf,
reason: Option<String>,
) -> ReviewDecision;
async fn add_approved_command(&self, command: Vec<String>);
}
/// Aggregates and dispatches MCP tool calls across configured servers.
#[async_trait]
pub trait McpInterface: Send + Sync {
fn list_all_tools(&self) -> HashMap<String, Tool>;
fn parse_tool_name(&self, tool_name: &str) -> Option<(String, String)>;
async fn call_tool(
&self,
server: &str,
tool: &str,
arguments: Option<Value>,
) -> anyhow::Result<mcp_types::CallToolResult>;
}
/// Persists rollout events for later inspection or replay.
#[async_trait]
pub trait RolloutSink: Send + Sync {
async fn record_items(&self, items: &[RolloutItem]) -> std::io::Result<()>;
async fn flush(&self) -> std::io::Result<()>;
async fn shutdown(&self) -> std::io::Result<()>;
fn get_rollout_path(&self) -> PathBuf;
}
#[async_trait]
impl RolloutSink for RolloutRecorder {
async fn record_items(&self, items: &[RolloutItem]) -> std::io::Result<()> {
RolloutRecorder::record_items(self, items).await
}
async fn flush(&self) -> std::io::Result<()> {
RolloutRecorder::flush(self).await
}
async fn shutdown(&self) -> std::io::Result<()> {
RolloutRecorder::shutdown(self).await
}
fn get_rollout_path(&self) -> PathBuf {
RolloutRecorder::get_rollout_path(self)
}
}
/// Handles sandboxed exec orchestration, including long-running sessions.
#[async_trait]
pub trait SandboxManager: Send + Sync {
async fn handle_exec_command_request(
&self,
params: ExecCommandParams,
) -> Result<ExecCommandOutput, String>;
async fn handle_write_stdin_request(
&self,
params: WriteStdinParams,
) -> Result<ExecCommandOutput, String>;
async fn handle_unified_exec_request(
&self,
request: UnifiedExecRequest<'_>,
) -> Result<UnifiedExecResult, UnifiedExecError>;
fn codex_linux_sandbox_exe(&self) -> &Option<PathBuf>;
fn user_shell(&self) -> &crate::shell::Shell;
}

@@ -1,18 +0,0 @@
use std::sync::Arc;
use tokio::sync::Mutex;
use crate::services::McpInterface;
use crate::services::Notifier;
use crate::services::RolloutSink;
use crate::services::SandboxManager;
/// Aggregated services that back a running agent session. Hosts provide
/// implementations for these traits and hand them to the runtime at spawn.
pub struct SessionServices {
pub mcp: Arc<dyn McpInterface>,
pub notifier: Arc<dyn Notifier>,
pub sandbox: Arc<dyn SandboxManager>,
pub rollout: Mutex<Option<Arc<dyn RolloutSink>>>,
pub show_raw_agent_reasoning: bool,
}

@@ -1,271 +0,0 @@
use serde::Deserialize;
use serde::Serialize;
use shlex;
use std::path::PathBuf;
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct ZshShell {
pub(crate) shell_path: String,
pub(crate) zshrc_path: String,
}
impl ZshShell {
pub fn new(shell_path: impl Into<String>, zshrc_path: impl Into<String>) -> Self {
Self {
shell_path: shell_path.into(),
zshrc_path: zshrc_path.into(),
}
}
pub fn shell_path(&self) -> &str {
&self.shell_path
}
pub fn zshrc_path(&self) -> &str {
&self.zshrc_path
}
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct BashShell {
pub(crate) shell_path: String,
pub(crate) bashrc_path: String,
}
impl BashShell {
pub fn new(shell_path: impl Into<String>, bashrc_path: impl Into<String>) -> Self {
Self {
shell_path: shell_path.into(),
bashrc_path: bashrc_path.into(),
}
}
pub fn shell_path(&self) -> &str {
&self.shell_path
}
pub fn bashrc_path(&self) -> &str {
&self.bashrc_path
}
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct PowerShellConfig {
pub(crate) exe: String, // Executable name or path, e.g. "pwsh" or "powershell.exe".
pub(crate) bash_exe_fallback: Option<PathBuf>, // In case the model generates a bash command.
}
impl PowerShellConfig {
pub fn new(exe: impl Into<String>, bash_exe_fallback: Option<PathBuf>) -> Self {
Self {
exe: exe.into(),
bash_exe_fallback,
}
}
pub fn exe(&self) -> &str {
&self.exe
}
pub fn bash_exe_fallback(&self) -> Option<&PathBuf> {
self.bash_exe_fallback.as_ref()
}
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub enum Shell {
Zsh(ZshShell),
Bash(BashShell),
PowerShell(PowerShellConfig),
Unknown,
}
impl Shell {
pub fn format_default_shell_invocation(&self, command: Vec<String>) -> Option<Vec<String>> {
match self {
Shell::Zsh(zsh) => format_shell_invocation_with_rc(
command.as_slice(),
&zsh.shell_path,
&zsh.zshrc_path,
),
Shell::Bash(bash) => format_shell_invocation_with_rc(
command.as_slice(),
&bash.shell_path,
&bash.bashrc_path,
),
Shell::PowerShell(ps) => {
// If the model generated a bash command, prefer the detected bash fallback.
if let Some(script) = strip_bash_lc(command.as_slice()) {
return match &ps.bash_exe_fallback {
Some(bash) => Some(vec![
bash.to_string_lossy().to_string(),
"-lc".to_string(),
script,
]),
// No bash fallback → run the script under PowerShell. It will likely fail
// (except for some simple commands), but the error should give the model a
// clue that it is running under PowerShell so it can fix the command on retry.
None => Some(vec![
ps.exe.clone(),
"-NoProfile".to_string(),
"-Command".to_string(),
script,
]),
};
}
// Not a bash command. If the model did not generate a PowerShell command,
// turn it into one.
let first = command.first().map(String::as_str);
if first != Some(ps.exe.as_str()) {
// TODO (CODEX_2900): Handle escaping newlines.
if command.iter().any(|a| a.contains('\n') || a.contains('\r')) {
return Some(command);
}
let joined = shlex::try_join(command.iter().map(String::as_str)).ok();
return joined.map(|arg| {
vec![
ps.exe.clone(),
"-NoProfile".to_string(),
"-Command".to_string(),
arg,
]
});
}
// Model generated a PowerShell command. Run it.
Some(command)
}
Shell::Unknown => None,
}
}
pub fn name(&self) -> Option<String> {
match self {
Shell::Zsh(zsh) => std::path::Path::new(&zsh.shell_path)
.file_name()
.map(|s| s.to_string_lossy().to_string()),
Shell::Bash(bash) => std::path::Path::new(&bash.shell_path)
.file_name()
.map(|s| s.to_string_lossy().to_string()),
Shell::PowerShell(ps) => Some(ps.exe.clone()),
Shell::Unknown => None,
}
}
}
fn format_shell_invocation_with_rc(
command: &[String],
shell_path: &str,
rc_path: &str,
) -> Option<Vec<String>> {
let joined = strip_bash_lc(command)
.or_else(|| shlex::try_join(command.iter().map(String::as_str)).ok())?;
let rc_command = if std::path::Path::new(rc_path).exists() {
format!("source {rc_path} && ({joined})")
} else {
joined
};
Some(vec![shell_path.to_string(), "-lc".to_string(), rc_command])
}
fn strip_bash_lc(command: &[String]) -> Option<String> {
match command {
// exactly three items
[first, second, third]
// first two must be "bash", "-lc"
if first == "bash" && second == "-lc" =>
{
Some(third.clone())
}
_ => None,
}
}
#[cfg(unix)]
fn detect_default_user_shell() -> Shell {
use libc::getpwuid;
use libc::getuid;
use std::ffi::CStr;
unsafe {
let uid = getuid();
let pw = getpwuid(uid);
if !pw.is_null() {
let shell_path = CStr::from_ptr((*pw).pw_shell)
.to_string_lossy()
.into_owned();
let home_path = CStr::from_ptr((*pw).pw_dir).to_string_lossy().into_owned();
if shell_path.ends_with("/zsh") {
return Shell::Zsh(ZshShell {
shell_path,
zshrc_path: format!("{home_path}/.zshrc"),
});
}
if shell_path.ends_with("/bash") {
return Shell::Bash(BashShell {
shell_path,
bashrc_path: format!("{home_path}/.bashrc"),
});
}
}
}
Shell::Unknown
}
#[cfg(unix)]
pub async fn default_user_shell() -> Shell {
detect_default_user_shell()
}
#[cfg(target_os = "windows")]
pub async fn default_user_shell() -> Shell {
use tokio::process::Command;
// Prefer PowerShell 7+ (`pwsh`) if available, otherwise fall back to Windows PowerShell.
let has_pwsh = Command::new("pwsh")
.arg("-NoLogo")
.arg("-NoProfile")
.arg("-Command")
.arg("$PSVersionTable.PSVersion.Major")
.output()
.await
.map(|o| o.status.success())
.unwrap_or(false);
let bash_exe = if Command::new("bash.exe")
.arg("--version")
.output()
.await
.ok()
.map(|o| o.status.success())
.unwrap_or(false)
{
which::which("bash.exe").ok()
} else {
None
};
if has_pwsh {
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: bash_exe,
})
} else {
Shell::PowerShell(PowerShellConfig {
exe: "powershell.exe".to_string(),
bash_exe_fallback: bash_exe,
})
}
}
#[cfg(all(not(target_os = "windows"), not(unix)))]
pub async fn default_user_shell() -> Shell {
Shell::Unknown
}
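// Illustrative sketch (not part of the original file): a `bash -lc <script>`
// command is unwrapped by `strip_bash_lc` and re-run under the configured
// shell; with a nonexistent rc path (assumed here) no `source` prefix is added.
#[cfg(test)]
mod shell_invocation_sketch {
use super::*;
#[test]
fn zsh_unwraps_bash_lc_and_skips_missing_rc() {
let shell = Shell::Zsh(ZshShell::new("/bin/zsh", "/nonexistent/.zshrc"));
let command = vec!["bash".to_string(), "-lc".to_string(), "echo hi".to_string()];
assert_eq!(
shell.format_default_shell_invocation(command),
Some(vec![
"/bin/zsh".to_string(),
"-lc".to_string(),
"echo hi".to_string(),
])
);
}
}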

@@ -1,182 +0,0 @@
use base64::Engine;
use serde::Deserialize;
use serde::Serialize;
use thiserror::Error;
#[derive(Deserialize, Serialize, Clone, Debug, PartialEq, Default)]
pub struct TokenData {
/// Flat info parsed from the JWT in auth.json.
#[serde(
deserialize_with = "deserialize_id_token",
serialize_with = "serialize_id_token"
)]
pub id_token: IdTokenInfo,
/// This is a JWT.
pub access_token: String,
pub refresh_token: String,
pub account_id: Option<String>,
}
/// Flat subset of useful claims in id_token from auth.json.
#[derive(Debug, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
pub struct IdTokenInfo {
pub email: Option<String>,
/// The ChatGPT subscription plan type
/// (e.g., "free", "plus", "pro", "business", "enterprise", "edu").
/// (Note: values may vary by backend.)
pub chatgpt_plan_type: Option<PlanType>,
pub raw_jwt: String,
}
impl IdTokenInfo {
pub fn get_chatgpt_plan_type(&self) -> Option<String> {
self.chatgpt_plan_type.as_ref().map(|t| match t {
PlanType::Known(plan) => format!("{plan:?}"),
PlanType::Unknown(s) => s.clone(),
})
}
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(untagged)]
pub enum PlanType {
Known(KnownPlan),
Unknown(String),
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum KnownPlan {
Free,
Plus,
Pro,
Team,
Business,
Enterprise,
Edu,
}
#[derive(Deserialize)]
struct IdClaims {
#[serde(default)]
email: Option<String>,
#[serde(rename = "https://api.openai.com/auth", default)]
auth: Option<AuthClaims>,
}
#[derive(Deserialize)]
struct AuthClaims {
#[serde(default)]
chatgpt_plan_type: Option<PlanType>,
}
#[derive(Debug, Error)]
pub enum IdTokenInfoError {
#[error("invalid ID token format")]
InvalidFormat,
#[error(transparent)]
Base64(#[from] base64::DecodeError),
#[error(transparent)]
Json(#[from] serde_json::Error),
}
pub fn parse_id_token(id_token: &str) -> Result<IdTokenInfo, IdTokenInfoError> {
// JWT format: header.payload.signature
let mut parts = id_token.split('.');
let (_header_b64, payload_b64, _sig_b64) = match (parts.next(), parts.next(), parts.next()) {
(Some(h), Some(p), Some(s)) if !h.is_empty() && !p.is_empty() && !s.is_empty() => (h, p, s),
_ => return Err(IdTokenInfoError::InvalidFormat),
};
let payload_bytes = base64::engine::general_purpose::URL_SAFE_NO_PAD.decode(payload_b64)?;
let claims: IdClaims = serde_json::from_slice(&payload_bytes)?;
Ok(IdTokenInfo {
email: claims.email,
chatgpt_plan_type: claims.auth.and_then(|a| a.chatgpt_plan_type),
raw_jwt: id_token.to_string(),
})
}
fn deserialize_id_token<'de, D>(deserializer: D) -> Result<IdTokenInfo, D::Error>
where
D: serde::Deserializer<'de>,
{
let s = String::deserialize(deserializer)?;
parse_id_token(&s).map_err(serde::de::Error::custom)
}
fn serialize_id_token<S>(id_token: &IdTokenInfo, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serializer.serialize_str(&id_token.raw_jwt)
}
#[cfg(test)]
mod tests {
use super::*;
use serde::Serialize;
#[test]
fn id_token_info_parses_email_and_plan() {
#[derive(Serialize)]
struct Header {
alg: &'static str,
typ: &'static str,
}
let header = Header {
alg: "none",
typ: "JWT",
};
let payload = serde_json::json!({
"email": "user@example.com",
"https://api.openai.com/auth": {
"chatgpt_plan_type": "pro"
}
});
fn b64url_no_pad(bytes: &[u8]) -> String {
base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(bytes)
}
let header_b64 = b64url_no_pad(&serde_json::to_vec(&header).unwrap());
let payload_b64 = b64url_no_pad(&serde_json::to_vec(&payload).unwrap());
let signature_b64 = b64url_no_pad(b"sig");
let fake_jwt = format!("{header_b64}.{payload_b64}.{signature_b64}");
let info = parse_id_token(&fake_jwt).expect("should parse");
assert_eq!(info.email.as_deref(), Some("user@example.com"));
assert_eq!(info.get_chatgpt_plan_type().as_deref(), Some("Pro"));
}
#[test]
fn id_token_info_handles_missing_fields() {
#[derive(Serialize)]
struct Header {
alg: &'static str,
typ: &'static str,
}
let header = Header {
alg: "none",
typ: "JWT",
};
let payload = serde_json::json!({ "sub": "123" });
fn b64url_no_pad(bytes: &[u8]) -> String {
base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(bytes)
}
let header_b64 = b64url_no_pad(&serde_json::to_vec(&header).unwrap());
let payload_b64 = b64url_no_pad(&serde_json::to_vec(&payload).unwrap());
let signature_b64 = b64url_no_pad(b"sig");
let fake_jwt = format!("{header_b64}.{payload_b64}.{signature_b64}");
let info = parse_id_token(&fake_jwt).expect("should parse");
assert!(info.email.is_none());
assert!(info.get_chatgpt_plan_type().is_none());
}
}

@@ -1,10 +0,0 @@
use serde::Deserialize;
use serde::Serialize;
/// Represents which apply_patch tool variant a model expects.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[serde(rename_all = "snake_case")]
pub enum ApplyPatchToolType {
Freeform,
Function,
}

@@ -1,180 +0,0 @@
//! Utilities for truncating large chunks of output while preserving a prefix
//! and suffix on UTF-8 boundaries.
/// Truncate the middle of a UTF-8 string to at most `max_bytes` bytes,
/// preserving the beginning and the end. Returns the possibly truncated
/// string and `Some(original_token_count)` (estimated at 4 bytes/token)
/// if truncation occurred; otherwise returns the original string and `None`.
pub fn truncate_middle(s: &str, max_bytes: usize) -> (String, Option<u64>) {
if s.len() <= max_bytes {
return (s.to_string(), None);
}
let est_tokens = (s.len() as u64).div_ceil(4);
if max_bytes == 0 {
return (format!("…{est_tokens} tokens truncated…"), Some(est_tokens));
}
fn truncate_on_boundary(input: &str, max_len: usize) -> &str {
if input.len() <= max_len {
return input;
}
let mut end = max_len;
while end > 0 && !input.is_char_boundary(end) {
end -= 1;
}
&input[..end]
}
fn pick_prefix_end(s: &str, left_budget: usize) -> usize {
if let Some(head) = s.get(..left_budget)
&& let Some(i) = head.rfind('\n')
{
return i + 1;
}
truncate_on_boundary(s, left_budget).len()
}
fn pick_suffix_start(s: &str, right_budget: usize) -> usize {
let start_tail = s.len().saturating_sub(right_budget);
if let Some(tail) = s.get(start_tail..)
&& let Some(i) = tail.find('\n')
{
return start_tail + i + 1;
}
let mut idx = start_tail.min(s.len());
while idx < s.len() && !s.is_char_boundary(idx) {
idx += 1;
}
idx
}
let mut guess_tokens = est_tokens;
for _ in 0..4 {
let marker = format!("…{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
return (format!("…{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;
let prefix_end = pick_prefix_end(s, left_budget);
let mut suffix_start = pick_suffix_start(s, right_budget);
if suffix_start < prefix_end {
suffix_start = prefix_end;
}
let kept_content_bytes = prefix_end + (s.len() - suffix_start);
let truncated_content_bytes = s.len().saturating_sub(kept_content_bytes);
let new_tokens = (truncated_content_bytes as u64).div_ceil(4);
if new_tokens == guess_tokens {
let mut out = String::with_capacity(marker_len + kept_content_bytes + 1);
out.push_str(&s[..prefix_end]);
out.push_str(&marker);
out.push('\n');
out.push_str(&s[suffix_start..]);
return (out, Some(est_tokens));
}
guess_tokens = new_tokens;
}
let marker = format!("…{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
return (format!("…{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;
let prefix_end = pick_prefix_end(s, left_budget);
let suffix_start = pick_suffix_start(s, right_budget);
let mut out = String::with_capacity(marker_len + prefix_end + (s.len() - suffix_start) + 1);
out.push_str(&s[..prefix_end]);
out.push_str(&marker);
out.push('\n');
out.push_str(&s[suffix_start..]);
(out, Some(est_tokens))
}
#[cfg(test)]
mod tests {
use super::truncate_middle;
#[test]
fn truncate_middle_no_newlines_fallback() {
let s = "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ*";
let max_bytes = 32;
let (out, original) = truncate_middle(s, max_bytes);
assert!(out.starts_with("abc"));
assert!(out.contains("tokens truncated"));
assert!(out.ends_with("XYZ*"));
assert_eq!(original, Some((s.len() as u64).div_ceil(4)));
}
#[test]
fn truncate_middle_prefers_newline_boundaries() {
let mut s = String::new();
for i in 1..=20 {
s.push_str(&format!("{i:03}\n"));
}
assert_eq!(s.len(), 80);
let max_bytes = 64;
let (out, tokens) = truncate_middle(&s, max_bytes);
assert!(out.starts_with("001\n002\n003\n004\n"));
assert!(out.contains("tokens truncated"));
assert!(out.ends_with("017\n018\n019\n020\n"));
assert_eq!(tokens, Some(20));
}
#[test]
fn truncate_middle_handles_utf8_content() {
let s = "😀😀😀😀😀😀😀😀😀😀\nsecond line with ascii text\n";
let max_bytes = 32;
let (out, tokens) = truncate_middle(s, max_bytes);
assert!(out.contains("tokens truncated"));
assert!(!out.contains('\u{fffd}'));
assert_eq!(tokens, Some((s.len() as u64).div_ceil(4)));
}
#[test]
fn truncate_middle_prefers_newline_boundaries_2() {
// Build a multi-line string of 20 numbered lines (each "NNN\n").
let mut s = String::new();
for i in 1..=20 {
s.push_str(&format!("{i:03}\n"));
}
// Total length: 20 lines * 4 bytes per line = 80 bytes.
assert_eq!(s.len(), 80);
// Choose a cap that forces truncation while leaving room for
// a few lines on each side after accounting for the marker.
let max_bytes = 64;
// Expect exact output: first 4 lines, marker, last 4 lines, and correct token estimate (80/4 = 20).
assert_eq!(
truncate_middle(&s, max_bytes),
(
r#"001
002
003
004
…12 tokens truncated…
017
018
019
020
"#
.to_string(),
Some(20)
)
);
}
}

@@ -1,896 +0,0 @@
use std::collections::HashMap;
use std::fs;
use std::path::Path;
use std::path::PathBuf;
use std::process::Command;
use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use sha1::digest::Output;
use uuid::Uuid;
use codex_protocol::protocol::FileChange;
const ZERO_OID: &str = "0000000000000000000000000000000000000000";
const DEV_NULL: &str = "/dev/null";
struct BaselineFileInfo {
path: PathBuf,
content: Vec<u8>,
mode: FileMode,
oid: String,
}
/// Tracks sets of changes to files and exposes the overall unified diff.
/// Internally, this works as follows:
/// 1. Maintain an in-memory baseline snapshot of files when they are first seen.
/// For new additions, do not create a baseline so that diffs are shown as proper additions (using /dev/null).
/// 2. Keep a stable internal filename (uuid) per external path for rename tracking.
/// 3. To compute the aggregated unified diff, compare each baseline snapshot to the current file on disk entirely in-memory
/// using the `similar` crate and emit unified diffs with rewritten external paths.
#[derive(Default)]
pub struct TurnDiffTracker {
/// Map external path -> internal filename (uuid).
external_to_temp_name: HashMap<PathBuf, String>,
/// Internal filename -> baseline file info.
baseline_file_info: HashMap<String, BaselineFileInfo>,
/// Internal filename -> external path as of current accumulated state (after applying all changes).
/// This is where renames are tracked.
temp_name_to_current_path: HashMap<String, PathBuf>,
/// Cache of known git worktree roots to avoid repeated filesystem walks.
git_root_cache: Vec<PathBuf>,
}
impl TurnDiffTracker {
pub fn new() -> Self {
Self::default()
}
/// Front-run apply patch calls to track the starting contents of any modified files.
/// - Creates an in-memory baseline snapshot for files that already exist on disk when first seen.
/// - For additions, we intentionally do not create a baseline snapshot so that diffs are proper additions.
/// - Also updates internal mappings for move/rename events.
pub fn on_patch_begin(&mut self, changes: &HashMap<PathBuf, FileChange>) {
for (path, change) in changes.iter() {
// Ensure a stable internal filename exists for this external path.
if !self.external_to_temp_name.contains_key(path.as_path()) {
let internal = Uuid::new_v4().to_string();
self.external_to_temp_name
.insert(path.clone(), internal.clone());
self.temp_name_to_current_path
.insert(internal.clone(), path.clone());
// If the file exists on disk now, snapshot as baseline; else leave missing to represent /dev/null.
let baseline_file_info = if path.exists() {
let mode = file_mode_for_path(path);
let mode_val = mode.unwrap_or(FileMode::Regular);
let content = blob_bytes(path, mode_val).unwrap_or_default();
let oid = if mode == Some(FileMode::Symlink) {
format!("{:x}", git_blob_sha1_hex_bytes(&content))
} else {
self.git_blob_oid_for_path(path)
.unwrap_or_else(|| format!("{:x}", git_blob_sha1_hex_bytes(&content)))
};
Some(BaselineFileInfo {
path: path.clone(),
content,
mode: mode_val,
oid,
})
} else {
Some(BaselineFileInfo {
path: path.clone(),
content: vec![],
mode: FileMode::Regular,
oid: ZERO_OID.to_string(),
})
};
if let Some(baseline_file_info) = baseline_file_info {
self.baseline_file_info
.insert(internal.clone(), baseline_file_info);
}
}
// Track rename/move in current mapping if provided in an Update.
if let FileChange::Update {
move_path: Some(dest),
..
} = change
{
let uuid_filename = match self.external_to_temp_name.get(path.as_path()) {
Some(i) => i.clone(),
None => {
// This should be rare, but if we haven't mapped the source, create it with no baseline.
let i = Uuid::new_v4().to_string();
self.baseline_file_info.insert(
i.clone(),
BaselineFileInfo {
path: path.clone(),
content: vec![],
mode: FileMode::Regular,
oid: ZERO_OID.to_string(),
},
);
i
}
};
// Update current external mapping for temp file name.
self.temp_name_to_current_path
.insert(uuid_filename.clone(), dest.clone());
// Update forward file_mapping: external current -> internal name.
self.external_to_temp_name.remove(path);
self.external_to_temp_name
.insert(dest.clone(), uuid_filename);
};
}
}
fn get_path_for_internal(&self, internal: &str) -> Option<PathBuf> {
self.temp_name_to_current_path
.get(internal)
.cloned()
.or_else(|| {
self.baseline_file_info
.get(internal)
.map(|info| info.path.clone())
})
}
/// Find the git worktree root for a file/directory by walking up to the first ancestor containing a `.git` entry.
/// Uses a simple cache of known roots and avoids negative-result caching for simplicity.
fn find_git_root_cached(&mut self, start: &Path) -> Option<PathBuf> {
let dir = if start.is_dir() {
start
} else {
start.parent()?
};
// Fast path: if any cached root is an ancestor of this path, use it.
if let Some(root) = self
.git_root_cache
.iter()
.find(|r| dir.starts_with(r))
.cloned()
{
return Some(root);
}
// Walk up to find a `.git` marker.
let mut cur = dir.to_path_buf();
loop {
let git_marker = cur.join(".git");
if git_marker.is_dir() || git_marker.is_file() {
if !self.git_root_cache.iter().any(|r| r == &cur) {
self.git_root_cache.push(cur.clone());
}
return Some(cur);
}
// On Windows, avoid walking above the drive or UNC share root.
#[cfg(windows)]
{
if is_windows_drive_or_unc_root(&cur) {
return None;
}
}
if let Some(parent) = cur.parent() {
cur = parent.to_path_buf();
} else {
return None;
}
}
}
/// Return a display string for `path` relative to its git root if found, else absolute.
fn relative_to_git_root_str(&mut self, path: &Path) -> String {
let s = if let Some(root) = self.find_git_root_cached(path) {
if let Ok(rel) = path.strip_prefix(&root) {
rel.display().to_string()
} else {
path.display().to_string()
}
} else {
path.display().to_string()
};
s.replace('\\', "/")
}
/// Ask git to compute the blob SHA-1 for the file at `path` within its repository.
/// Returns None if no repository is found or git invocation fails.
fn git_blob_oid_for_path(&mut self, path: &Path) -> Option<String> {
let root = self.find_git_root_cached(path)?;
// Compute a path relative to the repo root for better portability across platforms.
let rel = path.strip_prefix(&root).unwrap_or(path);
let output = Command::new("git")
.arg("-C")
.arg(&root)
.arg("hash-object")
.arg("--")
.arg(rel)
.output()
.ok()?;
if !output.status.success() {
return None;
}
let s = String::from_utf8_lossy(&output.stdout).trim().to_string();
if s.len() == 40 { Some(s) } else { None }
}
/// Recompute the aggregated unified diff by comparing all of the in-memory snapshots that were
/// collected before the first time they were touched by apply_patch during this turn with
/// the current repo state.
pub fn get_unified_diff(&mut self) -> Result<Option<String>> {
let mut aggregated = String::new();
// Compute diffs per tracked internal file in a stable order by external path.
let mut baseline_file_names: Vec<String> =
self.baseline_file_info.keys().cloned().collect();
// Sort lexicographically by full repo-relative path to match git behavior.
baseline_file_names.sort_by_key(|internal| {
self.get_path_for_internal(internal)
.map(|p| self.relative_to_git_root_str(&p))
.unwrap_or_default()
});
for internal in baseline_file_names {
aggregated.push_str(self.get_file_diff(&internal).as_str());
if !aggregated.ends_with('\n') {
aggregated.push('\n');
}
}
if aggregated.trim().is_empty() {
Ok(None)
} else {
Ok(Some(aggregated))
}
}
fn get_file_diff(&mut self, internal_file_name: &str) -> String {
let mut aggregated = String::new();
// Snapshot lightweight fields only.
let (baseline_external_path, baseline_mode, left_oid) = {
if let Some(info) = self.baseline_file_info.get(internal_file_name) {
(info.path.clone(), info.mode, info.oid.clone())
} else {
(PathBuf::new(), FileMode::Regular, ZERO_OID.to_string())
}
};
let current_external_path = match self.get_path_for_internal(internal_file_name) {
Some(p) => p,
None => return aggregated,
};
let current_mode = file_mode_for_path(&current_external_path).unwrap_or(FileMode::Regular);
let right_bytes = blob_bytes(&current_external_path, current_mode);
// Compute displays with &mut self before borrowing any baseline content.
let left_display = self.relative_to_git_root_str(&baseline_external_path);
let right_display = self.relative_to_git_root_str(&current_external_path);
// Compute right oid before borrowing baseline content.
let right_oid = if let Some(b) = right_bytes.as_ref() {
if current_mode == FileMode::Symlink {
format!("{:x}", git_blob_sha1_hex_bytes(b))
} else {
self.git_blob_oid_for_path(&current_external_path)
.unwrap_or_else(|| format!("{:x}", git_blob_sha1_hex_bytes(b)))
}
} else {
ZERO_OID.to_string()
};
// Borrow baseline content only after all &mut self uses are done.
let left_present = left_oid.as_str() != ZERO_OID;
let left_bytes: Option<&[u8]> = if left_present {
self.baseline_file_info
.get(internal_file_name)
.map(|i| i.content.as_slice())
} else {
None
};
// Fast path: identical bytes or both missing.
if left_bytes == right_bytes.as_deref() {
return aggregated;
}
aggregated.push_str(&format!("diff --git a/{left_display} b/{right_display}\n"));
let is_add = !left_present && right_bytes.is_some();
let is_delete = left_present && right_bytes.is_none();
if is_add {
aggregated.push_str(&format!("new file mode {current_mode}\n"));
} else if is_delete {
aggregated.push_str(&format!("deleted file mode {baseline_mode}\n"));
} else if baseline_mode != current_mode {
aggregated.push_str(&format!("old mode {baseline_mode}\n"));
aggregated.push_str(&format!("new mode {current_mode}\n"));
}
let left_text = left_bytes.and_then(|b| std::str::from_utf8(b).ok());
let right_text = right_bytes
.as_deref()
.and_then(|b| std::str::from_utf8(b).ok());
let can_text_diff = matches!(
(left_text, right_text, is_add, is_delete),
(Some(_), Some(_), _, _) | (_, Some(_), true, _) | (Some(_), _, _, true)
);
if can_text_diff {
let l = left_text.unwrap_or("");
let r = right_text.unwrap_or("");
aggregated.push_str(&format!("index {left_oid}..{right_oid}\n"));
let old_header = if left_present {
format!("a/{left_display}")
} else {
DEV_NULL.to_string()
};
let new_header = if right_bytes.is_some() {
format!("b/{right_display}")
} else {
DEV_NULL.to_string()
};
let diff = similar::TextDiff::from_lines(l, r);
let unified = diff
.unified_diff()
.context_radius(3)
.header(&old_header, &new_header)
.to_string();
aggregated.push_str(&unified);
} else {
aggregated.push_str(&format!("index {left_oid}..{right_oid}\n"));
let old_header = if left_present {
format!("a/{left_display}")
} else {
DEV_NULL.to_string()
};
let new_header = if right_bytes.is_some() {
format!("b/{right_display}")
} else {
DEV_NULL.to_string()
};
aggregated.push_str(&format!("--- {old_header}\n"));
aggregated.push_str(&format!("+++ {new_header}\n"));
aggregated.push_str("Binary files differ\n");
}
aggregated
}
}
/// Compute the Git SHA-1 blob object ID for the given content (bytes).
fn git_blob_sha1_hex_bytes(data: &[u8]) -> Output<sha1::Sha1> {
// Git blob hash is sha1 of: "blob <len>\0<data>"
let header = format!("blob {}\0", data.len());
use sha1::Digest;
let mut hasher = sha1::Sha1::new();
hasher.update(header.as_bytes());
hasher.update(data);
hasher.finalize()
}
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum FileMode {
Regular,
#[cfg(unix)]
Executable,
Symlink,
}
impl FileMode {
fn as_str(self) -> &'static str {
match self {
FileMode::Regular => "100644",
#[cfg(unix)]
FileMode::Executable => "100755",
FileMode::Symlink => "120000",
}
}
}
impl std::fmt::Display for FileMode {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(self.as_str())
}
}
#[cfg(unix)]
fn file_mode_for_path(path: &Path) -> Option<FileMode> {
use std::os::unix::fs::PermissionsExt;
let meta = fs::symlink_metadata(path).ok()?;
let ft = meta.file_type();
if ft.is_symlink() {
return Some(FileMode::Symlink);
}
let mode = meta.permissions().mode();
let is_exec = (mode & 0o111) != 0;
Some(if is_exec {
FileMode::Executable
} else {
FileMode::Regular
})
}
#[cfg(not(unix))]
fn file_mode_for_path(_path: &Path) -> Option<FileMode> {
// Default to non-executable on non-unix.
Some(FileMode::Regular)
}
fn blob_bytes(path: &Path, mode: FileMode) -> Option<Vec<u8>> {
if path.exists() {
let contents = if mode == FileMode::Symlink {
symlink_blob_bytes(path)
.ok_or_else(|| anyhow!("failed to read symlink target for {}", path.display()))
} else {
fs::read(path)
.with_context(|| format!("failed to read current file for diff {}", path.display()))
};
contents.ok()
} else {
None
}
}
#[cfg(unix)]
fn symlink_blob_bytes(path: &Path) -> Option<Vec<u8>> {
use std::os::unix::ffi::OsStrExt;
let target = std::fs::read_link(path).ok()?;
Some(target.as_os_str().as_bytes().to_vec())
}
#[cfg(not(unix))]
fn symlink_blob_bytes(_path: &Path) -> Option<Vec<u8>> {
None
}
#[cfg(windows)]
fn is_windows_drive_or_unc_root(p: &std::path::Path) -> bool {
use std::path::Component;
let mut comps = p.components();
matches!(
(comps.next(), comps.next(), comps.next()),
(Some(Component::Prefix(_)), Some(Component::RootDir), None)
)
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
use tempfile::tempdir;
/// Compute the Git SHA-1 blob object ID for the given content (string).
/// This delegates to the bytes version to avoid UTF-8 lossy conversions here.
fn git_blob_sha1_hex(data: &str) -> String {
format!("{:x}", git_blob_sha1_hex_bytes(data.as_bytes()))
}
fn normalize_diff_for_test(input: &str, root: &Path) -> String {
let root_str = root.display().to_string().replace('\\', "/");
let replaced = input.replace(&root_str, "<TMP>");
// Split into blocks on lines starting with "diff --git ", sort blocks for determinism, and rejoin
let mut blocks: Vec<String> = Vec::new();
let mut current = String::new();
for line in replaced.lines() {
if line.starts_with("diff --git ") && !current.is_empty() {
blocks.push(current);
current = String::new();
}
if !current.is_empty() {
current.push('\n');
}
current.push_str(line);
}
if !current.is_empty() {
blocks.push(current);
}
blocks.sort();
let mut out = blocks.join("\n");
if !out.ends_with('\n') {
out.push('\n');
}
out
}
#[test]
fn accumulates_add_and_update() {
let mut acc = TurnDiffTracker::new();
let dir = tempdir().unwrap();
let file = dir.path().join("a.txt");
// First patch: add file (baseline should be /dev/null).
let add_changes = HashMap::from([(
file.clone(),
FileChange::Add {
content: "foo\n".to_string(),
},
)]);
acc.on_patch_begin(&add_changes);
// Simulate apply: create the file on disk.
fs::write(&file, "foo\n").unwrap();
let first = acc.get_unified_diff().unwrap().unwrap();
let first = normalize_diff_for_test(&first, dir.path());
let expected_first = {
let mode = file_mode_for_path(&file).unwrap_or(FileMode::Regular);
let right_oid = git_blob_sha1_hex("foo\n");
format!(
r#"diff --git a/<TMP>/a.txt b/<TMP>/a.txt
new file mode {mode}
index {ZERO_OID}..{right_oid}
--- {DEV_NULL}
+++ b/<TMP>/a.txt
@@ -0,0 +1 @@
+foo
"#,
)
};
assert_eq!(first, expected_first);
// Second patch: update the file on disk.
let update_changes = HashMap::from([(
file.clone(),
FileChange::Update {
unified_diff: "".to_owned(),
move_path: None,
},
)]);
acc.on_patch_begin(&update_changes);
// Simulate apply: append a new line.
fs::write(&file, "foo\nbar\n").unwrap();
let combined = acc.get_unified_diff().unwrap().unwrap();
let combined = normalize_diff_for_test(&combined, dir.path());
let expected_combined = {
let mode = file_mode_for_path(&file).unwrap_or(FileMode::Regular);
let right_oid = git_blob_sha1_hex("foo\nbar\n");
format!(
r#"diff --git a/<TMP>/a.txt b/<TMP>/a.txt
new file mode {mode}
index {ZERO_OID}..{right_oid}
--- {DEV_NULL}
+++ b/<TMP>/a.txt
@@ -0,0 +1,2 @@
+foo
+bar
"#,
)
};
assert_eq!(combined, expected_combined);
}
#[test]
fn accumulates_delete() {
let dir = tempdir().unwrap();
let file = dir.path().join("b.txt");
fs::write(&file, "x\n").unwrap();
let mut acc = TurnDiffTracker::new();
let del_changes = HashMap::from([(
file.clone(),
FileChange::Delete {
content: "x\n".to_string(),
},
)]);
acc.on_patch_begin(&del_changes);
// Simulate apply: delete the file from disk.
let baseline_mode = file_mode_for_path(&file).unwrap_or(FileMode::Regular);
fs::remove_file(&file).unwrap();
let diff = acc.get_unified_diff().unwrap().unwrap();
let diff = normalize_diff_for_test(&diff, dir.path());
let expected = {
let left_oid = git_blob_sha1_hex("x\n");
format!(
r#"diff --git a/<TMP>/b.txt b/<TMP>/b.txt
deleted file mode {baseline_mode}
index {left_oid}..{ZERO_OID}
--- a/<TMP>/b.txt
+++ {DEV_NULL}
@@ -1 +0,0 @@
-x
"#,
)
};
assert_eq!(diff, expected);
}
#[test]
fn accumulates_move_and_update() {
let dir = tempdir().unwrap();
let src = dir.path().join("src.txt");
let dest = dir.path().join("dst.txt");
fs::write(&src, "line\n").unwrap();
let mut acc = TurnDiffTracker::new();
let mv_changes = HashMap::from([(
src.clone(),
FileChange::Update {
unified_diff: "".to_owned(),
move_path: Some(dest.clone()),
},
)]);
acc.on_patch_begin(&mv_changes);
// Simulate apply: move and update content.
fs::rename(&src, &dest).unwrap();
fs::write(&dest, "line2\n").unwrap();
let out = acc.get_unified_diff().unwrap().unwrap();
let out = normalize_diff_for_test(&out, dir.path());
let expected = {
let left_oid = git_blob_sha1_hex("line\n");
let right_oid = git_blob_sha1_hex("line2\n");
format!(
r#"diff --git a/<TMP>/src.txt b/<TMP>/dst.txt
index {left_oid}..{right_oid}
--- a/<TMP>/src.txt
+++ b/<TMP>/dst.txt
@@ -1 +1 @@
-line
+line2
"#
)
};
assert_eq!(out, expected);
}
#[test]
fn move_without_change_yields_no_diff() {
let dir = tempdir().unwrap();
let src = dir.path().join("moved.txt");
let dest = dir.path().join("renamed.txt");
fs::write(&src, "same\n").unwrap();
let mut acc = TurnDiffTracker::new();
let mv_changes = HashMap::from([(
src.clone(),
FileChange::Update {
unified_diff: "".to_owned(),
move_path: Some(dest.clone()),
},
)]);
acc.on_patch_begin(&mv_changes);
// Simulate apply: move only, no content change.
fs::rename(&src, &dest).unwrap();
let diff = acc.get_unified_diff().unwrap();
assert_eq!(diff, None);
}
#[test]
fn move_declared_but_file_only_appears_at_dest_is_add() {
let dir = tempdir().unwrap();
let src = dir.path().join("src.txt");
let dest = dir.path().join("dest.txt");
let mut acc = TurnDiffTracker::new();
let mv = HashMap::from([(
src,
FileChange::Update {
unified_diff: "".into(),
move_path: Some(dest.clone()),
},
)]);
acc.on_patch_begin(&mv);
// No file existed initially; create only dest
fs::write(&dest, "hello\n").unwrap();
let diff = acc.get_unified_diff().unwrap().unwrap();
let diff = normalize_diff_for_test(&diff, dir.path());
let expected = {
let mode = file_mode_for_path(&dest).unwrap_or(FileMode::Regular);
let right_oid = git_blob_sha1_hex("hello\n");
format!(
r#"diff --git a/<TMP>/src.txt b/<TMP>/dest.txt
new file mode {mode}
index {ZERO_OID}..{right_oid}
--- {DEV_NULL}
+++ b/<TMP>/dest.txt
@@ -0,0 +1 @@
+hello
"#,
)
};
assert_eq!(diff, expected);
}
#[test]
fn update_persists_across_new_baseline_for_new_file() {
let dir = tempdir().unwrap();
let a = dir.path().join("a.txt");
let b = dir.path().join("b.txt");
fs::write(&a, "foo\n").unwrap();
fs::write(&b, "z\n").unwrap();
let mut acc = TurnDiffTracker::new();
// First: update existing a.txt (baseline snapshot is created for a).
let update_a = HashMap::from([(
a.clone(),
FileChange::Update {
unified_diff: "".to_owned(),
move_path: None,
},
)]);
acc.on_patch_begin(&update_a);
// Simulate apply: modify a.txt on disk.
fs::write(&a, "foo\nbar\n").unwrap();
let first = acc.get_unified_diff().unwrap().unwrap();
let first = normalize_diff_for_test(&first, dir.path());
let expected_first = {
let left_oid = git_blob_sha1_hex("foo\n");
let right_oid = git_blob_sha1_hex("foo\nbar\n");
format!(
r#"diff --git a/<TMP>/a.txt b/<TMP>/a.txt
index {left_oid}..{right_oid}
--- a/<TMP>/a.txt
+++ b/<TMP>/a.txt
@@ -1 +1,2 @@
foo
+bar
"#
)
};
assert_eq!(first, expected_first);
// Next: introduce a brand-new path b.txt into baseline snapshots via a delete change.
let del_b = HashMap::from([(
b.clone(),
FileChange::Delete {
content: "z\n".to_string(),
},
)]);
acc.on_patch_begin(&del_b);
// Simulate apply: delete b.txt.
let baseline_mode = file_mode_for_path(&b).unwrap_or(FileMode::Regular);
fs::remove_file(&b).unwrap();
let combined = acc.get_unified_diff().unwrap().unwrap();
let combined = normalize_diff_for_test(&combined, dir.path());
let expected = {
let left_oid_a = git_blob_sha1_hex("foo\n");
let right_oid_a = git_blob_sha1_hex("foo\nbar\n");
let left_oid_b = git_blob_sha1_hex("z\n");
format!(
r#"diff --git a/<TMP>/a.txt b/<TMP>/a.txt
index {left_oid_a}..{right_oid_a}
--- a/<TMP>/a.txt
+++ b/<TMP>/a.txt
@@ -1 +1,2 @@
foo
+bar
diff --git a/<TMP>/b.txt b/<TMP>/b.txt
deleted file mode {baseline_mode}
index {left_oid_b}..{ZERO_OID}
--- a/<TMP>/b.txt
+++ {DEV_NULL}
@@ -1 +0,0 @@
-z
"#,
)
};
assert_eq!(combined, expected);
}
#[test]
fn binary_files_differ_update() {
let dir = tempdir().unwrap();
let file = dir.path().join("bin.dat");
// Initial non-UTF8 bytes
let left_bytes: Vec<u8> = vec![0xff, 0xfe, 0xfd, 0x00];
// Updated non-UTF8 bytes
let right_bytes: Vec<u8> = vec![0x01, 0x02, 0x03, 0x00];
fs::write(&file, &left_bytes).unwrap();
let mut acc = TurnDiffTracker::new();
let update_changes = HashMap::from([(
file.clone(),
FileChange::Update {
unified_diff: "".to_owned(),
move_path: None,
},
)]);
acc.on_patch_begin(&update_changes);
// Apply update on disk
fs::write(&file, &right_bytes).unwrap();
let diff = acc.get_unified_diff().unwrap().unwrap();
let diff = normalize_diff_for_test(&diff, dir.path());
let expected = {
let left_oid = format!("{:x}", git_blob_sha1_hex_bytes(&left_bytes));
let right_oid = format!("{:x}", git_blob_sha1_hex_bytes(&right_bytes));
format!(
r#"diff --git a/<TMP>/bin.dat b/<TMP>/bin.dat
index {left_oid}..{right_oid}
--- a/<TMP>/bin.dat
+++ b/<TMP>/bin.dat
Binary files differ
"#
)
};
assert_eq!(diff, expected);
}
#[test]
fn filenames_with_spaces_add_and_update() {
let mut acc = TurnDiffTracker::new();
let dir = tempdir().unwrap();
let file = dir.path().join("name with spaces.txt");
// First patch: add file (baseline should be /dev/null).
let add_changes = HashMap::from([(
file.clone(),
FileChange::Add {
content: "foo\n".to_string(),
},
)]);
acc.on_patch_begin(&add_changes);
// Simulate apply: create the file on disk.
fs::write(&file, "foo\n").unwrap();
let first = acc.get_unified_diff().unwrap().unwrap();
let first = normalize_diff_for_test(&first, dir.path());
let expected_first = {
let mode = file_mode_for_path(&file).unwrap_or(FileMode::Regular);
let right_oid = git_blob_sha1_hex("foo\n");
format!(
r#"diff --git a/<TMP>/name with spaces.txt b/<TMP>/name with spaces.txt
new file mode {mode}
index {ZERO_OID}..{right_oid}
--- {DEV_NULL}
+++ b/<TMP>/name with spaces.txt
@@ -0,0 +1 @@
+foo
"#,
)
};
assert_eq!(first, expected_first);
// Second patch: update the file on disk.
let update_changes = HashMap::from([(
file.clone(),
FileChange::Update {
unified_diff: "".to_owned(),
move_path: None,
},
)]);
acc.on_patch_begin(&update_changes);
// Simulate apply: append a new line with a space.
fs::write(&file, "foo\nbar baz\n").unwrap();
let combined = acc.get_unified_diff().unwrap().unwrap();
let combined = normalize_diff_for_test(&combined, dir.path());
let expected_combined = {
let mode = file_mode_for_path(&file).unwrap_or(FileMode::Regular);
let right_oid = git_blob_sha1_hex("foo\nbar baz\n");
format!(
r#"diff --git a/<TMP>/name with spaces.txt b/<TMP>/name with spaces.txt
new file mode {mode}
index {ZERO_OID}..{right_oid}
--- {DEV_NULL}
+++ b/<TMP>/name with spaces.txt
@@ -0,0 +1,2 @@
+foo
+bar baz
"#,
)
};
assert_eq!(combined, expected_combined);
}
}

View File

@@ -1,112 +0,0 @@
# Agent Runtime Refactor
## Goals
- Decouple the Codex agent loop from CLI-specific wiring so it can run as a reusable library or standalone binary.
- Preserve the current behaviour of `codex-core` (tooling, approvals, sandboxing, MCP integration) while providing a cleaner embedding surface.
- Enable specialised hosts—CLI, training harnesses, Response API bridges—to share the same runtime with minimal glue code.
## Proposed Architecture
### 1. `codex-agent` crate (new)
- Owns the session runtime: `AgentRuntime`, `AgentHandle`, the session state modules, and the task runners now under `core/src/tasks`.
- Exposes a queue-like API: `AgentHandle::submit(Op/Submission)` and `AgentHandle::next_event()` mirroring today's behaviour (sketched below).
- Re-exports protocol types from `codex-protocol` so consumers do not depend on the entire `codex-core` tree.
- Houses the agent loop (`run_task`, `run_turn`, exec/safety plumbing) together with the sandbox planner (`ExecPlan`, `PreparedExec`, etc.).
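A minimal sketch of that queue-like surface, assuming the `async_channel` queues mentioned under transport adapters; the field layout and error handling are illustrative, not the crate's actual definitions:

```rust
use codex_protocol::protocol::{Event, Op};

/// Illustrative handle over the runtime's bounded queues.
pub struct AgentHandle {
    tx: async_channel::Sender<Op>,
    rx: async_channel::Receiver<Event>,
}

impl AgentHandle {
    /// Enqueue an operation for the agent loop to process.
    pub async fn submit(&self, op: Op) -> anyhow::Result<()> {
        Ok(self.tx.send(op).await?)
    }

    /// Await the next event emitted by the agent loop.
    pub async fn next_event(&self) -> anyhow::Result<Event> {
        Ok(self.rx.recv().await?)
    }
}
```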
### 2. Shared configuration surface
- Introduce `AgentConfig` as the minimal runtime configuration (model, provider, approvals, sandbox defaults, cwd, user/base instructions, feature flags relevant to the loop).
- Provide `From<&Config>` for CLI compatibility; training/other hosts construct `AgentConfig` directly (see the sketch below).
- CLI-only concerns (logging, auth prompts, workspace presets) stay inside `codex-core` and are translated before spawning the runtime.
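For illustration, a non-CLI host would build the runtime config directly rather than converting from the CLI `Config` (the full field list appears in the `From<&Config>` impl later in this diff; the `Default` impl assumed here is not guaranteed):

```rust
use codex_agent::AgentConfig;
use codex_protocol::protocol::{AskForApproval, SandboxPolicy};
use std::path::PathBuf;

// Sketch only: a training harness constructs the minimal config directly.
// `..AgentConfig::default()` assumes a `Default` impl for brevity.
fn training_host_config(cwd: PathBuf) -> AgentConfig {
    AgentConfig {
        model: "gpt-5-codex".to_string(),
        cwd,
        approval_policy: AskForApproval::Never,
        sandbox_policy: SandboxPolicy::new_read_only_policy(),
        ..AgentConfig::default()
    }
}
```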
### 3. Service abstraction layer
- Define traits that the runtime depends on instead of concrete CLI structs:
- `CredentialsProvider` (wraps `AuthManager`).
- `Notifier` (reuses `UserNotifier` contract).
- `McpInterface` (start/list tools, dispatch tool calls).
- `SandboxManager` (wraps `BackendRegistry`/`prepare_exec_invocation` wiring).
- `RolloutSink` (write/flush rollout items; default no-op, sketched below).
- Provide default implementations in `codex-core` that simply wrap the existing services (`SessionServices`).
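A sketch of the intended trait shape, using `RolloutSink` as the smallest example; the method names are assumptions, and `RolloutItem` is assumed to be the protocol crate's rollout type:

```rust
use async_trait::async_trait;
use codex_protocol::protocol::RolloutItem;

#[async_trait]
pub trait RolloutSink: Send + Sync {
    /// Persist a batch of rollout items.
    async fn record(&self, items: &[RolloutItem]) -> anyhow::Result<()>;
    /// Flush any buffered items, e.g. on shutdown.
    async fn flush(&self) -> anyhow::Result<()>;
}

/// Default used by hosts that do not need persistence.
pub struct NoopRolloutSink;

#[async_trait]
impl RolloutSink for NoopRolloutSink {
    async fn record(&self, _items: &[RolloutItem]) -> anyhow::Result<()> {
        Ok(())
    }
    async fn flush(&self) -> anyhow::Result<()> {
        Ok(())
    }
}
```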
### 4. Task subsystem consolidation
- Keep the new `SessionTask` trait and concrete tasks (`RegularTask`, `ReviewTask`, `CompactTask`) inside `codex-agent` so custom hosts can opt into additional tasks without touching CLI crates (trait shape sketched below).
- Ensure task lifecycle management (`spawn_task`, `abort_all_tasks`, `ActiveTurn`) stays encapsulated in the runtime and surfaces only high-level signals (events, cancellation APIs).
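A possible shape for the trait, with parameter types assumed from the runtime internals referenced in this plan; the exact signature is an assumption:

```rust
use async_trait::async_trait;
use std::sync::Arc;

// `Session` and `TurnContext` are the runtime types named elsewhere in
// this plan; this signature is illustrative, not the crate's definition.
#[async_trait]
pub trait SessionTask: Send + Sync {
    /// Drive the task to completion. The returned text becomes the final
    /// assistant output carried by the `TaskComplete` event.
    async fn run(
        self: Arc<Self>,
        session: Arc<Session>,
        turn_context: Arc<TurnContext>,
        sub_id: String,
    ) -> Option<String>;
}
```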
### 5. Sandbox execution layer
- Move the recently created `core/src/sandbox` module into `codex-agent` (or re-export it) so the runtime owns exec planning.
- Runtime exposes an injectable `SandboxRuntimeConfig` (paths, seatbelt binary, stdout streaming choice) and calls into `SandboxManager` to execute plans.
- Respect existing environment variables and approval policies; no semantic changes to seatbelt handling.
### 6. Host integrations
- CLI crate: replaces direct usage of `Codex::spawn` with `AgentRuntime::spawn`, adapting CLI config/auth providers to runtime traits. Behaviour remains identical.
- Training binary (`codex-agent-bin`): thin crate that parses CLI flags (Response API URL, auth token, optional instructions) and bridges remote Ops/Events to the runtime via a chosen transport (MCP channel, HTTP/WebSocket bridge).
- Additional hosts can embed the runtime by implementing the service traits and providing transport glue.
### 7. Transport adapters
- Internally keep `async_channel` for runtime queues.
- Provide helper adapters (`AgentTransport` trait) so callers can hook streams (local channel, TCP bridge, etc.) while keeping backpressure and graceful shutdown semantics consistent.
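One possible shape for that adapter trait; the method names are illustrative:

```rust
use async_trait::async_trait;
use codex_protocol::protocol::{Event, Op};

#[async_trait]
pub trait AgentTransport: Send + Sync {
    /// Deliver an inbound operation to the runtime queue, observing
    /// backpressure from the bounded channel.
    async fn send_op(&self, op: Op) -> anyhow::Result<()>;

    /// Receive the next outbound event; `None` signals that the runtime
    /// has shut down and the adapter should wind down gracefully.
    async fn recv_event(&self) -> Option<Event>;
}
```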
## Guidelines
- **Config boundary**: new code must depend on `AgentConfig`; only CLI/front-ends may use the broader `Config` struct. Avoid adding CLI-specific fields to the runtime config.
- **Trait-based services**: any runtime dependency that could vary across hosts (MCP, rollout persistence, sandbox execution, notifications) should be expressed as a trait with a default implementation living in `codex-core`.
- **Task authoring**: additional tasks must implement `SessionTask`; tasks are responsible for calling `run_task`/`exit_review_mode` helpers and returning final assistant output for `TaskComplete` events.
- **Sandbox safety**: all exec/patch calls must flow through `plan_exec`/`plan_apply_patch` (now under `codex-agent::sandbox`) to preserve approval semantics. Never bypass `SandboxManager`.
- **MCP usage**: runtime talks only through `McpInterface`; hosts provide concrete connectors (existing CLI manager, lightweight training stub, etc.).
- **Rollout handling**: default `RolloutSink` should no-op; hosts that require persistence (CLI, evaluation harness) supply an implementation that wraps existing recorder.
- **Transport/backpressure**: treat the runtime queue as bounded and handle cancellations; adapters must propagate `Op::Shutdown` promptly (see the sketch after this list).
- **Observability**: keep tracing instrumentation intact; new modules should use existing `tracing` spans for start/end of tasks, exec calls, and MCP interactions.
- **Code quality**: write minimalist, idiomatic code that leverages the strengths of Rust's type system and ownership model.
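For the transport/backpressure guideline, a sketch of prompt shutdown propagation in an adapter pump; `Op::Shutdown` is the real protocol variant, while `AgentHandle` reuses the illustrative type sketched under §1:

```rust
use codex_protocol::protocol::Op;

// Forward host-side ops into the runtime and stop pumping as soon as a
// shutdown request has been submitted.
async fn pump_ops(
    handle: &AgentHandle,
    inbound: async_channel::Receiver<Op>,
) -> anyhow::Result<()> {
    while let Ok(op) = inbound.recv().await {
        let is_shutdown = matches!(op, Op::Shutdown);
        handle.submit(op).await?;
        if is_shutdown {
            break;
        }
    }
    Ok(())
}
```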
## Current Scope Snapshot
- `codex-agent` owns the execution/runtime surface: conversation history, rollout recording, function tool plumbing, sandbox planning, command/apply_patch safety, and the new `ApprovalCoordinator` trait that abstracts user approvals. Host-agnostic helpers such as shell formatting, bash parsing, and command safety now live here.
- `codex-core` focuses on CLI integration: loading user configuration, wiring concrete services (auth, MCP, sandbox manager), translating CLI policies into runtime configs, and exposing the embedded runtime to front-ends. It re-exports runtime modules needed by existing callers but should avoid hosting new agent logic.
- Session bootstrap now flows through a host-provided `prepare_session_bootstrap` helper: the CLI constructs rollout/MCP/sandbox services, builds the new `codex_agent::SessionServices` + `SessionState`, pre-builds the initial `TurnContext` (model client + tool config), and hands them to `Session::new` instead of constructing them inline.
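A sketch of that hand-off, assuming shapes for everything except the names the snapshot itself uses (`prepare_session_bootstrap`, `codex_agent::SessionServices`, `SessionState`, `TurnContext`, `Session::new`):

```rust
use std::sync::Arc;

// All field and parameter shapes here are hypothetical illustrations.
async fn bootstrap_session(config: &Config) -> anyhow::Result<Arc<Session>> {
    // Host-provided: wires rollout/MCP/sandbox services for this session.
    let bootstrap = prepare_session_bootstrap(config).await?;
    Session::new(
        bootstrap.services,     // codex_agent::SessionServices
        bootstrap.state,        // SessionState
        bootstrap.turn_context, // pre-built initial TurnContext (client + tools)
    )
    .await
}
```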
## Implementation Plan
1. **Baseline & documentation**
- Capture current interfaces (`Codex`, `Session`, `SessionTask`) and update developer docs to reference this refactor plan.
- Add smoke tests covering multi-task scenarios (regular + review + compact) to guard against regressions during extraction.
2. **Introduce `AgentConfig`**
- Define struct + conversion helpers inside `codex-core`.
- Refactor internal `Session::new` / `TurnContext` builders to accept `AgentConfig` without changing external behaviour.
3. **Service trait extraction**
- Carve out trait definitions (`CredentialsProvider`, `McpInterface`, `SandboxManager`, `RolloutSink`, `Notifier`).
- Provide adapters backed by existing `SessionServices`.
- Update `Session` and helper modules to depend on traits rather than concrete structs.
4. **Create `codex-agent` crate**
- Scaffold crate, move runtime modules (`codex.rs`, `state`, `tasks`, `sandbox`) while keeping module paths stable via `pub use` re-exports.
- Resolve module imports to reference trait abstractions / helper crates (e.g., `codex_protocol`, `codex-apply-patch`).
- Ensure crate exposes `AgentRuntime`, `AgentHandle`, and service traits.
5. **Adapt `codex-core`**
- Replace `Codex::spawn` with thin wrapper that constructs `AgentConfig`, runtime service adapters, and delegates to `codex-agent`.
- Update public API to re-export runtime types if downstream crates expect them.
- Confirm unit tests continue to pass.
6. **Update front-ends**
- CLI crate: switch to new runtime API; verify login/auth flows, approvals, and sandbox invocations.
- Other binaries (`chatgpt`, etc.) migrate similarly, adjusting imports/config conversions.
7. **Add training binary**
- Implement new `codex-agent-bin` crate providing CLI for Response API URL + auth.
- Reuse existing MCP client logic where possible; otherwise, provide a minimal HTTP bridge translating Ops/Events.
- Add integration tests using mocked Response API.
8. **Refine transport adapters**
- Add optional helper module offering channel/TCP/WebSocket adapters along with graceful shutdown behaviour.
- Document how hosts select or implement transports.
9. **Finalize rollout persistence strategy**
- Implement `RolloutSink` adapters (file-based, in-memory, disabled).
- Ensure CLI wires existing recorder; training binary can opt in/out via flags.
10. **Docs & polish**
- Update repository documentation (`README`, architecture docs) to reference the new crates and APIs.
- Record migration notes for downstream consumers.
- Run `just fmt`, scoped `just fix -p`, and targeted tests for touched crates before merging.
11. **Validation**
- Execute `cargo test -p codex-agent`, `cargo test -p codex-core`, and full suite (`cargo test --all-features`) once shared crates change.
- Perform manual verification: CLI session, review task, training binary against mock Response API, ensuring approvals and sandboxing behave identically.

View File

@@ -25,6 +25,7 @@ codex-core = { workspace = true }
codex-exec = { workspace = true }
codex-login = { workspace = true }
codex-mcp-server = { workspace = true }
codex-process-hardening = { workspace = true }
codex-protocol = { workspace = true }
codex-protocol-ts = { workspace = true }
codex-tui = { workspace = true }
@@ -42,15 +43,6 @@ tokio = { workspace = true, features = [
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
[target.'cfg(target_os = "linux")'.dependencies]
libc = { workspace = true }
[target.'cfg(target_os = "android")'.dependencies]
libc = { workspace = true }
[target.'cfg(target_os = "macos")'.dependencies]
libc = { workspace = true }
[dev-dependencies]
assert_cmd = { workspace = true }
predicates = { workspace = true }

View File

@@ -21,7 +21,6 @@ use std::path::PathBuf;
use supports_color::Stream;
mod mcp_cmd;
mod pre_main_hardening;
use crate::mcp_cmd::McpCli;
use crate::proto::ProtoCli;
@@ -182,7 +181,7 @@ fn format_exit_messages(exit_info: AppExitInfo, color_enabled: bool) -> Vec<Stri
} else {
resume_cmd
};
lines.push(format!("To continue this session, run {command}."));
lines.push(format!("To continue this session, run {command}"));
}
lines
@@ -207,14 +206,7 @@ fn pre_main_hardening() {
};
if secure_mode == "1" {
#[cfg(any(target_os = "linux", target_os = "android"))]
crate::pre_main_hardening::pre_main_hardening_linux();
#[cfg(target_os = "macos")]
crate::pre_main_hardening::pre_main_hardening_macos();
#[cfg(windows)]
crate::pre_main_hardening::pre_main_hardening_windows();
codex_process_hardening::pre_main_hardening();
}
// Always clear this env var so child processes don't inherit it.
@@ -489,7 +481,7 @@ mod tests {
lines,
vec![
"Token usage: total=2 input=0 output=2".to_string(),
"To continue this session, run codex resume 123e4567-e89b-12d3-a456-426614174000."
"To continue this session, run codex resume 123e4567-e89b-12d3-a456-426614174000"
.to_string(),
]
);

View File

@@ -1,4 +1,3 @@
use std::collections::BTreeMap;
use std::collections::HashMap;
use std::path::PathBuf;
@@ -13,6 +12,7 @@ use codex_core::config::find_codex_home;
use codex_core::config::load_global_mcp_servers;
use codex_core::config::write_global_mcp_servers;
use codex_core::config_types::McpServerConfig;
use codex_core::config_types::McpServerTransportConfig;
/// [experimental] Launch Codex as an MCP server or manage configured MCP servers.
///
@@ -145,9 +145,11 @@ fn run_add(config_overrides: &CliConfigOverrides, add_args: AddArgs) -> Result<(
.with_context(|| format!("failed to load MCP servers from {}", codex_home.display()))?;
let new_entry = McpServerConfig {
command: command_bin,
args: command_args,
env: env_map,
transport: McpServerTransportConfig::Stdio {
command: command_bin,
args: command_args,
env: env_map,
},
startup_timeout_sec: None,
tool_timeout_sec: None,
};
@@ -201,16 +203,25 @@ fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Resul
let json_entries: Vec<_> = entries
.into_iter()
.map(|(name, cfg)| {
let env = cfg.env.as_ref().map(|env| {
env.iter()
.map(|(k, v)| (k.clone(), v.clone()))
.collect::<BTreeMap<_, _>>()
});
let transport = match &cfg.transport {
McpServerTransportConfig::Stdio { command, args, env } => serde_json::json!({
"type": "stdio",
"command": command,
"args": args,
"env": env,
}),
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
serde_json::json!({
"type": "streamable_http",
"url": url,
"bearer_token": bearer_token,
})
}
};
serde_json::json!({
"name": name,
"command": cfg.command,
"args": cfg.args,
"env": env,
"transport": transport,
"startup_timeout_sec": cfg
.startup_timeout_sec
.map(|timeout| timeout.as_secs_f64()),
@@ -230,62 +241,111 @@ fn run_list(config_overrides: &CliConfigOverrides, list_args: ListArgs) -> Resul
return Ok(());
}
let mut rows: Vec<[String; 4]> = Vec::new();
let mut stdio_rows: Vec<[String; 4]> = Vec::new();
let mut http_rows: Vec<[String; 3]> = Vec::new();
for (name, cfg) in entries {
let args = if cfg.args.is_empty() {
"-".to_string()
} else {
cfg.args.join(" ")
};
let env = match cfg.env.as_ref() {
None => "-".to_string(),
Some(map) if map.is_empty() => "-".to_string(),
Some(map) => {
let mut pairs: Vec<_> = map.iter().collect();
pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
pairs
.into_iter()
.map(|(k, v)| format!("{k}={v}"))
.collect::<Vec<_>>()
.join(", ")
match &cfg.transport {
McpServerTransportConfig::Stdio { command, args, env } => {
let args_display = if args.is_empty() {
"-".to_string()
} else {
args.join(" ")
};
let env_display = match env.as_ref() {
None => "-".to_string(),
Some(map) if map.is_empty() => "-".to_string(),
Some(map) => {
let mut pairs: Vec<_> = map.iter().collect();
pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
pairs
.into_iter()
.map(|(k, v)| format!("{k}={v}"))
.collect::<Vec<_>>()
.join(", ")
}
};
stdio_rows.push([name.clone(), command.clone(), args_display, env_display]);
}
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
let has_bearer = if bearer_token.is_some() {
"True"
} else {
"False"
};
http_rows.push([name.clone(), url.clone(), has_bearer.into()]);
}
};
rows.push([name.clone(), cfg.command.clone(), args, env]);
}
let mut widths = ["Name".len(), "Command".len(), "Args".len(), "Env".len()];
for row in &rows {
for (i, cell) in row.iter().enumerate() {
widths[i] = widths[i].max(cell.len());
}
}
println!(
"{:<name_w$} {:<cmd_w$} {:<args_w$} {:<env_w$}",
"Name",
"Command",
"Args",
"Env",
name_w = widths[0],
cmd_w = widths[1],
args_w = widths[2],
env_w = widths[3],
);
if !stdio_rows.is_empty() {
let mut widths = ["Name".len(), "Command".len(), "Args".len(), "Env".len()];
for row in &stdio_rows {
for (i, cell) in row.iter().enumerate() {
widths[i] = widths[i].max(cell.len());
}
}
for row in rows {
println!(
"{:<name_w$} {:<cmd_w$} {:<args_w$} {:<env_w$}",
row[0],
row[1],
row[2],
row[3],
"Name",
"Command",
"Args",
"Env",
name_w = widths[0],
cmd_w = widths[1],
args_w = widths[2],
env_w = widths[3],
);
for row in &stdio_rows {
println!(
"{:<name_w$} {:<cmd_w$} {:<args_w$} {:<env_w$}",
row[0],
row[1],
row[2],
row[3],
name_w = widths[0],
cmd_w = widths[1],
args_w = widths[2],
env_w = widths[3],
);
}
}
if !stdio_rows.is_empty() && !http_rows.is_empty() {
println!();
}
if !http_rows.is_empty() {
let mut widths = ["Name".len(), "Url".len(), "Has Bearer Token".len()];
for row in &http_rows {
for (i, cell) in row.iter().enumerate() {
widths[i] = widths[i].max(cell.len());
}
}
println!(
"{:<name_w$} {:<url_w$} {:<token_w$}",
"Name",
"Url",
"Has Bearer Token",
name_w = widths[0],
url_w = widths[1],
token_w = widths[2],
);
for row in &http_rows {
println!(
"{:<name_w$} {:<url_w$} {:<token_w$}",
row[0],
row[1],
row[2],
name_w = widths[0],
url_w = widths[1],
token_w = widths[2],
);
}
}
Ok(())
@@ -301,16 +361,22 @@ fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Result<(
};
if get_args.json {
let env = server.env.as_ref().map(|env| {
env.iter()
.map(|(k, v)| (k.clone(), v.clone()))
.collect::<BTreeMap<_, _>>()
});
let transport = match &server.transport {
McpServerTransportConfig::Stdio { command, args, env } => serde_json::json!({
"type": "stdio",
"command": command,
"args": args,
"env": env,
}),
McpServerTransportConfig::StreamableHttp { url, bearer_token } => serde_json::json!({
"type": "streamable_http",
"url": url,
"bearer_token": bearer_token,
}),
};
let output = serde_json::to_string_pretty(&serde_json::json!({
"name": get_args.name,
"command": server.command,
"args": server.args,
"env": env,
"transport": transport,
"startup_timeout_sec": server
.startup_timeout_sec
.map(|timeout| timeout.as_secs_f64()),
@@ -323,27 +389,38 @@ fn run_get(config_overrides: &CliConfigOverrides, get_args: GetArgs) -> Result<(
}
println!("{}", get_args.name);
println!(" command: {}", server.command);
let args = if server.args.is_empty() {
"-".to_string()
} else {
server.args.join(" ")
};
println!(" args: {args}");
let env_display = match server.env.as_ref() {
None => "-".to_string(),
Some(map) if map.is_empty() => "-".to_string(),
Some(map) => {
let mut pairs: Vec<_> = map.iter().collect();
pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
pairs
.into_iter()
.map(|(k, v)| format!("{k}={v}"))
.collect::<Vec<_>>()
.join(", ")
match &server.transport {
McpServerTransportConfig::Stdio { command, args, env } => {
println!(" transport: stdio");
println!(" command: {command}");
let args_display = if args.is_empty() {
"-".to_string()
} else {
args.join(" ")
};
println!(" args: {args_display}");
let env_display = match env.as_ref() {
None => "-".to_string(),
Some(map) if map.is_empty() => "-".to_string(),
Some(map) => {
let mut pairs: Vec<_> = map.iter().collect();
pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
pairs
.into_iter()
.map(|(k, v)| format!("{k}={v}"))
.collect::<Vec<_>>()
.join(", ")
}
};
println!(" env: {env_display}");
}
};
println!(" env: {env_display}");
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
println!(" transport: streamable_http");
println!(" url: {url}");
let bearer = bearer_token.as_deref().unwrap_or("-");
println!(" bearer_token: {bearer}");
}
}
if let Some(timeout) = server.startup_timeout_sec {
println!(" startup_timeout_sec: {}", timeout.as_secs_f64());
}

View File

@@ -2,6 +2,7 @@ use std::path::Path;
use anyhow::Result;
use codex_core::config::load_global_mcp_servers;
use codex_core::config_types::McpServerTransportConfig;
use predicates::str::contains;
use pretty_assertions::assert_eq;
use tempfile::TempDir;
@@ -26,9 +27,14 @@ fn add_and_remove_server_updates_global_config() -> Result<()> {
let servers = load_global_mcp_servers(codex_home.path())?;
assert_eq!(servers.len(), 1);
let docs = servers.get("docs").expect("server should exist");
assert_eq!(docs.command, "echo");
assert_eq!(docs.args, vec!["hello".to_string()]);
assert!(docs.env.is_none());
match &docs.transport {
McpServerTransportConfig::Stdio { command, args, env } => {
assert_eq!(command, "echo");
assert_eq!(args, &vec!["hello".to_string()]);
assert!(env.is_none());
}
other => panic!("unexpected transport: {other:?}"),
}
let mut remove_cmd = codex_command(codex_home.path())?;
remove_cmd
@@ -76,7 +82,10 @@ fn add_with_env_preserves_key_order_and_values() -> Result<()> {
let servers = load_global_mcp_servers(codex_home.path())?;
let envy = servers.get("envy").expect("server should exist");
let env = envy.env.as_ref().expect("env should be present");
let env = match &envy.transport {
McpServerTransportConfig::Stdio { env: Some(env), .. } => env,
other => panic!("unexpected transport: {other:?}"),
};
assert_eq!(env.len(), 2);
assert_eq!(env.get("FOO"), Some(&"bar".to_string()));

View File

@@ -4,6 +4,7 @@ use anyhow::Result;
use predicates::str::contains;
use pretty_assertions::assert_eq;
use serde_json::Value as JsonValue;
use serde_json::json;
use tempfile::TempDir;
fn codex_command(codex_home: &Path) -> Result<assert_cmd::Command> {
@@ -58,38 +59,35 @@ fn list_and_get_render_expected_output() -> Result<()> {
assert!(json_output.status.success());
let stdout = String::from_utf8(json_output.stdout)?;
let parsed: JsonValue = serde_json::from_str(&stdout)?;
let array = parsed.as_array().expect("expected array");
assert_eq!(array.len(), 1);
let entry = &array[0];
assert_eq!(entry.get("name"), Some(&JsonValue::String("docs".into())));
assert_eq!(
entry.get("command"),
Some(&JsonValue::String("docs-server".into()))
);
let args = entry
.get("args")
.and_then(|v| v.as_array())
.expect("args array");
assert_eq!(
args,
&vec![
JsonValue::String("--port".into()),
JsonValue::String("4000".into())
parsed,
json!([
{
"name": "docs",
"transport": {
"type": "stdio",
"command": "docs-server",
"args": [
"--port",
"4000"
],
"env": {
"TOKEN": "secret"
}
},
"startup_timeout_sec": null,
"tool_timeout_sec": null
}
]
)
);
let env = entry
.get("env")
.and_then(|v| v.as_object())
.expect("env map");
assert_eq!(env.get("TOKEN"), Some(&JsonValue::String("secret".into())));
let mut get_cmd = codex_command(codex_home.path())?;
let get_output = get_cmd.args(["mcp", "get", "docs"]).output()?;
assert!(get_output.status.success());
let stdout = String::from_utf8(get_output.stdout)?;
assert!(stdout.contains("docs"));
assert!(stdout.contains("transport: stdio"));
assert!(stdout.contains("command: docs-server"));
assert!(stdout.contains("args: --port 4000"));
assert!(stdout.contains("env: TOKEN=secret"));

View File

@@ -29,7 +29,7 @@ const PRESETS: &[ModelPreset] = &[
label: "gpt-5-codex medium",
description: "",
model: "gpt-5-codex",
effort: None,
effort: Some(ReasoningEffort::Medium),
},
ModelPreset {
id: "gpt-5-codex-high",

View File

@@ -20,9 +20,9 @@ base64 = { workspace = true }
bytes = { workspace = true }
chrono = { workspace = true, features = ["serde"] }
codex-apply-patch = { workspace = true }
codex-agent = { workspace = true }
codex-file-search = { workspace = true }
codex-mcp-client = { workspace = true }
codex-rmcp-client = { workspace = true }
codex-protocol = { workspace = true }
dirs = { workspace = true }
env-flags = { workspace = true }
@@ -61,6 +61,8 @@ tokio-util = { workspace = true }
toml = { workspace = true }
toml_edit = { workspace = true }
tracing = { workspace = true, features = ["log"] }
tree-sitter = { workspace = true }
tree-sitter-bash = { workspace = true }
uuid = { workspace = true, features = ["serde", "v4"] }
which = { workspace = true }
wildmatch = { workspace = true }
@@ -81,6 +83,7 @@ openssl-sys = { workspace = true, features = ["vendored"] }
[dev-dependencies]
assert_cmd = { workspace = true }
core_test_support = { workspace = true }
escargot = { workspace = true }
maplit = { workspace = true }
predicates = { workspace = true }
pretty_assertions = { workspace = true }

View File

@@ -1,38 +0,0 @@
pub use codex_agent::AgentConfig;
use crate::config::Config;
impl From<&Config> for AgentConfig {
fn from(config: &Config) -> Self {
Self {
model: config.model.clone(),
review_model: config.review_model.clone(),
model_family: config.model_family.clone(),
model_context_window: config.model_context_window,
model_auto_compact_token_limit: config.model_auto_compact_token_limit,
model_reasoning_effort: config.model_reasoning_effort,
model_reasoning_summary: config.model_reasoning_summary,
model_verbosity: config.model_verbosity,
model_provider: config.model_provider.clone(),
approval_policy: config.approval_policy,
sandbox_policy: config.sandbox_policy.clone(),
shell_environment_policy: config.shell_environment_policy.clone(),
user_instructions: config.user_instructions.clone(),
base_instructions: config.base_instructions.clone(),
notify: config.notify.clone(),
cwd: config.cwd.clone(),
codex_home: config.codex_home.clone(),
history: config.history.clone(),
mcp_servers: config.mcp_servers.clone(),
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_view_image_tool: config.include_view_image_tool,
tools_web_search_request: config.tools_web_search_request,
use_experimental_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
use_experimental_unified_exec_tool: config.use_experimental_unified_exec_tool,
show_raw_agent_reasoning: config.show_raw_agent_reasoning,
codex_linux_sandbox_exe: config.codex_linux_sandbox_exe.clone(),
project_doc_max_bytes: config.project_doc_max_bytes,
}
}
}

View File

@@ -1,148 +0,0 @@
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::Result;
use async_trait::async_trait;
use codex_agent::notifications::UserNotification;
use codex_agent::services::CredentialsProvider;
use codex_agent::services::McpInterface;
use codex_agent::services::Notifier;
use codex_agent::services::ProviderAuth;
use codex_agent::services::SandboxManager;
use codex_agent::token_data::PlanType;
use codex_protocol::mcp_protocol::AuthMode;
use mcp_types::CallToolResult;
use mcp_types::Tool;
use serde_json::Value;
use crate::auth::AuthManager;
use crate::auth::CodexAuth;
use crate::exec_command::ExecCommandOutput;
use crate::exec_command::ExecCommandParams;
use crate::exec_command::ExecSessionManager;
use crate::exec_command::WriteStdinParams;
use crate::mcp_connection_manager::McpConnectionManager;
use crate::unified_exec::UnifiedExecError;
use crate::unified_exec::UnifiedExecRequest;
use crate::unified_exec::UnifiedExecResult;
use crate::unified_exec::UnifiedExecSessionManager;
use crate::user_notification::UserNotifier;
#[async_trait]
impl ProviderAuth for CodexAuth {
fn mode(&self) -> AuthMode {
self.mode
}
async fn access_token(&self) -> std::io::Result<String> {
self.get_token().await
}
fn account_id(&self) -> Option<String> {
self.get_account_id()
}
fn plan_type(&self) -> Option<PlanType> {
self.get_plan_type()
}
}
#[async_trait]
impl CredentialsProvider for AuthManager {
fn auth(&self) -> Option<Arc<dyn ProviderAuth>> {
AuthManager::auth(self).map(|auth| Arc::new(auth) as Arc<dyn ProviderAuth>)
}
async fn refresh_token(&self) -> std::io::Result<Option<String>> {
AuthManager::refresh_token(self).await
}
}
impl Notifier for UserNotifier {
fn notify(&self, notification: &UserNotification) {
UserNotifier::notify(self, notification);
}
}
#[async_trait]
impl McpInterface for McpConnectionManager {
fn list_all_tools(&self) -> HashMap<String, Tool> {
McpConnectionManager::list_all_tools(self)
}
fn parse_tool_name(&self, tool_name: &str) -> Option<(String, String)> {
McpConnectionManager::parse_tool_name(self, tool_name)
}
async fn call_tool(
&self,
server: &str,
tool: &str,
arguments: Option<Value>,
) -> Result<CallToolResult> {
McpConnectionManager::call_tool(self, server, tool, arguments).await
}
}
/// Default [`SandboxManager`] used by the CLI runtime. Wraps the existing exec
/// session managers and exposes their functionality via the trait-based
/// interface so other hosts can substitute different implementations.
pub struct DefaultSandboxManager {
exec_session_manager: ExecSessionManager,
unified_exec_manager: UnifiedExecSessionManager,
codex_linux_sandbox_exe: Option<PathBuf>,
user_shell: crate::shell::Shell,
}
impl DefaultSandboxManager {
pub fn new(
exec_session_manager: ExecSessionManager,
unified_exec_manager: UnifiedExecSessionManager,
codex_linux_sandbox_exe: Option<PathBuf>,
user_shell: crate::shell::Shell,
) -> Self {
Self {
exec_session_manager,
unified_exec_manager,
codex_linux_sandbox_exe,
user_shell,
}
}
}
#[async_trait]
impl SandboxManager for DefaultSandboxManager {
async fn handle_exec_command_request(
&self,
params: ExecCommandParams,
) -> Result<ExecCommandOutput, String> {
self.exec_session_manager
.handle_exec_command_request(params)
.await
}
async fn handle_write_stdin_request(
&self,
params: WriteStdinParams,
) -> Result<ExecCommandOutput, String> {
self.exec_session_manager
.handle_write_stdin_request(params)
.await
}
async fn handle_unified_exec_request(
&self,
request: UnifiedExecRequest<'_>,
) -> Result<UnifiedExecResult, UnifiedExecError> {
self.unified_exec_manager.handle_request(request).await
}
fn codex_linux_sandbox_exe(&self) -> &Option<PathBuf> {
&self.codex_linux_sandbox_exe
}
fn user_shell(&self) -> &crate::shell::Shell {
&self.user_shell
}
}

View File

@@ -1,22 +1,18 @@
use std::collections::HashMap;
use std::path::Path;
use std::path::PathBuf;
use codex_apply_patch::ApplyPatchAction;
use codex_apply_patch::ApplyPatchFileChange;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::FileChange;
use codex_protocol::protocol::ReviewDecision;
use codex_protocol::protocol::SandboxPolicy;
use crate::codex::Session;
use crate::codex::TurnContext;
use crate::function_tool::FunctionCallError;
use crate::protocol::FileChange;
use crate::protocol::ReviewDecision;
use crate::safety::SafetyCheck;
use crate::safety::assess_patch_safety;
use crate::services::ApprovalCoordinator;
use codex_apply_patch::ApplyPatchAction;
use codex_apply_patch::ApplyPatchFileChange;
use std::collections::HashMap;
use std::path::PathBuf;
pub const CODEX_APPLY_PATCH_ARG1: &str = "--codex-run-as-apply-patch";
pub enum InternalApplyPatchInvocation {
pub(crate) enum InternalApplyPatchInvocation {
/// The `apply_patch` call was handled programmatically, without any sort
/// of sandbox, because the user explicitly approved it. This is the
/// result to use with the `shell` function call that contained `apply_patch`.
@@ -31,30 +27,23 @@ pub enum InternalApplyPatchInvocation {
DelegateToExec(ApplyPatchExec),
}
#[derive(Debug)]
pub struct ApplyPatchExec {
pub action: ApplyPatchAction,
pub user_explicitly_approved_this_action: bool,
pub(crate) struct ApplyPatchExec {
pub(crate) action: ApplyPatchAction,
pub(crate) user_explicitly_approved_this_action: bool,
}
pub struct ApplyPatchContext<'a> {
pub approval_policy: AskForApproval,
pub sandbox_policy: &'a SandboxPolicy,
pub cwd: &'a Path,
}
pub async fn apply_patch(
approvals: &dyn ApprovalCoordinator,
context: ApplyPatchContext<'_>,
pub(crate) async fn apply_patch(
sess: &Session,
turn_context: &TurnContext,
sub_id: &str,
call_id: &str,
action: ApplyPatchAction,
) -> InternalApplyPatchInvocation {
match assess_patch_safety(
&action,
context.approval_policy,
context.sandbox_policy,
context.cwd,
turn_context.approval_policy,
&turn_context.sandbox_policy,
&turn_context.cwd,
) {
SafetyCheck::AutoApprove { .. } => {
InternalApplyPatchInvocation::DelegateToExec(ApplyPatchExec {
@@ -63,11 +52,17 @@ pub async fn apply_patch(
})
}
SafetyCheck::AskUser => {
let approval = approvals
// Compute a readable summary of path changes to include in the
// approval request so the user can make an informed decision.
//
// Note that it might be worth expanding this approval request to
// give the user the option to expand the set of writable roots so
// that similar patches can be auto-approved in the future during
// this session.
let rx_approve = sess
.request_patch_approval(sub_id.to_owned(), call_id.to_owned(), &action, None, None)
.await;
match approval {
match rx_approve.await.unwrap_or_default() {
ReviewDecision::Approved | ReviewDecision::ApprovedForSession => {
InternalApplyPatchInvocation::DelegateToExec(ApplyPatchExec {
action,
@@ -87,7 +82,9 @@ pub async fn apply_patch(
}
}
pub fn convert_apply_patch_to_protocol(action: &ApplyPatchAction) -> HashMap<PathBuf, FileChange> {
pub(crate) fn convert_apply_patch_to_protocol(
action: &ApplyPatchAction,
) -> HashMap<PathBuf, FileChange> {
let changes = action.changes();
let mut result = HashMap::with_capacity(changes.len());
for (path, change) in changes {

View File

@@ -135,7 +135,7 @@ impl CodexAuth {
self.get_current_token_data().and_then(|t| t.account_id)
}
pub fn get_plan_type(&self) -> Option<PlanType> {
pub(crate) fn get_plan_type(&self) -> Option<PlanType> {
self.get_current_token_data()
.and_then(|t| t.id_token.chatgpt_plan_type)
}

View File

@@ -22,7 +22,6 @@ use crate::client_common::ResponseStream;
use crate::error::CodexErr;
use crate::error::Result;
use crate::model_family::ModelFamily;
use crate::model_provider_info::ModelProviderExt;
use crate::openai_tools::create_tools_json_for_chat_completions_api;
use crate::util::backoff;
use codex_protocol::models::ContentItem;

View File

@@ -1,10 +1,10 @@
use std::fmt;
use std::io::BufRead;
use std::path::Path;
use std::sync::OnceLock;
use std::time::Duration;
use crate::CredentialsProvider;
use crate::AuthManager;
use crate::auth::CodexAuth;
use bytes::Bytes;
use codex_protocol::mcp_protocol::AuthMode;
use codex_protocol::mcp_protocol::ConversationId;
@@ -23,9 +23,6 @@ use tracing::debug;
use tracing::trace;
use tracing::warn;
use crate::ModelProviderInfo;
use crate::WireApi;
use crate::agent_config::AgentConfig;
use crate::chat_completions::AggregateStreamExt;
use crate::chat_completions::stream_chat_completions;
use crate::client_common::Prompt;
@@ -34,13 +31,15 @@ use crate::client_common::ResponseStream;
use crate::client_common::ResponsesApiRequest;
use crate::client_common::create_reasoning_param_for_request;
use crate::client_common::create_text_param_for_request;
use crate::config::Config;
use crate::default_client::create_client;
use crate::error::CodexErr;
use crate::error::Result;
use crate::error::UsageLimitReachedError;
use crate::flags::CODEX_RS_SSE_FIXTURE;
use crate::model_family::ModelFamily;
use crate::model_provider_info::ModelProviderExt;
use crate::model_provider_info::ModelProviderInfo;
use crate::model_provider_info::WireApi;
use crate::openai_model_info::get_model_info;
use crate::openai_tools::create_tools_json_for_responses_api;
use crate::protocol::RateLimitSnapshot;
@@ -70,10 +69,10 @@ struct Error {
resets_in_seconds: Option<u64>,
}
#[derive(Clone)]
#[derive(Debug, Clone)]
pub struct ModelClient {
config: Arc<AgentConfig>,
auth_manager: Option<Arc<dyn CredentialsProvider>>,
config: Arc<Config>,
auth_manager: Option<Arc<AuthManager>>,
client: reqwest::Client,
provider: ModelProviderInfo,
conversation_id: ConversationId,
@@ -81,22 +80,10 @@ pub struct ModelClient {
summary: ReasoningSummaryConfig,
}
impl fmt::Debug for ModelClient {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("ModelClient")
.field("config", &self.config)
.field("provider", &self.provider)
.field("conversation_id", &self.conversation_id)
.field("effort", &self.effort)
.field("summary", &self.summary)
.finish()
}
}
impl ModelClient {
pub fn new(
config: Arc<AgentConfig>,
auth_manager: Option<Arc<dyn CredentialsProvider>>,
config: Arc<Config>,
auth_manager: Option<Arc<AuthManager>>,
provider: ModelProviderInfo,
effort: Option<ReasoningEffortConfig>,
summary: ReasoningSummaryConfig,
@@ -272,7 +259,7 @@ impl ModelClient {
async fn attempt_stream_responses(
&self,
payload_json: &Value,
auth_manager: &Option<Arc<dyn CredentialsProvider>>,
auth_manager: &Option<Arc<AuthManager>>,
) -> std::result::Result<ResponseStream, StreamAttemptError> {
// Always fetch the latest auth in case a prior attempt refreshed the token.
let auth = auth_manager.as_ref().and_then(|m| m.auth());
@@ -298,8 +285,8 @@ impl ModelClient {
.json(payload_json);
if let Some(auth) = auth.as_ref()
&& auth.mode() == AuthMode::ChatGPT
&& let Some(account_id) = auth.account_id()
&& auth.mode == AuthMode::ChatGPT
&& let Some(account_id) = auth.get_account_id()
{
req_builder = req_builder.header("chatgpt-account-id", account_id);
}
@@ -385,7 +372,7 @@ impl ModelClient {
// token.
let plan_type = error
.plan_type
.or_else(|| auth.as_ref().and_then(|a| a.plan_type()));
.or_else(|| auth.as_ref().and_then(CodexAuth::get_plan_type));
let resets_in_seconds = error.resets_in_seconds;
let codex_err = CodexErr::UsageLimitReached(UsageLimitReachedError {
plan_type,
@@ -432,7 +419,7 @@ impl ModelClient {
self.summary
}
pub fn get_auth_manager(&self) -> Option<Arc<dyn CredentialsProvider>> {
pub fn get_auth_manager(&self) -> Option<Arc<AuthManager>> {
self.auth_manager.clone()
}
}
@@ -572,10 +559,6 @@ fn parse_rate_limit_snapshot(headers: &HeaderMap) -> Option<RateLimitSnapshot> {
"x-codex-secondary-reset-after-seconds",
);
if primary.is_none() && secondary.is_none() {
return None;
}
Some(RateLimitSnapshot { primary, secondary })
}

File diff suppressed because it is too large

View File

@@ -7,7 +7,6 @@ use crate::Prompt;
use crate::client_common::ResponseEvent;
use crate::error::CodexErr;
use crate::error::Result as CodexResult;
use crate::model_provider_info::ModelProviderExt;
use crate::protocol::AgentMessageEvent;
use crate::protocol::CompactedItem;
use crate::protocol::ErrorEvent;
@@ -94,7 +93,8 @@ async fn run_compact_task_inner(
sess.persist_rollout_items(&[rollout_item]).await;
loop {
let attempt_result = drain_to_completed(&sess, turn_context.as_ref(), &prompt).await;
let attempt_result =
drain_to_completed(&sess, turn_context.as_ref(), &sub_id, &prompt).await;
match attempt_result {
Ok(()) => {
@@ -230,6 +230,7 @@ pub(crate) fn build_compacted_history(
async fn drain_to_completed(
sess: &Session,
turn_context: &TurnContext,
sub_id: &str,
prompt: &Prompt,
) -> CodexResult<()> {
let mut stream = turn_context.client.clone().stream(prompt).await?;
@@ -245,7 +246,12 @@ async fn drain_to_completed(
Ok(ResponseEvent::OutputItemDone(item)) => {
sess.record_into_history(std::slice::from_ref(&item)).await;
}
Ok(ResponseEvent::Completed { .. }) => {
Ok(ResponseEvent::RateLimits(snapshot)) => {
sess.update_rate_limits(sub_id, snapshot).await;
}
Ok(ResponseEvent::Completed { token_usage, .. }) => {
sess.update_token_usage_info(sub_id, turn_context, token_usage.as_ref())
.await;
return Ok(());
}
Ok(_) => continue,

View File

@@ -1,7 +1,7 @@
use crate::ModelProviderInfo;
use crate::config_profile::ConfigProfile;
use crate::config_types::History;
use crate::config_types::McpServerConfig;
use crate::config_types::McpServerTransportConfig;
use crate::config_types::Notifications;
use crate::config_types::ReasoningSummaryFormat;
use crate::config_types::SandboxWorkspaceWrite;
@@ -13,6 +13,7 @@ use crate::git_info::resolve_root_git_project_for_trust;
use crate::model_family::ModelFamily;
use crate::model_family::derive_default_model_family;
use crate::model_family::find_family_for_model;
use crate::model_provider_info::ModelProviderInfo;
use crate::model_provider_info::built_in_model_providers;
use crate::openai_model_info::get_model_info;
use crate::protocol::AskForApproval;
@@ -184,8 +185,14 @@ pub struct Config {
/// If set to `true`, use only the experimental unified exec tool.
pub use_experimental_unified_exec_tool: bool,
/// If set to `true`, use the experimental official Rust MCP client.
/// https://github.com/modelcontextprotocol/rust-sdk
pub use_experimental_use_rmcp_client: bool,
/// Include the `view_image` tool that lets the agent attach a local image path to context.
pub include_view_image_tool: bool,
/// When true, sessions run inside a linked git worktree under `cwd/codex/<conversation>`.
pub enable_git_worktree: bool,
/// The active profile name used to derive this `Config` (if any).
pub active_profile: Option<String>,
@@ -310,27 +317,37 @@ pub fn write_global_mcp_servers(
for (name, config) in servers {
let mut entry = TomlTable::new();
entry.set_implicit(false);
entry["command"] = toml_edit::value(config.command.clone());
match &config.transport {
McpServerTransportConfig::Stdio { command, args, env } => {
entry["command"] = toml_edit::value(command.clone());
if !config.args.is_empty() {
let mut args = TomlArray::new();
for arg in &config.args {
args.push(arg.clone());
}
entry["args"] = TomlItem::Value(args.into());
}
if !args.is_empty() {
let mut args_array = TomlArray::new();
for arg in args {
args_array.push(arg.clone());
}
entry["args"] = TomlItem::Value(args_array.into());
}
if let Some(env) = &config.env
&& !env.is_empty()
{
let mut env_table = TomlTable::new();
env_table.set_implicit(false);
let mut pairs: Vec<_> = env.iter().collect();
pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
for (key, value) in pairs {
env_table.insert(key, toml_edit::value(value.clone()));
if let Some(env) = env
&& !env.is_empty()
{
let mut env_table = TomlTable::new();
env_table.set_implicit(false);
let mut pairs: Vec<_> = env.iter().collect();
pairs.sort_by(|(a, _), (b, _)| a.cmp(b));
for (key, value) in pairs {
env_table.insert(key, toml_edit::value(value.clone()));
}
entry["env"] = TomlItem::Table(env_table);
}
}
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
entry["url"] = toml_edit::value(url.clone());
if let Some(token) = bearer_token {
entry["bearer_token"] = toml_edit::value(token.clone());
}
}
entry["env"] = TomlItem::Table(env_table);
}
if let Some(timeout) = config.startup_timeout_sec {
@@ -674,6 +691,9 @@ pub struct ConfigToml {
/// Defaults to `false`.
pub show_raw_agent_reasoning: Option<bool>,
/// Enable per-session git worktree checkouts.
pub enable_git_worktree: Option<bool>,
pub model_reasoning_effort: Option<ReasoningEffort>,
pub model_reasoning_summary: Option<ReasoningSummary>,
/// Optional verbosity control for GPT-5 models (Responses API `text.verbosity`).
@@ -693,6 +713,7 @@ pub struct ConfigToml {
pub experimental_use_exec_command_tool: Option<bool>,
pub experimental_use_unified_exec_tool: Option<bool>,
pub experimental_use_rmcp_client: Option<bool>,
pub projects: Option<HashMap<String, ProjectConfig>>,
@@ -843,6 +864,7 @@ pub struct ConfigOverrides {
pub include_view_image_tool: Option<bool>,
pub show_raw_agent_reasoning: Option<bool>,
pub tools_web_search_request: Option<bool>,
pub enable_git_worktree: Option<bool>,
}
impl Config {
@@ -871,6 +893,7 @@ impl Config {
include_view_image_tool,
show_raw_agent_reasoning,
tools_web_search_request: override_tools_web_search_request,
enable_git_worktree: override_enable_git_worktree,
} = overrides;
let active_profile_name = config_profile_key
@@ -944,6 +967,10 @@ impl Config {
.or(cfg.tools.as_ref().and_then(|t| t.view_image))
.unwrap_or(true);
let enable_git_worktree = override_enable_git_worktree
.or(cfg.enable_git_worktree)
.unwrap_or(false);
let model = model
.or(config_profile.model)
.or(cfg.model)
@@ -1043,7 +1070,9 @@ impl Config {
use_experimental_unified_exec_tool: cfg
.experimental_use_unified_exec_tool
.unwrap_or(false),
use_experimental_use_rmcp_client: cfg.experimental_use_rmcp_client.unwrap_or(false),
include_view_image_tool,
enable_git_worktree,
active_profile: active_profile_name,
disable_paste_burst: cfg.disable_paste_burst.unwrap_or(false),
tui_notifications: cfg
@@ -1288,9 +1317,11 @@ exclude_slash_tmp = true
servers.insert(
"docs".to_string(),
McpServerConfig {
command: "echo".to_string(),
args: vec!["hello".to_string()],
env: None,
transport: McpServerTransportConfig::Stdio {
command: "echo".to_string(),
args: vec!["hello".to_string()],
env: None,
},
startup_timeout_sec: Some(Duration::from_secs(3)),
tool_timeout_sec: Some(Duration::from_secs(5)),
},
@@ -1301,8 +1332,14 @@ exclude_slash_tmp = true
let loaded = load_global_mcp_servers(codex_home.path())?;
assert_eq!(loaded.len(), 1);
let docs = loaded.get("docs").expect("docs entry");
assert_eq!(docs.command, "echo");
assert_eq!(docs.args, vec!["hello".to_string()]);
match &docs.transport {
McpServerTransportConfig::Stdio { command, args, env } => {
assert_eq!(command, "echo");
assert_eq!(args, &vec!["hello".to_string()]);
assert!(env.is_none());
}
other => panic!("unexpected transport {other:?}"),
}
assert_eq!(docs.startup_timeout_sec, Some(Duration::from_secs(3)));
assert_eq!(docs.tool_timeout_sec, Some(Duration::from_secs(5)));
@@ -1336,6 +1373,134 @@ startup_timeout_ms = 2500
Ok(())
}
#[test]
fn write_global_mcp_servers_serializes_env_sorted() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let servers = BTreeMap::from([(
"docs".to_string(),
McpServerConfig {
transport: McpServerTransportConfig::Stdio {
command: "docs-server".to_string(),
args: vec!["--verbose".to_string()],
env: Some(HashMap::from([
("ZIG_VAR".to_string(), "3".to_string()),
("ALPHA_VAR".to_string(), "1".to_string()),
])),
},
startup_timeout_sec: None,
tool_timeout_sec: None,
},
)]);
write_global_mcp_servers(codex_home.path(), &servers)?;
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
let serialized = std::fs::read_to_string(&config_path)?;
assert_eq!(
serialized,
r#"[mcp_servers.docs]
command = "docs-server"
args = ["--verbose"]
[mcp_servers.docs.env]
ALPHA_VAR = "1"
ZIG_VAR = "3"
"#
);
let loaded = load_global_mcp_servers(codex_home.path())?;
let docs = loaded.get("docs").expect("docs entry");
match &docs.transport {
McpServerTransportConfig::Stdio { command, args, env } => {
assert_eq!(command, "docs-server");
assert_eq!(args, &vec!["--verbose".to_string()]);
let env = env
.as_ref()
.expect("env should be preserved for stdio transport");
assert_eq!(env.get("ALPHA_VAR"), Some(&"1".to_string()));
assert_eq!(env.get("ZIG_VAR"), Some(&"3".to_string()));
}
other => panic!("unexpected transport {other:?}"),
}
Ok(())
}
#[test]
fn write_global_mcp_servers_serializes_streamable_http() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
let mut servers = BTreeMap::from([(
"docs".to_string(),
McpServerConfig {
transport: McpServerTransportConfig::StreamableHttp {
url: "https://example.com/mcp".to_string(),
bearer_token: Some("secret-token".to_string()),
},
startup_timeout_sec: Some(Duration::from_secs(2)),
tool_timeout_sec: None,
},
)]);
write_global_mcp_servers(codex_home.path(), &servers)?;
let config_path = codex_home.path().join(CONFIG_TOML_FILE);
let serialized = std::fs::read_to_string(&config_path)?;
assert_eq!(
serialized,
r#"[mcp_servers.docs]
url = "https://example.com/mcp"
bearer_token = "secret-token"
startup_timeout_sec = 2.0
"#
);
let loaded = load_global_mcp_servers(codex_home.path())?;
let docs = loaded.get("docs").expect("docs entry");
match &docs.transport {
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
assert_eq!(url, "https://example.com/mcp");
assert_eq!(bearer_token.as_deref(), Some("secret-token"));
}
other => panic!("unexpected transport {other:?}"),
}
assert_eq!(docs.startup_timeout_sec, Some(Duration::from_secs(2)));
servers.insert(
"docs".to_string(),
McpServerConfig {
transport: McpServerTransportConfig::StreamableHttp {
url: "https://example.com/mcp".to_string(),
bearer_token: None,
},
startup_timeout_sec: None,
tool_timeout_sec: None,
},
);
write_global_mcp_servers(codex_home.path(), &servers)?;
let serialized = std::fs::read_to_string(&config_path)?;
assert_eq!(
serialized,
r#"[mcp_servers.docs]
url = "https://example.com/mcp"
"#
);
let loaded = load_global_mcp_servers(codex_home.path())?;
let docs = loaded.get("docs").expect("docs entry");
match &docs.transport {
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
assert_eq!(url, "https://example.com/mcp");
assert!(bearer_token.is_none());
}
other => panic!("unexpected transport {other:?}"),
}
Ok(())
}
#[tokio::test]
async fn persist_model_selection_updates_defaults() -> anyhow::Result<()> {
let codex_home = TempDir::new()?;
@@ -1651,7 +1816,9 @@ model_verbosity = "high"
tools_web_search_request: false,
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
enable_git_worktree: false,
active_profile: Some("o3".to_string()),
disable_paste_burst: false,
tui_notifications: Default::default(),
@@ -1709,7 +1876,9 @@ model_verbosity = "high"
tools_web_search_request: false,
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
enable_git_worktree: false,
active_profile: Some("gpt3".to_string()),
disable_paste_burst: false,
tui_notifications: Default::default(),
@@ -1782,7 +1951,9 @@ model_verbosity = "high"
tools_web_search_request: false,
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
enable_git_worktree: false,
active_profile: Some("zdr".to_string()),
disable_paste_burst: false,
tui_notifications: Default::default(),
@@ -1841,7 +2012,9 @@ model_verbosity = "high"
tools_web_search_request: false,
use_experimental_streamable_shell_tool: false,
use_experimental_unified_exec_tool: false,
use_experimental_use_rmcp_client: false,
include_view_image_tool: true,
enable_git_worktree: false,
active_profile: Some("gpt5".to_string()),
disable_paste_burst: false,
tui_notifications: Default::default(),

View File

@@ -1,3 +1,505 @@
//! Re-exported configuration data structures now defined in `codex-agent`.
//! Types used to define the fields of [`crate::config::Config`].
pub use codex_agent::config_types::*;
// Note this file should generally be restricted to simple struct/enum
// definitions that do not contain business logic.
use serde::Deserializer;
use std::collections::HashMap;
use std::path::PathBuf;
use std::time::Duration;
use wildmatch::WildMatchPattern;
use serde::Deserialize;
use serde::Serialize;
use serde::de::Error as SerdeError;
#[derive(Serialize, Debug, Clone, PartialEq)]
pub struct McpServerConfig {
#[serde(flatten)]
pub transport: McpServerTransportConfig,
/// Startup timeout in seconds for initializing MCP server & initially listing tools.
#[serde(
default,
with = "option_duration_secs",
skip_serializing_if = "Option::is_none"
)]
pub startup_timeout_sec: Option<Duration>,
/// Default timeout for MCP tool calls initiated via this server.
#[serde(default, with = "option_duration_secs")]
pub tool_timeout_sec: Option<Duration>,
}
impl<'de> Deserialize<'de> for McpServerConfig {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
#[derive(Deserialize)]
struct RawMcpServerConfig {
command: Option<String>,
#[serde(default)]
args: Option<Vec<String>>,
#[serde(default)]
env: Option<HashMap<String, String>>,
url: Option<String>,
bearer_token: Option<String>,
#[serde(default)]
startup_timeout_sec: Option<f64>,
#[serde(default)]
startup_timeout_ms: Option<u64>,
#[serde(default, with = "option_duration_secs")]
tool_timeout_sec: Option<Duration>,
}
let raw = RawMcpServerConfig::deserialize(deserializer)?;
let startup_timeout_sec = match (raw.startup_timeout_sec, raw.startup_timeout_ms) {
(Some(sec), _) => {
let duration = Duration::try_from_secs_f64(sec).map_err(SerdeError::custom)?;
Some(duration)
}
(None, Some(ms)) => Some(Duration::from_millis(ms)),
(None, None) => None,
};
fn throw_if_set<E, T>(transport: &str, field: &str, value: Option<&T>) -> Result<(), E>
where
E: SerdeError,
{
if value.is_none() {
return Ok(());
}
Err(E::custom(format!(
"{field} is not supported for {transport}",
)))
}
let transport = match raw {
RawMcpServerConfig {
command: Some(command),
args,
env,
url,
bearer_token,
..
} => {
throw_if_set("stdio", "url", url.as_ref())?;
throw_if_set("stdio", "bearer_token", bearer_token.as_ref())?;
McpServerTransportConfig::Stdio {
command,
args: args.unwrap_or_default(),
env,
}
}
RawMcpServerConfig {
url: Some(url),
bearer_token,
command,
args,
env,
..
} => {
throw_if_set("streamable_http", "command", command.as_ref())?;
throw_if_set("streamable_http", "args", args.as_ref())?;
throw_if_set("streamable_http", "env", env.as_ref())?;
McpServerTransportConfig::StreamableHttp { url, bearer_token }
}
_ => return Err(SerdeError::custom("invalid transport")),
};
Ok(Self {
transport,
startup_timeout_sec,
tool_timeout_sec: raw.tool_timeout_sec,
})
}
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]
#[serde(untagged, deny_unknown_fields, rename_all = "snake_case")]
pub enum McpServerTransportConfig {
/// https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#stdio
Stdio {
command: String,
#[serde(default)]
args: Vec<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
env: Option<HashMap<String, String>>,
},
/// https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#streamable-http
StreamableHttp {
url: String,
/// A plain text bearer token to use for authentication.
/// This bearer token will be included in the HTTP request header as an `Authorization: Bearer <token>` header.
/// This should be used with caution because it lives on disk in clear text.
#[serde(default, skip_serializing_if = "Option::is_none")]
bearer_token: Option<String>,
},
}
mod option_duration_secs {
use serde::Deserialize;
use serde::Deserializer;
use serde::Serializer;
use std::time::Duration;
pub fn serialize<S>(value: &Option<Duration>, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match value {
Some(duration) => serializer.serialize_some(&duration.as_secs_f64()),
None => serializer.serialize_none(),
}
}
pub fn deserialize<'de, D>(deserializer: D) -> Result<Option<Duration>, D::Error>
where
D: Deserializer<'de>,
{
let secs = Option::<f64>::deserialize(deserializer)?;
secs.map(|secs| Duration::try_from_secs_f64(secs).map_err(serde::de::Error::custom))
.transpose()
}
}
#[derive(Deserialize, Debug, Copy, Clone, PartialEq)]
pub enum UriBasedFileOpener {
#[serde(rename = "vscode")]
VsCode,
#[serde(rename = "vscode-insiders")]
VsCodeInsiders,
#[serde(rename = "windsurf")]
Windsurf,
#[serde(rename = "cursor")]
Cursor,
/// Option to disable the URI-based file opener.
#[serde(rename = "none")]
None,
}
impl UriBasedFileOpener {
pub fn get_scheme(&self) -> Option<&str> {
match self {
UriBasedFileOpener::VsCode => Some("vscode"),
UriBasedFileOpener::VsCodeInsiders => Some("vscode-insiders"),
UriBasedFileOpener::Windsurf => Some("windsurf"),
UriBasedFileOpener::Cursor => Some("cursor"),
UriBasedFileOpener::None => None,
}
}
}
/// Settings that govern if and what will be written to `~/.codex/history.jsonl`.
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
pub struct History {
/// If true, history entries will not be written to disk.
pub persistence: HistoryPersistence,
/// If set, the maximum size of the history file in bytes.
/// TODO(mbolin): Not currently honored.
pub max_bytes: Option<usize>,
}
#[derive(Deserialize, Debug, Copy, Clone, PartialEq, Default)]
#[serde(rename_all = "kebab-case")]
pub enum HistoryPersistence {
/// Save all history entries to disk.
#[default]
SaveAll,
/// Do not write history to disk.
None,
}
#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
#[serde(untagged)]
pub enum Notifications {
Enabled(bool),
Custom(Vec<String>),
}
impl Default for Notifications {
fn default() -> Self {
Self::Enabled(false)
}
}
/// Collection of settings that are specific to the TUI.
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
pub struct Tui {
/// Enable desktop notifications from the TUI when the terminal is unfocused.
/// Defaults to `false`.
#[serde(default)]
pub notifications: Notifications,
}
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
pub struct SandboxWorkspaceWrite {
#[serde(default)]
pub writable_roots: Vec<PathBuf>,
#[serde(default)]
pub network_access: bool,
#[serde(default)]
pub exclude_tmpdir_env_var: bool,
#[serde(default)]
pub exclude_slash_tmp: bool,
}
impl From<SandboxWorkspaceWrite> for codex_protocol::mcp_protocol::SandboxSettings {
fn from(sandbox_workspace_write: SandboxWorkspaceWrite) -> Self {
Self {
writable_roots: sandbox_workspace_write.writable_roots,
network_access: Some(sandbox_workspace_write.network_access),
exclude_tmpdir_env_var: Some(sandbox_workspace_write.exclude_tmpdir_env_var),
exclude_slash_tmp: Some(sandbox_workspace_write.exclude_slash_tmp),
}
}
}
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
#[serde(rename_all = "kebab-case")]
pub enum ShellEnvironmentPolicyInherit {
/// "Core" environment variables for the platform. On UNIX, this would
/// include HOME, LOGNAME, PATH, SHELL, and USER, among others.
Core,
/// Inherits the full environment from the parent process.
#[default]
All,
/// Do not inherit any environment variables from the parent process.
None,
}
/// Policy for building the `env` when spawning a process via either the
/// `shell` or `local_shell` tool.
#[derive(Deserialize, Debug, Clone, PartialEq, Default)]
pub struct ShellEnvironmentPolicyToml {
pub inherit: Option<ShellEnvironmentPolicyInherit>,
pub ignore_default_excludes: Option<bool>,
/// List of regular expressions.
pub exclude: Option<Vec<String>>,
pub r#set: Option<HashMap<String, String>>,
/// List of regular expressions.
pub include_only: Option<Vec<String>>,
pub experimental_use_profile: Option<bool>,
}
pub type EnvironmentVariablePattern = WildMatchPattern<'*', '?'>;
/// Deriving the `env` based on this policy works as follows:
/// 1. Create an initial map based on the `inherit` policy.
/// 2. If `ignore_default_excludes` is false, filter the map using the default
/// exclude pattern(s), which are: `"*KEY*"` and `"*TOKEN*"`.
/// 3. If `exclude` is not empty, filter the map using the provided patterns.
/// 4. Insert any entries from `r#set` into the map.
/// 5. If non-empty, filter the map using the `include_only` patterns.
#[derive(Debug, Clone, PartialEq, Default)]
pub struct ShellEnvironmentPolicy {
/// Starting point when building the environment.
pub inherit: ShellEnvironmentPolicyInherit,
/// True to skip the default exclusion of environment variables whose
/// names contain "KEY" or "TOKEN".
pub ignore_default_excludes: bool,
/// Environment variable names to exclude from the environment.
pub exclude: Vec<EnvironmentVariablePattern>,
/// (key, value) pairs to insert in the environment.
pub r#set: HashMap<String, String>,
/// Environment variable names to retain in the environment.
pub include_only: Vec<EnvironmentVariablePattern>,
/// If true, the shell profile will be used to run the command.
pub use_profile: bool,
}
impl From<ShellEnvironmentPolicyToml> for ShellEnvironmentPolicy {
fn from(toml: ShellEnvironmentPolicyToml) -> Self {
// Default to inheriting the full environment when not specified.
let inherit = toml.inherit.unwrap_or(ShellEnvironmentPolicyInherit::All);
let ignore_default_excludes = toml.ignore_default_excludes.unwrap_or(false);
let exclude = toml
.exclude
.unwrap_or_default()
.into_iter()
.map(|s| EnvironmentVariablePattern::new_case_insensitive(&s))
.collect();
let r#set = toml.r#set.unwrap_or_default();
let include_only = toml
.include_only
.unwrap_or_default()
.into_iter()
.map(|s| EnvironmentVariablePattern::new_case_insensitive(&s))
.collect();
let use_profile = toml.experimental_use_profile.unwrap_or(false);
Self {
inherit,
ignore_default_excludes,
exclude,
r#set,
include_only,
use_profile,
}
}
}
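// Illustrative sketch (not part of the diff): converting a deserialized TOML
// policy into its effective form via the `From` impl above.
#[cfg(test)]
mod shell_policy_example {
    use super::*;

    #[test]
    fn derives_effective_policy_from_toml() {
        let toml_policy: ShellEnvironmentPolicyToml = toml::from_str(
            r#"
            inherit = "core"
            exclude = ["AWS_*"]
            include_only = ["PATH", "HOME"]
            "#,
        )
        .expect("should deserialize policy");
        let policy = ShellEnvironmentPolicy::from(toml_policy);
        assert_eq!(policy.inherit, ShellEnvironmentPolicyInherit::Core);
        assert_eq!(policy.exclude.len(), 1);
        assert_eq!(policy.include_only.len(), 2);
        assert!(!policy.ignore_default_excludes);
        assert!(!policy.use_profile);
    }
}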
#[derive(Deserialize, Debug, Clone, PartialEq, Eq, Default, Hash)]
#[serde(rename_all = "kebab-case")]
pub enum ReasoningSummaryFormat {
#[default]
None,
Experimental,
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
#[test]
fn deserialize_stdio_command_server_config() {
let cfg: McpServerConfig = toml::from_str(
r#"
command = "echo"
"#,
)
.expect("should deserialize command config");
assert_eq!(
cfg.transport,
McpServerTransportConfig::Stdio {
command: "echo".to_string(),
args: vec![],
env: None
}
);
}
#[test]
fn deserialize_stdio_command_server_config_with_args() {
let cfg: McpServerConfig = toml::from_str(
r#"
command = "echo"
args = ["hello", "world"]
"#,
)
.expect("should deserialize command config");
assert_eq!(
cfg.transport,
McpServerTransportConfig::Stdio {
command: "echo".to_string(),
args: vec!["hello".to_string(), "world".to_string()],
env: None
}
);
}
#[test]
fn deserialize_stdio_command_server_config_with_args_and_env() {
let cfg: McpServerConfig = toml::from_str(
r#"
command = "echo"
args = ["hello", "world"]
env = { "FOO" = "BAR" }
"#,
)
.expect("should deserialize command config");
assert_eq!(
cfg.transport,
McpServerTransportConfig::Stdio {
command: "echo".to_string(),
args: vec!["hello".to_string(), "world".to_string()],
env: Some(HashMap::from([("FOO".to_string(), "BAR".to_string())]))
}
);
}
#[test]
fn deserialize_streamable_http_server_config() {
let cfg: McpServerConfig = toml::from_str(
r#"
url = "https://example.com/mcp"
"#,
)
.expect("should deserialize http config");
assert_eq!(
cfg.transport,
McpServerTransportConfig::StreamableHttp {
url: "https://example.com/mcp".to_string(),
bearer_token: None
}
);
}
#[test]
fn deserialize_streamable_http_server_config_with_bearer_token() {
let cfg: McpServerConfig = toml::from_str(
r#"
url = "https://example.com/mcp"
bearer_token = "secret"
"#,
)
.expect("should deserialize http config");
assert_eq!(
cfg.transport,
McpServerTransportConfig::StreamableHttp {
url: "https://example.com/mcp".to_string(),
bearer_token: Some("secret".to_string())
}
);
}
#[test]
fn deserialize_rejects_command_and_url() {
toml::from_str::<McpServerConfig>(
r#"
command = "echo"
url = "https://example.com"
"#,
)
.expect_err("should reject command+url");
}
#[test]
fn deserialize_rejects_env_for_http_transport() {
toml::from_str::<McpServerConfig>(
r#"
url = "https://example.com"
env = { "FOO" = "BAR" }
"#,
)
.expect_err("should reject env for http transport");
}
#[test]
fn deserialize_rejects_bearer_token_for_stdio_transport() {
toml::from_str::<McpServerConfig>(
r#"
command = "echo"
bearer_token = "secret"
"#,
)
.expect_err("should reject bearer token for stdio transport");
}
}

View File

@@ -1 +1,120 @@
pub use codex_agent::ConversationHistory;
use codex_protocol::models::ResponseItem;
/// Transcript of conversation history
#[derive(Debug, Clone, Default)]
pub(crate) struct ConversationHistory {
/// The oldest items are at the beginning of the vector.
items: Vec<ResponseItem>,
}
impl ConversationHistory {
pub(crate) fn new() -> Self {
Self { items: Vec::new() }
}
/// Returns a clone of the contents in the transcript.
pub(crate) fn contents(&self) -> Vec<ResponseItem> {
self.items.clone()
}
/// `items` is ordered from oldest to newest.
pub(crate) fn record_items<I>(&mut self, items: I)
where
I: IntoIterator,
I::Item: std::ops::Deref<Target = ResponseItem>,
{
for item in items {
if !is_api_message(&item) {
continue;
}
self.items.push(item.clone());
}
}
pub(crate) fn replace(&mut self, items: Vec<ResponseItem>) {
self.items = items;
}
}
/// Anything that is not a system message (or the catch-all `Other` variant)
/// is considered an API message.
fn is_api_message(message: &ResponseItem) -> bool {
match message {
ResponseItem::Message { role, .. } => role.as_str() != "system",
ResponseItem::FunctionCallOutput { .. }
| ResponseItem::FunctionCall { .. }
| ResponseItem::CustomToolCall { .. }
| ResponseItem::CustomToolCallOutput { .. }
| ResponseItem::LocalShellCall { .. }
| ResponseItem::Reasoning { .. }
| ResponseItem::WebSearchCall { .. } => true,
ResponseItem::Other => false,
}
}
#[cfg(test)]
mod tests {
use super::*;
use codex_protocol::models::ContentItem;
fn assistant_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
}
}
fn user_msg(text: &str) -> ResponseItem {
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::OutputText {
text: text.to_string(),
}],
}
}
#[test]
fn filters_non_api_messages() {
let mut h = ConversationHistory::default();
// System message is not an API message; Other is ignored.
let system = ResponseItem::Message {
id: None,
role: "system".to_string(),
content: vec![ContentItem::OutputText {
text: "ignored".to_string(),
}],
};
h.record_items([&system, &ResponseItem::Other]);
// User and assistant should be retained.
let u = user_msg("hi");
let a = assistant_msg("hello");
h.record_items([&u, &a]);
let items = h.contents();
assert_eq!(
items,
vec![
ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::OutputText {
text: "hi".to_string()
}]
},
ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "hello".to_string()
}]
}
]
);
}
}

View File

@@ -308,16 +308,19 @@ mod tests {
Some(PathBuf::from("/repo")),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo"], false)),
Some(Shell::Bash(BashShell::new(
"/bin/bash",
"/home/user/.bashrc",
))),
Some(Shell::Bash(BashShell {
shell_path: "/bin/bash".into(),
bashrc_path: "/home/user/.bashrc".into(),
})),
);
let context2 = EnvironmentContext::new(
Some(PathBuf::from("/repo")),
Some(AskForApproval::OnRequest),
Some(workspace_write_policy(vec!["/repo"], false)),
Some(Shell::Zsh(ZshShell::new("/bin/zsh", "/home/user/.zshrc"))),
Some(Shell::Zsh(ZshShell {
shell_path: "/bin/zsh".into(),
zshrc_path: "/home/user/.zshrc".into(),
})),
);
assert!(context1.equals_except_shell(&context2));

View File

@@ -27,7 +27,6 @@ use crate::protocol::SandboxPolicy;
use crate::seatbelt::spawn_command_under_seatbelt;
use crate::spawn::StdioPolicy;
use crate::spawn::spawn_child_async;
pub use codex_agent::sandbox::SandboxType;
const DEFAULT_TIMEOUT_MS: u64 = 10_000;
@@ -62,6 +61,17 @@ impl ExecParams {
}
}
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum SandboxType {
None,
/// Only available on macOS.
MacosSeatbelt,
/// Only available on Linux.
LinuxSeccomp,
}
#[derive(Clone)]
pub struct StdoutStream {
pub sub_id: String,

View File

@@ -5,8 +5,7 @@ use tokio::sync::mpsc;
use tokio::task::JoinHandle;
#[derive(Debug)]
#[allow(dead_code)]
pub struct ExecCommandSession {
pub(crate) struct ExecCommandSession {
/// Queue for writing bytes to the process stdin (PTY master write side).
writer_tx: mpsc::Sender<Vec<u8>>,
/// Broadcast stream of output chunks read from the PTY. New subscribers
@@ -30,9 +29,8 @@ pub struct ExecCommandSession {
exit_status: std::sync::Arc<std::sync::atomic::AtomicBool>,
}
#[allow(dead_code)]
impl ExecCommandSession {
pub fn new(
pub(crate) fn new(
writer_tx: mpsc::Sender<Vec<u8>>,
output_tx: broadcast::Sender<Vec<u8>>,
killer: Box<dyn portable_pty::ChildKiller + Send + Sync>,
@@ -56,7 +54,7 @@ impl ExecCommandSession {
)
}
pub fn writer_sender(&self) -> mpsc::Sender<Vec<u8>> {
pub(crate) fn writer_sender(&self) -> mpsc::Sender<Vec<u8>> {
self.writer_tx.clone()
}
@@ -64,7 +62,7 @@ impl ExecCommandSession {
self.output_tx.subscribe()
}
pub fn has_exited(&self) -> bool {
pub(crate) fn has_exited(&self) -> bool {
self.exit_status.load(std::sync::atomic::Ordering::SeqCst)
}
}

View File

@@ -0,0 +1,14 @@
mod exec_command_params;
mod exec_command_session;
mod responses_api;
mod session_id;
mod session_manager;
pub use exec_command_params::ExecCommandParams;
pub use exec_command_params::WriteStdinParams;
pub(crate) use exec_command_session::ExecCommandSession;
pub use responses_api::EXEC_COMMAND_TOOL_NAME;
pub use responses_api::WRITE_STDIN_TOOL_NAME;
pub use responses_api::create_exec_command_tool_for_responses_api;
pub use responses_api::create_write_stdin_tool_for_responses_api;
pub use session_manager::SessionManager as ExecSessionManager;

View File

@@ -3,11 +3,6 @@ use std::collections::BTreeMap;
use crate::openai_tools::JsonSchema;
use crate::openai_tools::ResponsesApiTool;
pub use codex_agent::exec_command::ExecCommandOutput;
pub use codex_agent::exec_command::ExecCommandParams;
pub use codex_agent::exec_command::ExecSessionManager;
pub use codex_agent::exec_command::WriteStdinParams;
pub const EXEC_COMMAND_TOOL_NAME: &str = "exec_command";
pub const WRITE_STDIN_TOOL_NAME: &str = "write_stdin";

View File

@@ -2,4 +2,4 @@ use serde::Deserialize;
use serde::Serialize;
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct SessionId(pub u32);
pub(crate) struct SessionId(pub u32);

View File

@@ -5,7 +5,6 @@ use std::sync::Arc;
use std::sync::Mutex as StdMutex;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::AtomicU32;
use std::vec::Vec;
use portable_pty::CommandBuilder;
use portable_pty::PtySize;
@@ -29,7 +28,6 @@ pub struct SessionManager {
sessions: Mutex<HashMap<SessionId, ExecCommandSession>>,
}
#[allow(dead_code)]
#[derive(Debug)]
pub struct ExecCommandOutput {
wall_time: Duration,
@@ -39,7 +37,7 @@ pub struct ExecCommandOutput {
}
impl ExecCommandOutput {
pub fn to_text_output(&self) -> String {
pub(crate) fn to_text_output(&self) -> String {
let wall_time_secs = self.wall_time.as_secs_f32();
let termination_status = match self.exit_status {
ExitStatus::Exited(code) => format!("Process exited with code {code}"),
@@ -63,7 +61,6 @@ Output:
}
}
#[allow(dead_code)]
#[derive(Debug)]
pub enum ExitStatus {
Exited(i32),
@@ -82,11 +79,7 @@ impl SessionManager {
.fetch_add(1, std::sync::atomic::Ordering::SeqCst),
);
let (session, mut output_rx, mut exit_rx): (
ExecCommandSession,
tokio::sync::broadcast::Receiver<Vec<u8>>,
tokio::sync::oneshot::Receiver<i32>,
) = create_exec_command_session(params.clone())
let (session, mut output_rx, mut exit_rx) = create_exec_command_session(params.clone())
.await
.map_err(|err| {
format!(

View File

@@ -1 +1,7 @@
pub use codex_agent::function_tool::*;
use thiserror::Error;
#[derive(Debug, Error, PartialEq)]
pub enum FunctionCallError {
#[error("{0}")]
RespondToModel(String),
}

View File

@@ -0,0 +1,355 @@
use std::fs as std_fs;
use std::path::Path;
use std::path::PathBuf;
use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use codex_protocol::mcp_protocol::ConversationId;
use tokio::fs;
use tokio::fs::OpenOptions;
use tokio::io::AsyncWriteExt;
use tokio::process::Command;
use tracing::info;
use tracing::warn;
/// Represents a linked git worktree managed by Codex.
///
/// The handle identifies the linked checkout created (or reused) for the
/// current conversation. It leaves the checkout in place until [`remove`]
/// is invoked.
pub struct WorktreeHandle {
repo_root: PathBuf,
path: PathBuf,
}
impl WorktreeHandle {
/// Create (or reuse) a worktree rooted at
/// `<repo_root>/codex/worktree/<conversation_id>`.
pub async fn create(repo_root: &Path, conversation_id: &ConversationId) -> Result<Self> {
if !repo_root.exists() {
return Err(anyhow!(
"git worktree root `{}` does not exist",
repo_root.display()
));
}
let repo_root = repo_root.to_path_buf();
let codex_dir = repo_root.join("codex");
let codex_worktree_dir = codex_dir.join("worktree");
fs::create_dir_all(&codex_worktree_dir)
.await
.with_context(|| {
format!(
"failed to create codex worktree directory at `{}`",
codex_worktree_dir.display()
)
})?;
let path = codex_worktree_dir.join(conversation_id.to_string());
let is_registered = worktree_registered(&repo_root, &path).await?;
if is_registered {
if path.exists() {
if let Err(err) = ensure_codex_excluded(&repo_root).await {
warn!("failed to add codex worktree path to git exclude: {err:#}");
}
info!(
worktree = %path.display(),
"reusing existing git worktree for conversation"
);
return Ok(Self { repo_root, path });
}
warn!(
worktree = %path.display(),
"git worktree is registered but missing on disk; pruning stale entry"
);
run_git_command(&repo_root, ["worktree", "prune", "--expire", "now"])
.await
.with_context(|| {
format!(
"failed to prune git worktrees while recovering `{}`",
path.display()
)
})?;
if worktree_registered(&repo_root, &path).await? {
return Err(anyhow!(
"git worktree `{}` is registered but missing on disk; run `git worktree prune --expire now` to remove the stale entry",
path.display()
));
}
info!(
worktree = %path.display(),
"recreating git worktree for conversation after pruning stale registration"
);
}
if path.exists() {
return Err(anyhow!(
"git worktree path `{}` already exists but is not registered; remove it manually",
path.display()
));
}
run_git_command(
&repo_root,
[
"worktree",
"add",
"--detach",
path.to_str().ok_or_else(|| {
anyhow!(
"failed to convert worktree path `{}` to UTF-8",
path.display()
)
})?,
"HEAD",
],
)
.await
.with_context(|| format!("failed to create git worktree at `{}`", path.display()))?;
if let Err(err) = ensure_codex_excluded(&repo_root).await {
warn!("failed to add codex worktree path to git exclude: {err:#}");
}
info!(
worktree = %path.display(),
"created git worktree for conversation"
);
Ok(Self { repo_root, path })
}
/// Absolute path to the worktree checkout on disk.
pub fn path(&self) -> &Path {
&self.path
}
/// Remove the worktree and prune metadata from the repository.
pub async fn remove(self) -> Result<()> {
let path = self.path.clone();
// `git worktree remove` fails if refs are missing or the checkout is dirty.
// Use `--force` for best-effort removal; the user explicitly requested it.
run_git_command(
&self.repo_root,
[
"worktree",
"remove",
"--force",
path.to_str().ok_or_else(|| {
anyhow!(
"failed to convert worktree path `{}` to UTF-8",
path.display()
)
})?,
],
)
.await
.with_context(|| format!("failed to remove git worktree `{}`", path.display()))?;
// Prune dangling metadata so repeated sessions do not accumulate entries.
if let Err(err) =
run_git_command(&self.repo_root, ["worktree", "prune", "--expire", "now"]).await
{
warn!("failed to prune git worktrees: {err:#}");
}
Ok(())
}
}
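// Illustrative usage (sketch, not part of the diff): a session creates or
// reuses its worktree, runs with `handle.path()` as the working directory,
// and removes it when the user requests cleanup.
//
//     let handle = WorktreeHandle::create(&repo_root, &conversation_id).await?;
//     let cwd = handle.path().to_path_buf();
//     // ... execute the conversation inside `cwd` ...
//     handle.remove().await?;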
async fn worktree_registered(repo_root: &Path, target: &Path) -> Result<bool> {
let output = run_git_command(repo_root, ["worktree", "list", "--porcelain"]).await?;
let stdout = String::from_utf8(output.stdout)?;
let target_canon = std_fs::canonicalize(target).unwrap_or_else(|_| target.to_path_buf());
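    // `--porcelain` output contains one block per worktree, e.g.:
    //     worktree /repo
    //     HEAD 0123456789abcdef...
    //     branch refs/heads/main
    // Only the `worktree ` lines matter here.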
for line in stdout.lines() {
if let Some(path) = line.strip_prefix("worktree ") {
let candidate = Path::new(path);
let candidate_canon =
std_fs::canonicalize(candidate).unwrap_or_else(|_| candidate.to_path_buf());
if candidate_canon == target_canon {
return Ok(true);
}
}
}
Ok(false)
}
async fn run_git_command<'a>(
repo_root: &Path,
args: impl IntoIterator<Item = &'a str>,
) -> Result<std::process::Output> {
let mut cmd = Command::new("git");
cmd.args(args);
cmd.current_dir(repo_root);
let output = cmd
.output()
.await
.with_context(|| format!("failed to execute git command in `{}`", repo_root.display()))?;
if !output.status.success() {
let stderr = String::from_utf8_lossy(&output.stderr).trim().to_string();
let status = output
.status
.code()
.map(|c| c.to_string())
.unwrap_or_else(|| "signal".to_string());
return Err(anyhow!("git command exited with status {status}: {stderr}"));
}
Ok(output)
}
async fn ensure_codex_excluded(repo_root: &Path) -> Result<()> {
const PATTERN: &str = "/codex/";
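    // `.git/info/exclude` acts like a repository-local `.gitignore` that is
    // never committed, so the `codex/` worktree directory stays invisible to
    // `git status` without touching tracked ignore files.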
let git_dir_out = run_git_command(repo_root, ["rev-parse", "--git-dir"]).await?;
let git_dir_str = String::from_utf8(git_dir_out.stdout)?.trim().to_string();
let git_dir_path = if Path::new(&git_dir_str).is_absolute() {
PathBuf::from(&git_dir_str)
} else {
repo_root.join(&git_dir_str)
};
let info_dir = git_dir_path.join("info");
fs::create_dir_all(&info_dir).await?;
let exclude_path = info_dir.join("exclude");
let existing_bytes = fs::read(&exclude_path).await.unwrap_or_default();
let existing = String::from_utf8(existing_bytes).unwrap_or_default();
if existing.lines().any(|line| line.trim() == PATTERN) {
return Ok(());
}
let mut file = OpenOptions::new()
.create(true)
.append(true)
.open(&exclude_path)
.await?;
if !existing.is_empty() && !existing.ends_with('\n') {
file.write_all(b"\n").await?;
}
file.write_all(PATTERN.as_bytes()).await?;
file.write_all(b"\n").await?;
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
const GIT_ENV: [(&str, &str); 2] = [
("GIT_CONFIG_GLOBAL", "/dev/null"),
("GIT_CONFIG_NOSYSTEM", "1"),
];
async fn init_repo() -> (TempDir, PathBuf) {
let temp = TempDir::new().expect("tempdir");
let repo_path = temp.path().join("repo");
fs::create_dir_all(&repo_path)
.await
.expect("create repo dir");
run_git_with_env(&repo_path, ["init"], &GIT_ENV)
.await
.expect("git init");
run_git_with_env(&repo_path, ["config", "user.name", "Test User"], &GIT_ENV)
.await
.expect("config user.name");
run_git_with_env(
&repo_path,
["config", "user.email", "test@example.com"],
&GIT_ENV,
)
.await
.expect("config user.email");
fs::write(repo_path.join("README.md"), b"hello world")
.await
.expect("write file");
run_git_with_env(&repo_path, ["add", "README.md"], &GIT_ENV)
.await
.expect("git add");
run_git_with_env(&repo_path, ["commit", "-m", "init"], &GIT_ENV)
.await
.expect("git commit");
(temp, repo_path)
}
async fn run_git_with_env<'a>(
cwd: &Path,
args: impl IntoIterator<Item = &'a str>,
envs: &[(&str, &str)],
) -> Result<()> {
let mut cmd = Command::new("git");
cmd.args(args);
cmd.current_dir(cwd);
for (key, value) in envs {
cmd.env(key, value);
}
let status = cmd.status().await.context("failed to spawn git command")?;
if !status.success() {
return Err(anyhow!(
"git command exited with status {status} (cwd: {})",
cwd.display()
));
}
Ok(())
}
async fn is_registered(repo_root: &Path, path: &Path) -> bool {
worktree_registered(repo_root, path).await.unwrap_or(false)
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn creates_and_removes_worktree() {
let (_temp, repo) = init_repo().await;
let conversation_id = ConversationId::new();
let handle = WorktreeHandle::create(&repo, &conversation_id)
.await
.expect("create worktree");
let path = handle.path().to_path_buf();
assert!(path.exists(), "worktree path should exist on disk");
assert!(
is_registered(&repo, &path).await,
"worktree should be registered"
);
handle.remove().await.expect("remove worktree");
assert!(
!is_registered(&repo, &path).await,
"worktree should be removed from registration"
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn reuses_existing_worktree() {
let (_temp, repo) = init_repo().await;
let conversation_id = ConversationId::new();
let first = WorktreeHandle::create(&repo, &conversation_id)
.await
.expect("create worktree");
let path = first.path().to_path_buf();
drop(first);
let second = WorktreeHandle::create(&repo, &conversation_id)
.await
.expect("reuse worktree");
assert_eq!(path, second.path());
assert!(is_registered(&repo, second.path()).await);
second.remove().await.expect("remove worktree");
}
}

View File

@@ -1,89 +0,0 @@
use anyhow::Context;
use serde::Deserialize;
use serde::Serialize;
use std::io::ErrorKind;
use std::path::Path;
use std::path::PathBuf;
pub(crate) const INTERNAL_STORAGE_FILE: &str = "internal_storage.json";
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct InternalStorage {
#[serde(skip)]
storage_path: PathBuf,
#[serde(default = "default_gpt_5_codex_model_prompt_seen")]
pub gpt_5_codex_model_prompt_seen: bool,
}
const fn default_gpt_5_codex_model_prompt_seen() -> bool {
true
}
impl Default for InternalStorage {
fn default() -> Self {
Self {
storage_path: PathBuf::new(),
gpt_5_codex_model_prompt_seen: default_gpt_5_codex_model_prompt_seen(),
}
}
}
// TODO(jif) generalise all the file writers and build proper async channel inserters.
impl InternalStorage {
pub fn load(codex_home: &Path) -> Self {
let storage_path = codex_home.join(INTERNAL_STORAGE_FILE);
match std::fs::read_to_string(&storage_path) {
Ok(serialized) => match serde_json::from_str::<Self>(&serialized) {
Ok(mut storage) => {
storage.storage_path = storage_path;
storage
}
Err(error) => {
tracing::warn!("failed to parse internal storage: {error:?}");
Self::empty(storage_path)
}
},
Err(error) => {
if error.kind() == ErrorKind::NotFound {
tracing::debug!(
"internal storage not found at {}; initializing defaults",
storage_path.display()
);
} else {
tracing::warn!("failed to read internal storage: {error:?}");
}
Self::empty(storage_path)
}
}
}
fn empty(storage_path: PathBuf) -> Self {
Self {
storage_path,
..Default::default()
}
}
pub async fn persist(&self) -> anyhow::Result<()> {
let serialized = serde_json::to_string_pretty(self)?;
if let Some(parent) = self.storage_path.parent() {
tokio::fs::create_dir_all(parent).await.with_context(|| {
format!(
"failed to create internal storage directory at {}",
parent.display()
)
})?;
}
tokio::fs::write(&self.storage_path, serialized)
.await
.with_context(|| {
format!(
"failed to persist internal storage at {}",
self.storage_path.display()
)
})
}
}

View File

@@ -5,13 +5,9 @@
// the TUI or the tracing stack).
#![deny(clippy::print_stdout, clippy::print_stderr)]
pub mod agent_config;
pub mod agent_services;
mod apply_patch;
pub mod auth;
// `bash` helpers now live in `codex-agent`; re-export the module for
// backwards compatibility with existing imports.
pub use codex_agent::bash;
pub use codex_agent::command_safety;
pub mod bash;
mod chat_completions;
mod client;
mod client_common;
@@ -19,6 +15,7 @@ pub mod codex;
mod codex_conversation;
pub mod token_data;
pub use codex_conversation::CodexConversation;
mod command_safety;
pub mod config;
pub mod config_edit;
pub mod config_profile;
@@ -32,7 +29,7 @@ mod exec_command;
pub mod exec_env;
mod flags;
pub mod git_info;
pub mod internal_storage;
mod git_worktree;
pub mod landlock;
mod mcp_connection_manager;
mod mcp_tool_call;
@@ -43,6 +40,8 @@ mod truncate;
mod unified_exec;
mod user_instructions;
pub use model_provider_info::BUILT_IN_OSS_MODEL_PROVIDER_ID;
pub use model_provider_info::ModelProviderInfo;
pub use model_provider_info::WireApi;
pub use model_provider_info::built_in_model_providers;
pub use model_provider_info::create_oss_provider_with_base_url;
mod conversation_manager;
@@ -61,30 +60,30 @@ mod openai_tools;
pub mod plan_tool;
pub mod project_doc;
mod rollout;
pub mod sandbox;
pub(crate) mod safety;
pub mod seatbelt;
pub mod shell;
pub mod spawn;
pub mod terminal;
mod tool_apply_patch;
pub mod turn_diff_tracker;
pub use codex_protocol::protocol::SessionMeta;
pub use rollout::ARCHIVED_SESSIONS_SUBDIR;
pub use rollout::RolloutRecorder;
pub use rollout::SESSIONS_SUBDIR;
pub use rollout::SessionMeta;
pub use rollout::find_conversation_path_by_id_str;
pub use rollout::list::ConversationItem;
pub use rollout::list::ConversationsPage;
pub use rollout::list::Cursor;
pub use rollout::list::find_conversation_path_by_id_str;
mod function_tool;
mod state;
mod tasks;
mod user_notification;
pub mod util;
pub use codex_agent::apply_patch::CODEX_APPLY_PATCH_ARG1;
pub use codex_agent::command_safety::is_safe_command;
pub use codex_agent::safety::get_platform_sandbox;
pub use apply_patch::CODEX_APPLY_PATCH_ARG1;
pub use command_safety::is_safe_command;
pub use safety::get_platform_sandbox;
// Re-export the protocol types from the standalone `codex-protocol` crate so existing
// `codex_core::protocol::...` references continue to work across the workspace.
pub use codex_protocol::protocol;
@@ -92,8 +91,6 @@ pub use codex_protocol::protocol;
// as those in the protocol crate when constructing protocol messages.
pub use codex_protocol::config_types as protocol_config_types;
pub use agent_config::AgentConfig;
pub use agent_services::DefaultSandboxManager;
pub use client::ModelClient;
pub use client_common::Prompt;
pub use client_common::REVIEW_PROMPT;
@@ -101,14 +98,6 @@ pub use client_common::ResponseEvent;
pub use client_common::ResponseStream;
pub use codex::compact::content_items_to_text;
pub use codex::compact::is_session_prefix_message;
pub use codex_agent::ModelProviderInfo;
pub use codex_agent::WireApi;
pub use codex_agent::services::CredentialsProvider;
pub use codex_agent::services::McpInterface;
pub use codex_agent::services::Notifier;
pub use codex_agent::services::ProviderAuth;
pub use codex_agent::services::RolloutSink;
pub use codex_agent::services::SandboxManager;
pub use codex_protocol::models::ContentItem;
pub use codex_protocol::models::LocalShellAction;
pub use codex_protocol::models::LocalShellExecAction;

View File

@@ -16,6 +16,7 @@ use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use codex_mcp_client::McpClient;
use codex_rmcp_client::RmcpClient;
use mcp_types::ClientCapabilities;
use mcp_types::Implementation;
use mcp_types::Tool;
@@ -28,6 +29,7 @@ use tracing::info;
use tracing::warn;
use crate::config_types::McpServerConfig;
use crate::config_types::McpServerTransportConfig;
/// Delimiter used to separate the server name from the tool name in a fully
/// qualified tool name.
@@ -86,11 +88,75 @@ struct ToolInfo {
}
struct ManagedClient {
client: Arc<McpClient>,
client: McpClientAdapter,
startup_timeout: Duration,
tool_timeout: Option<Duration>,
}
#[derive(Clone)]
enum McpClientAdapter {
Legacy(Arc<McpClient>),
Rmcp(Arc<RmcpClient>),
}
impl McpClientAdapter {
async fn new_stdio_client(
use_rmcp_client: bool,
program: OsString,
args: Vec<OsString>,
env: Option<HashMap<String, String>>,
params: mcp_types::InitializeRequestParams,
startup_timeout: Duration,
) -> Result<Self> {
info!(
"new_stdio_client use_rmcp_client: {use_rmcp_client} program: {program:?} args: {args:?} env: {env:?} params: {params:?} startup_timeout: {startup_timeout:?}"
);
if use_rmcp_client {
let client = Arc::new(RmcpClient::new_stdio_client(program, args, env).await?);
client.initialize(params, Some(startup_timeout)).await?;
Ok(McpClientAdapter::Rmcp(client))
} else {
let client = Arc::new(McpClient::new_stdio_client(program, args, env).await?);
client.initialize(params, Some(startup_timeout)).await?;
Ok(McpClientAdapter::Legacy(client))
}
}
async fn new_streamable_http_client(
url: String,
bearer_token: Option<String>,
params: mcp_types::InitializeRequestParams,
startup_timeout: Duration,
) -> Result<Self> {
let client = Arc::new(RmcpClient::new_streamable_http_client(url, bearer_token)?);
client.initialize(params, Some(startup_timeout)).await?;
Ok(McpClientAdapter::Rmcp(client))
}
async fn list_tools(
&self,
params: Option<mcp_types::ListToolsRequestParams>,
timeout: Option<Duration>,
) -> Result<mcp_types::ListToolsResult> {
match self {
McpClientAdapter::Legacy(client) => client.list_tools(params, timeout).await,
McpClientAdapter::Rmcp(client) => client.list_tools(params, timeout).await,
}
}
async fn call_tool(
&self,
name: String,
arguments: Option<serde_json::Value>,
timeout: Option<Duration>,
) -> Result<mcp_types::CallToolResult> {
match self {
McpClientAdapter::Legacy(client) => client.call_tool(name, arguments, timeout).await,
McpClientAdapter::Rmcp(client) => client.call_tool(name, arguments, timeout).await,
}
}
}
/// A thin wrapper around a set of running [`McpClient`] instances.
#[derive(Default)]
pub(crate) struct McpConnectionManager {
@@ -115,6 +181,7 @@ impl McpConnectionManager {
/// user should be informed about these errors.
pub async fn new(
mcp_servers: HashMap<String, McpServerConfig>,
use_rmcp_client: bool,
) -> Result<(Self, ClientStartErrors)> {
// Early exit if no servers are configured.
if mcp_servers.is_empty() {
@@ -136,58 +203,72 @@ impl McpConnectionManager {
continue;
}
let startup_timeout = cfg.startup_timeout_sec.unwrap_or(DEFAULT_STARTUP_TIMEOUT);
if matches!(
cfg.transport,
McpServerTransportConfig::StreamableHttp { .. }
) && !use_rmcp_client
{
info!(
"skipping MCP server `{}` configured with url because rmcp client is disabled",
server_name
);
continue;
}
let startup_timeout = cfg.startup_timeout_sec.unwrap_or(DEFAULT_STARTUP_TIMEOUT);
let tool_timeout = cfg.tool_timeout_sec.unwrap_or(DEFAULT_TOOL_TIMEOUT);
let use_rmcp_client_flag = use_rmcp_client;
join_set.spawn(async move {
let McpServerConfig {
command, args, env, ..
} = cfg;
let client_res = McpClient::new_stdio_client(
command.into(),
args.into_iter().map(OsString::from).collect(),
env,
)
.await;
match client_res {
Ok(client) => {
// Initialize the client.
let params = mcp_types::InitializeRequestParams {
capabilities: ClientCapabilities {
experimental: None,
roots: None,
sampling: None,
// https://modelcontextprotocol.io/specification/2025-06-18/client/elicitation#capabilities
// indicates this should be an empty object.
elicitation: Some(json!({})),
},
client_info: Implementation {
name: "codex-mcp-client".to_owned(),
version: env!("CARGO_PKG_VERSION").to_owned(),
title: Some("Codex".into()),
// This field is used by Codex when it is an MCP
// server: it should not be used when Codex is
// an MCP client.
user_agent: None,
},
protocol_version: mcp_types::MCP_SCHEMA_VERSION.to_owned(),
};
let initialize_notification_params = None;
let init_result = client
.initialize(
params,
initialize_notification_params,
Some(startup_timeout),
)
.await;
(
(server_name, tool_timeout),
init_result.map(|_| (client, startup_timeout)),
let McpServerConfig { transport, .. } = cfg;
let params = mcp_types::InitializeRequestParams {
capabilities: ClientCapabilities {
experimental: None,
roots: None,
sampling: None,
// https://modelcontextprotocol.io/specification/2025-06-18/client/elicitation#capabilities
// indicates this should be an empty object.
elicitation: Some(json!({})),
},
client_info: Implementation {
name: "codex-mcp-client".to_owned(),
version: env!("CARGO_PKG_VERSION").to_owned(),
title: Some("Codex".into()),
// This field is used by Codex when it is an MCP
// server: it should not be used when Codex is
// an MCP client.
user_agent: None,
},
protocol_version: mcp_types::MCP_SCHEMA_VERSION.to_owned(),
};
let client = match transport {
McpServerTransportConfig::Stdio { command, args, env } => {
let command_os: OsString = command.into();
let args_os: Vec<OsString> = args.into_iter().map(Into::into).collect();
McpClientAdapter::new_stdio_client(
use_rmcp_client_flag,
command_os,
args_os,
env,
params.clone(),
startup_timeout,
)
.await
}
McpServerTransportConfig::StreamableHttp { url, bearer_token } => {
McpClientAdapter::new_streamable_http_client(
url,
bearer_token,
params,
startup_timeout,
)
.await
}
Err(e) => ((server_name, tool_timeout), Err(e.into())),
}
.map(|c| (c, startup_timeout));
((server_name, tool_timeout), client)
});
}
@@ -207,7 +288,7 @@ impl McpConnectionManager {
clients.insert(
server_name,
ManagedClient {
client: Arc::new(client),
client,
startup_timeout,
tool_timeout: Some(tool_timeout),
},

View File

@@ -27,7 +27,7 @@ use std::time::Duration;
use tokio::fs;
use tokio::io::AsyncReadExt;
use crate::agent_config::AgentConfig;
use crate::config::Config;
use crate::config_types::HistoryPersistence;
use codex_protocol::mcp_protocol::ConversationId;
@@ -49,7 +49,7 @@ pub struct HistoryEntry {
pub text: String,
}
fn history_filepath(config: &AgentConfig) -> PathBuf {
fn history_filepath(config: &Config) -> PathBuf {
let mut path = config.codex_home.clone();
path.push(HISTORY_FILENAME);
path
@@ -61,7 +61,7 @@ fn history_filepath(config: &AgentConfig) -> PathBuf {
pub(crate) async fn append_entry(
text: &str,
conversation_id: &ConversationId,
config: &AgentConfig,
config: &Config,
) -> Result<()> {
match config.history.persistence {
HistoryPersistence::SaveAll => {
@@ -140,7 +140,7 @@ pub(crate) async fn append_entry(
/// Asynchronously fetch the history file's *identifier* (inode on Unix) and
/// the current number of entries by counting newline characters.
pub(crate) async fn history_metadata(config: &AgentConfig) -> (u64, usize) {
pub(crate) async fn history_metadata(config: &Config) -> (u64, usize) {
let path = history_filepath(config);
#[cfg(unix)]
@@ -187,7 +187,7 @@ pub(crate) async fn history_metadata(config: &AgentConfig) -> (u64, usize) {
/// Note this function is not async because it uses a sync advisory file
/// locking API.
#[cfg(unix)]
pub(crate) fn lookup(log_id: u64, offset: usize, config: &AgentConfig) -> Option<HistoryEntry> {
pub(crate) fn lookup(log_id: u64, offset: usize, config: &Config) -> Option<HistoryEntry> {
use std::io::BufRead;
use std::io::BufReader;
use std::os::unix::fs::MetadataExt;

View File

@@ -1,12 +1,48 @@
use crate::config_types::ReasoningSummaryFormat;
use codex_agent::ApplyPatchToolType;
pub use codex_agent::ModelFamily;
use crate::tool_apply_patch::ApplyPatchToolType;
/// The `instructions` field in the payload sent to a model should always start
/// with this content.
const BASE_INSTRUCTIONS: &str = include_str!("../prompt.md");
const GPT_5_CODEX_INSTRUCTIONS: &str = include_str!("../gpt_5_codex_prompt.md");
/// A model family is a group of models that share certain characteristics.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ModelFamily {
/// The full model slug used to derive this model family, e.g.
/// "gpt-4.1-2025-04-14".
pub slug: String,
/// The model family name, e.g. "gpt-4.1". Note this should be usable
/// with [`crate::openai_model_info::get_model_info`].
pub family: String,
/// True if the model needs additional instructions on how to use the
/// "virtual" `apply_patch` CLI.
pub needs_special_apply_patch_instructions: bool,
// Whether the `reasoning` field can be set when making a request to this
// model family. Note it has `effort` and `summary` subfields (though
// `summary` is optional).
pub supports_reasoning_summaries: bool,
// Defines whether reasoning summaries need special handling.
pub reasoning_summary_format: ReasoningSummaryFormat,
// This should be set to true when the model expects a tool named
// "local_shell" to be provided. Its contract must be understood natively by
// the model such that its description can be omitted.
// See https://platform.openai.com/docs/guides/tools-local-shell
pub uses_local_shell_tool: bool,
/// Present if the model performs better when `apply_patch` is provided as
/// a tool call instead of just a bash command
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
// Instructions to use for querying the model
pub base_instructions: String,
}
macro_rules! model_family {
(
$slug:expr, $family:expr $(, $key:ident : $value:expr )* $(,)?

View File

@@ -5,19 +5,15 @@
//! 2. User-defined entries inside `~/.codex/config.toml` under the `model_providers`
//! key. These override or extend the defaults at runtime.
use async_trait::async_trait;
pub use codex_agent::ModelProviderInfo;
pub use codex_agent::WireApi;
use crate::CodexAuth;
use codex_protocol::mcp_protocol::AuthMode;
use serde::Deserialize;
use serde::Serialize;
use std::collections::HashMap;
use std::env::VarError;
use std::sync::Arc;
use std::time::Duration;
use crate::CodexAuth;
use crate::ProviderAuth;
use crate::error::EnvVarError;
const DEFAULT_STREAM_IDLE_TIMEOUT_MS: u64 = 300_000;
const DEFAULT_STREAM_MAX_RETRIES: u64 = 5;
const DEFAULT_REQUEST_MAX_RETRIES: u64 = 4;
@@ -26,58 +22,135 @@ const MAX_STREAM_MAX_RETRIES: u64 = 100;
/// Hard cap for user-configured `request_max_retries`.
const MAX_REQUEST_MAX_RETRIES: u64 = 100;
#[async_trait]
pub trait ModelProviderExt {
async fn create_request_builder(
&self,
client: &reqwest::Client,
auth: &Option<Arc<dyn ProviderAuth>>,
) -> crate::error::Result<reqwest::RequestBuilder>;
/// Wire protocol that the provider speaks. Most third-party services only
/// implement the classic OpenAI Chat Completions JSON schema, whereas OpenAI
/// itself (and a handful of others) additionally expose the more modern
/// *Responses* API. The two protocols use different request/response shapes
/// and *cannot* be auto-detected at runtime, therefore each provider entry
/// must declare which one it expects.
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum WireApi {
/// The Responses API exposed by OpenAI at `/v1/responses`.
Responses,
fn get_full_url(&self, auth: &Option<Arc<dyn ProviderAuth>>) -> String;
fn is_azure_responses_endpoint(&self) -> bool;
fn apply_http_headers(&self, builder: reqwest::RequestBuilder) -> reqwest::RequestBuilder;
fn api_key(&self) -> crate::error::Result<Option<String>>;
fn request_max_retries(&self) -> u64;
fn stream_max_retries(&self) -> u64;
fn stream_idle_timeout(&self) -> Duration;
/// Regular Chat Completions compatible with `/v1/chat/completions`.
#[default]
Chat,
}
#[async_trait]
impl ModelProviderExt for ModelProviderInfo {
async fn create_request_builder(
&self,
client: &reqwest::Client,
auth: &Option<Arc<dyn ProviderAuth>>,
/// Serializable representation of a provider definition.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
pub struct ModelProviderInfo {
/// Friendly display name.
pub name: String,
/// Base URL for the provider's OpenAI-compatible API.
pub base_url: Option<String>,
/// Environment variable that stores the user's API key for this provider.
pub env_key: Option<String>,
/// Optional instructions to help the user get a valid value for the
/// variable and set it.
pub env_key_instructions: Option<String>,
/// Which wire protocol this provider expects.
#[serde(default)]
pub wire_api: WireApi,
/// Optional query parameters to append to the base URL.
pub query_params: Option<HashMap<String, String>>,
/// Additional HTTP headers to include in requests to this provider where
/// the (key, value) pairs are the header name and value.
pub http_headers: Option<HashMap<String, String>>,
/// Optional HTTP headers to include in requests to this provider where the
/// (key, value) pairs are the header name and _environment variable_ whose
/// value should be used. If the environment variable is not set, or the
/// value is empty, the header will not be included in the request.
pub env_http_headers: Option<HashMap<String, String>>,
/// Maximum number of times to retry a failed HTTP request to this provider.
pub request_max_retries: Option<u64>,
/// Number of times to retry reconnecting a dropped streaming response before failing.
pub stream_max_retries: Option<u64>,
/// Idle timeout (in milliseconds) to wait for activity on a streaming response before treating
/// the connection as lost.
pub stream_idle_timeout_ms: Option<u64>,
/// Does this provider require an OpenAI API Key or ChatGPT login token? If true,
/// user is presented with login screen on first run, and login preference and token/key
/// are stored in auth.json. If false (which is the default), login screen is skipped,
/// and API key (if needed) comes from the "env_key" environment variable.
#[serde(default)]
pub requires_openai_auth: bool,
}
impl ModelProviderInfo {
/// Construct a `POST` RequestBuilder for the given URL using the provided
/// reqwest Client applying:
/// • provider-specific headers (static + env based)
/// • Bearer auth header when an API key is available.
/// • Auth token for OAuth.
///
/// If the provider declares an `env_key` but the variable is missing/empty, returns an [`Err`] identical to the
/// one produced by [`ModelProviderInfo::api_key`].
pub async fn create_request_builder<'a>(
&'a self,
client: &'a reqwest::Client,
auth: &Option<CodexAuth>,
) -> crate::error::Result<reqwest::RequestBuilder> {
let effective_auth: Option<Arc<dyn ProviderAuth>> = match self.api_key()? {
Some(key) => Some(Arc::new(CodexAuth::from_api_key(&key))),
None => auth.clone(),
let effective_auth = match self.api_key() {
Ok(Some(key)) => Some(CodexAuth::from_api_key(&key)),
Ok(None) => auth.clone(),
Err(err) => {
if auth.is_some() {
auth.clone()
} else {
return Err(err);
}
}
};
let url = self.get_full_url(&effective_auth);
let mut builder = client.post(url);
if let Some(auth) = effective_auth.as_ref() {
builder = builder.bearer_auth(auth.access_token().await?);
builder = builder.bearer_auth(auth.get_token().await?);
}
Ok(self.apply_http_headers(builder))
}
fn get_full_url(&self, auth: &Option<Arc<dyn ProviderAuth>>) -> String {
let default_base_url = if auth.as_ref().map(|a| a.mode()) == Some(AuthMode::ChatGPT) {
fn get_query_string(&self) -> String {
self.query_params
.as_ref()
.map_or_else(String::new, |params| {
let full_params = params
.iter()
.map(|(k, v)| format!("{k}={v}"))
.collect::<Vec<_>>()
.join("&");
format!("?{full_params}")
})
}
pub(crate) fn get_full_url(&self, auth: &Option<CodexAuth>) -> String {
let default_base_url = if matches!(
auth,
Some(CodexAuth {
mode: AuthMode::ChatGPT,
..
})
) {
"https://chatgpt.com/backend-api/codex"
} else {
"https://api.openai.com/v1"
};
let query_string = get_query_string(self);
let query_string = self.get_query_string();
let base_url = self
.base_url
.clone()
@@ -89,7 +162,7 @@ impl ModelProviderExt for ModelProviderInfo {
}
}
fn is_azure_responses_endpoint(&self) -> bool {
pub(crate) fn is_azure_responses_endpoint(&self) -> bool {
if self.wire_api != WireApi::Responses {
return false;
}
@@ -104,6 +177,9 @@ impl ModelProviderExt for ModelProviderInfo {
.unwrap_or(false)
}
/// Apply provider-specific HTTP headers (both static and environment-based)
/// onto an existing `reqwest::RequestBuilder` and return the updated
/// builder.
fn apply_http_headers(&self, mut builder: reqwest::RequestBuilder) -> reqwest::RequestBuilder {
if let Some(extra) = &self.http_headers {
for (k, v) in extra {
@@ -123,7 +199,10 @@ impl ModelProviderExt for ModelProviderInfo {
builder
}
fn api_key(&self) -> crate::error::Result<Option<String>> {
/// If `env_key` is Some, returns the API key for this provider if present
/// (and non-empty) in the environment. If `env_key` is required but
/// cannot be found, returns an error.
pub fn api_key(&self) -> crate::error::Result<Option<String>> {
match &self.env_key {
Some(env_key) => {
let env_value = std::env::var(env_key);
@@ -146,39 +225,28 @@ impl ModelProviderExt for ModelProviderInfo {
}
}
fn request_max_retries(&self) -> u64 {
/// Effective maximum number of request retries for this provider.
pub fn request_max_retries(&self) -> u64 {
self.request_max_retries
.unwrap_or(DEFAULT_REQUEST_MAX_RETRIES)
.min(MAX_REQUEST_MAX_RETRIES)
}
fn stream_max_retries(&self) -> u64 {
/// Effective maximum number of stream reconnection attempts for this provider.
pub fn stream_max_retries(&self) -> u64 {
self.stream_max_retries
.unwrap_or(DEFAULT_STREAM_MAX_RETRIES)
.min(MAX_STREAM_MAX_RETRIES)
}
fn stream_idle_timeout(&self) -> Duration {
/// Effective idle timeout for streaming responses.
pub fn stream_idle_timeout(&self) -> Duration {
self.stream_idle_timeout_ms
.map(Duration::from_millis)
.unwrap_or(Duration::from_millis(DEFAULT_STREAM_IDLE_TIMEOUT_MS))
}
}
fn get_query_string(provider: &ModelProviderInfo) -> String {
provider
.query_params
.as_ref()
.map_or_else(String::new, |params| {
let full_params = params
.iter()
.map(|(k, v)| format!("{k}={v}"))
.collect::<Vec<_>>()
.join("&");
format!("?{full_params}")
})
}
const DEFAULT_OLLAMA_PORT: u32 = 11434;
pub const BUILT_IN_OSS_MODEL_PROVIDER_ID: &str = "oss";
@@ -187,11 +255,20 @@ pub const BUILT_IN_OSS_MODEL_PROVIDER_ID: &str = "oss";
pub fn built_in_model_providers() -> HashMap<String, ModelProviderInfo> {
use ModelProviderInfo as P;
// We do not want to be in the business of adjudicating which third-party
// providers are bundled with Codex CLI, so we only include the OpenAI and
// open source ("oss") providers by default. Users are encouraged to extend
// `model_providers` in config.toml with their own providers.
[
(
"openai",
P {
name: "OpenAI".into(),
// Allow users to override the default OpenAI endpoint by
// exporting `OPENAI_BASE_URL`. This is useful when pointing
// Codex at a proxy, mock server, or Azure-style deployment
// without requiring a full TOML override for the built-in
// OpenAI provider.
base_url: std::env::var("OPENAI_BASE_URL")
.ok()
.filter(|v| !v.trim().is_empty()),
@@ -215,6 +292,7 @@ pub fn built_in_model_providers() -> HashMap<String, ModelProviderInfo> {
.into_iter()
.collect(),
),
// Use global defaults for retry/timeout unless overridden in config.toml.
request_max_retries: None,
stream_max_retries: None,
stream_idle_timeout_ms: None,
@@ -229,6 +307,8 @@ pub fn built_in_model_providers() -> HashMap<String, ModelProviderInfo> {
}
pub fn create_oss_provider() -> ModelProviderInfo {
// These CODEX_OSS_ environment variables are experimental: we may
// switch to reading values from config.toml instead.
let codex_oss_base_url = match std::env::var("CODEX_OSS_BASE_URL")
.ok()
.filter(|v| !v.trim().is_empty())
@@ -281,11 +361,130 @@ mod tests {
use super::*;
use pretty_assertions::assert_eq;
#[tokio::test]
async fn creates_request_builder_with_auth() {
let provider = ModelProviderInfo {
name: "openai".to_string(),
base_url: None,
#[test]
fn test_deserialize_ollama_model_provider_toml() {
let ollama_provider_toml = r#"
name = "Ollama"
base_url = "http://localhost:11434/v1"
"#;
let expected_provider = ModelProviderInfo {
name: "Ollama".into(),
base_url: Some("http://localhost:11434/v1".into()),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Chat,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: None,
stream_max_retries: None,
stream_idle_timeout_ms: None,
requires_openai_auth: false,
};
let provider: ModelProviderInfo = toml::from_str(ollama_provider_toml).unwrap();
assert_eq!(expected_provider, provider);
}
#[test]
fn test_deserialize_azure_model_provider_toml() {
let azure_provider_toml = r#"
name = "Azure"
base_url = "https://xxxxx.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-04-01-preview" }
"#;
let expected_provider = ModelProviderInfo {
name: "Azure".into(),
base_url: Some("https://xxxxx.openai.azure.com/openai".into()),
env_key: Some("AZURE_OPENAI_API_KEY".into()),
env_key_instructions: None,
wire_api: WireApi::Chat,
query_params: Some(maplit::hashmap! {
"api-version".to_string() => "2025-04-01-preview".to_string(),
}),
http_headers: None,
env_http_headers: None,
request_max_retries: None,
stream_max_retries: None,
stream_idle_timeout_ms: None,
requires_openai_auth: false,
};
let provider: ModelProviderInfo = toml::from_str(azure_provider_toml).unwrap();
assert_eq!(expected_provider, provider);
}
#[test]
fn test_deserialize_example_model_provider_toml() {
let example_provider_toml = r#"
name = "Example"
base_url = "https://example.com"
env_key = "API_KEY"
http_headers = { "X-Example-Header" = "example-value" }
env_http_headers = { "X-Example-Env-Header" = "EXAMPLE_ENV_VAR" }
"#;
let expected_provider = ModelProviderInfo {
name: "Example".into(),
base_url: Some("https://example.com".into()),
env_key: Some("API_KEY".into()),
env_key_instructions: None,
wire_api: WireApi::Chat,
query_params: None,
http_headers: Some(maplit::hashmap! {
"X-Example-Header".to_string() => "example-value".to_string(),
}),
env_http_headers: Some(maplit::hashmap! {
"X-Example-Env-Header".to_string() => "EXAMPLE_ENV_VAR".to_string(),
}),
request_max_retries: None,
stream_max_retries: None,
stream_idle_timeout_ms: None,
requires_openai_auth: false,
};
let provider: ModelProviderInfo = toml::from_str(example_provider_toml).unwrap();
assert_eq!(expected_provider, provider);
}
#[test]
fn detects_azure_responses_base_urls() {
fn provider_for(base_url: &str) -> ModelProviderInfo {
ModelProviderInfo {
name: "test".into(),
base_url: Some(base_url.into()),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Responses,
query_params: None,
http_headers: None,
env_http_headers: None,
request_max_retries: None,
stream_max_retries: None,
stream_idle_timeout_ms: None,
requires_openai_auth: false,
}
}
let positive_cases = [
"https://foo.openai.azure.com/openai",
"https://foo.openai.azure.us/openai/deployments/bar",
"https://foo.cognitiveservices.azure.cn/openai",
"https://foo.aoai.azure.com/openai",
"https://foo.openai.azure-api.net/openai",
"https://foo.z01.azurefd.net/",
];
for base_url in positive_cases {
let provider = provider_for(base_url);
assert!(
provider.is_azure_responses_endpoint(),
"expected {base_url} to be detected as Azure"
);
}
let named_provider = ModelProviderInfo {
name: "Azure".into(),
base_url: Some("https://example.com".into()),
env_key: None,
env_key_instructions: None,
wire_api: WireApi::Responses,
@@ -295,33 +494,21 @@ mod tests {
request_max_retries: None,
stream_max_retries: None,
stream_idle_timeout_ms: None,
requires_openai_auth: true,
requires_openai_auth: false,
};
let client = reqwest::Client::new();
let auth =
Some(Arc::new(CodexAuth::create_dummy_chatgpt_auth_for_testing())
as Arc<dyn ProviderAuth>);
assert!(named_provider.is_azure_responses_endpoint());
let builder = provider
.create_request_builder(&client, &auth)
.await
.expect("builder");
let request = builder.build().expect("request");
assert_eq!(request.method(), reqwest::Method::POST);
assert_eq!(
request.url().as_str(),
"https://chatgpt.com/backend-api/codex/responses"
);
}
#[test]
fn azure_detection() {
let mut provider = create_oss_provider();
assert!(!provider.is_azure_responses_endpoint());
provider.name = "azure".to_string();
provider.wire_api = WireApi::Responses;
assert!(provider.is_azure_responses_endpoint());
let negative_cases = [
"https://api.openai.com/v1",
"https://example.com/openai",
"https://myproxy.azurewebsites.net/openai",
];
for base_url in negative_cases {
let provider = provider_for(base_url);
assert!(
!provider.is_azure_responses_endpoint(),
"expected {base_url} not to be detected as Azure"
);
}
}
}

View File

@@ -12,7 +12,7 @@
//! that order.
//! 3. We do **not** walk past the Git root.
use crate::agent_config::AgentConfig;
use crate::config::Config;
use std::path::PathBuf;
use tokio::io::AsyncReadExt;
use tracing::error;
@@ -26,7 +26,7 @@ const PROJECT_DOC_SEPARATOR: &str = "\n\n--- project-doc ---\n\n";
/// Combines `Config::instructions` and `AGENTS.md` (if present) into a single
/// string of instructions.
pub(crate) async fn get_user_instructions(config: &AgentConfig) -> Option<String> {
pub(crate) async fn get_user_instructions(config: &Config) -> Option<String> {
match read_project_docs(config).await {
Ok(Some(project_doc)) => match &config.user_instructions {
Some(original_instructions) => Some(format!(
@@ -48,7 +48,7 @@ pub(crate) async fn get_user_instructions(config: &AgentConfig) -> Option<String
/// concatenation of all discovered docs. If no documentation file is found the
/// function returns `Ok(None)`. Unexpected I/O failures bubble up as `Err` so
/// callers can decide how to handle them.
pub async fn read_project_docs(config: &AgentConfig) -> std::io::Result<Option<String>> {
pub async fn read_project_docs(config: &Config) -> std::io::Result<Option<String>> {
let max_total = config.project_doc_max_bytes;
if max_total == 0 {
@@ -106,7 +106,7 @@ pub async fn read_project_docs(config: &AgentConfig) -> std::io::Result<Option<S
/// contents. The list is ordered from repository root to the current working
/// directory (inclusive). Symlinks are allowed. When `project_doc_max_bytes`
/// is zero, returns an empty list.
pub fn discover_project_doc_paths(config: &AgentConfig) -> std::io::Result<Vec<PathBuf>> {
pub fn discover_project_doc_paths(config: &Config) -> std::io::Result<Vec<PathBuf>> {
let mut dir = config.cwd.clone();
if let Ok(canon) = dir.canonicalize() {
dir = canon;
@@ -176,7 +176,6 @@ pub fn discover_project_doc_paths(config: &AgentConfig) -> std::io::Result<Vec<P
#[cfg(test)]
mod tests {
use super::*;
use crate::config::Config;
use crate::config::ConfigOverrides;
use crate::config::ConfigToml;
use std::fs;
@@ -187,7 +186,7 @@ mod tests {
/// optionally specify a custom `instructions` string; when `None`, the
/// value is cleared to mimic a scenario where no system instructions have
/// been configured.
fn make_config(root: &TempDir, limit: usize, instructions: Option<&str>) -> AgentConfig {
fn make_config(root: &TempDir, limit: usize, instructions: Option<&str>) -> Config {
let codex_home = TempDir::new().unwrap();
let mut config = Config::load_from_base_config_with_overrides(
ConfigToml::default(),
@@ -200,8 +199,7 @@ mod tests {
config.project_doc_max_bytes = limit;
config.user_instructions = instructions.map(ToOwned::to_owned);
AgentConfig::from(&config)
config
}
/// AGENTS.md missing should yield `None`.

View File

@@ -1,13 +1,9 @@
use std::cmp::Reverse;
use std::io;
use std::io::{self};
use std::path::Path;
use std::path::PathBuf;
use codex_file_search as file_search;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::RolloutLine;
use serde_json::Value;
use std::num::NonZero;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
@@ -15,29 +11,40 @@ use time::OffsetDateTime;
use time::PrimitiveDateTime;
use time::format_description::FormatItem;
use time::macros::format_description;
use tokio::fs;
use tokio::io::AsyncBufReadExt;
use uuid::Uuid;
use super::SESSIONS_SUBDIR;
use crate::protocol::EventMsg;
use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::RolloutLine;
/// Returned page of conversation summaries.
#[derive(Debug, Default, PartialEq)]
pub struct ConversationsPage {
/// Conversation summaries ordered newest first.
pub items: Vec<ConversationItem>,
/// Opaque pagination token to resume after the last item, or `None` if end.
pub next_cursor: Option<Cursor>,
/// Total number of files touched while scanning this request.
pub num_scanned_files: usize,
/// True if a hard scan cap was hit; consider resuming with `next_cursor`.
pub reached_scan_cap: bool,
}
/// Summary information for a conversation rollout file.
#[derive(Debug, PartialEq)]
pub struct ConversationItem {
/// Absolute path to the rollout file.
pub path: PathBuf,
pub head: Vec<Value>,
/// First up to `HEAD_RECORD_LIMIT` (currently 10) JSONL records parsed as
/// JSON (includes the meta line).
pub head: Vec<serde_json::Value>,
}
/// Hard cap to bound worst-case work per request.
const MAX_SCAN_FILES: usize = 100;
const HEAD_RECORD_LIMIT: usize = 10;
/// Pagination cursor identifying a file by timestamp and UUID.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Cursor {
ts: OffsetDateTime,
@@ -75,7 +82,10 @@ impl<'de> serde::Deserialize<'de> for Cursor {
}
}
pub async fn get_conversations(
/// Retrieve recorded conversation file paths with token pagination. The returned `next_cursor`
/// can be supplied on the next call to resume after the last returned item, resilient to
/// concurrent new sessions being appended. Ordering is stable by timestamp desc, then UUID desc.
pub(crate) async fn get_conversations(
codex_home: &Path,
page_size: usize,
cursor: Option<&Cursor>,
@@ -84,57 +94,31 @@ pub async fn get_conversations(
root.push(SESSIONS_SUBDIR);
if !root.exists() {
return Ok(ConversationsPage::default());
return Ok(ConversationsPage {
items: Vec::new(),
next_cursor: None,
num_scanned_files: 0,
reached_scan_cap: false,
});
}
let anchor = cursor.cloned();
traverse_directories_for_paths(root, page_size, anchor).await
let result = traverse_directories_for_paths(root.clone(), page_size, anchor).await?;
Ok(result)
}
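
The cursor contract above makes resumable listing a simple loop. A minimal sketch (the `collect_all_paths` helper is hypothetical, not part of this change) that drains every page by feeding `next_cursor` back into the next call:

```rust
use std::path::{Path, PathBuf};

// Hypothetical driver: page through all recorded conversations, newest first.
async fn collect_all_paths(codex_home: &Path) -> std::io::Result<Vec<PathBuf>> {
    let mut paths = Vec::new();
    let mut cursor: Option<Cursor> = None;
    loop {
        let page = get_conversations(codex_home, 25, cursor.as_ref()).await?;
        paths.extend(page.items.into_iter().map(|item| item.path));
        match page.next_cursor {
            Some(next) => cursor = Some(next), // resume strictly after the last item
            None => break,                     // end of listing
        }
    }
    Ok(paths)
}
```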
pub async fn get_conversation(path: &Path) -> io::Result<String> {
fs::read_to_string(path).await
}
pub async fn find_conversation_path_by_id_str(
codex_home: &Path,
id_str: &str,
) -> io::Result<Option<PathBuf>> {
if Uuid::parse_str(id_str).is_err() {
return Ok(None);
}
let mut root = codex_home.to_path_buf();
root.push(SESSIONS_SUBDIR);
if !root.exists() {
return Ok(None);
}
let limit = NonZero::new(1).ok_or_else(|| io::Error::other("search limit must be non-zero"))?;
let threads =
NonZero::new(2).ok_or_else(|| io::Error::other("thread pool size must be non-zero"))?;
let cancel = Arc::new(AtomicBool::new(false));
let exclude: Vec<String> = Vec::new();
let compute_indices = false;
let results = file_search::run(
id_str,
limit,
&root,
exclude,
threads,
cancel,
compute_indices,
)
.map_err(|e| io::Error::other(format!("file search failed: {e}")))?;
Ok(results
.matches
.into_iter()
.next()
.map(|m| root.join(m.path)))
/// Load the full contents of a single conversation session file at `path`.
/// Returns the entire file contents as a String.
#[allow(dead_code)]
pub(crate) async fn get_conversation(path: &Path) -> io::Result<String> {
tokio::fs::read_to_string(path).await
}
/// Load conversation file paths from disk using directory traversal.
///
/// Directory layout: `~/.codex/sessions/YYYY/MM/DD/rollout-YYYY-MM-DDThh-mm-ss-<uuid>.jsonl`
/// Files are returned newest (latest) first.
async fn traverse_directories_for_paths(
root: PathBuf,
page_size: usize,
@@ -173,7 +157,8 @@ async fn traverse_directories_for_paths(
.map(|(ts, id)| (ts, id, name_str.to_string(), path.to_path_buf()))
})
.await?;
day_files.sort_by_key(|(ts, sid, _, _)| (Reverse(*ts), Reverse(*sid)));
// Stable ordering within the same second: (timestamp desc, uuid desc)
day_files.sort_by_key(|(ts, sid, _name_str, _path)| (Reverse(*ts), Reverse(*sid)));
for (ts, sid, _name_str, path) in day_files.into_iter() {
scanned_files += 1;
if scanned_files >= MAX_SCAN_FILES && items.len() >= page_size {
@@ -189,10 +174,13 @@ async fn traverse_directories_for_paths(
if items.len() == page_size {
break 'outer;
}
// Read the head and simultaneously detect message events within the first
// N JSONL records to avoid a second file read.
let (head, saw_session_meta, saw_user_event) =
read_head_and_flags(&path, HEAD_RECORD_LIMIT)
.await
.unwrap_or((Vec::new(), false, false));
// Apply filters: must have session meta and at least one user message event
if saw_session_meta && saw_user_event {
items.push(ConversationItem { path, head });
}
@@ -210,6 +198,23 @@ async fn traverse_directories_for_paths(
})
}
/// Pagination cursor token format: "<file_ts>|<uuid>" where `file_ts` matches the
/// filename timestamp portion (YYYY-MM-DDThh-mm-ss) used in rollout filenames.
/// The cursor orders files by timestamp desc, then UUID desc.
fn parse_cursor(token: &str) -> Option<Cursor> {
let (file_ts, uuid_str) = token.split_once('|')?;
let Ok(uuid) = Uuid::parse_str(uuid_str) else {
return None;
};
let format: &[FormatItem] =
format_description!("[year]-[month]-[day]T[hour]-[minute]-[second]");
let ts = PrimitiveDateTime::parse(file_ts, format).ok()?.assume_utc();
Some(Cursor::new(ts, uuid))
}
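
Concretely, a token such as `2025-05-07T17-24-21|5973b6c0-94b8-487b-a530-2aeb6098ae0e` parses, while a missing `|` or malformed UUID yields `None` (illustrative check, reusing the UUID from the recorder docs below):

```rust
// Well-formed: filename timestamp, '|', then a UUID.
assert!(parse_cursor("2025-05-07T17-24-21|5973b6c0-94b8-487b-a530-2aeb6098ae0e").is_some());
// A malformed UUID (or a missing '|') is rejected rather than erroring.
assert!(parse_cursor("2025-05-07T17-24-21|not-a-uuid").is_none());
```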
fn build_next_cursor(items: &[ConversationItem]) -> Option<Cursor> {
let last = items.last()?;
let file_name = last.path.file_name()?.to_string_lossy();
@@ -217,12 +222,14 @@ fn build_next_cursor(items: &[ConversationItem]) -> Option<Cursor> {
Some(Cursor::new(ts, id))
}
/// Collects immediate subdirectories of `parent`, parses their (string) names with `parse`,
/// and returns them sorted descending by the parsed key.
async fn collect_dirs_desc<T, F>(parent: &Path, parse: F) -> io::Result<Vec<(T, PathBuf)>>
where
T: Ord + Copy,
F: Fn(&str) -> Option<T>,
{
let mut dir = fs::read_dir(parent).await?;
let mut dir = tokio::fs::read_dir(parent).await?;
let mut vec: Vec<(T, PathBuf)> = Vec::new();
while let Some(entry) = dir.next_entry().await? {
if entry
@@ -240,11 +247,12 @@ where
Ok(vec)
}
/// Collects files in a directory and parses them with `parse`.
async fn collect_files<T, F>(parent: &Path, parse: F) -> io::Result<Vec<T>>
where
F: Fn(&str, &Path) -> Option<T>,
{
let mut dir = fs::read_dir(parent).await?;
let mut dir = tokio::fs::read_dir(parent).await?;
let mut collected: Vec<T> = Vec::new();
while let Some(entry) = dir.next_entry().await? {
if entry
@@ -262,11 +270,15 @@ where
}
fn parse_timestamp_uuid_from_filename(name: &str) -> Option<(OffsetDateTime, Uuid)> {
// Expected: rollout-YYYY-MM-DDThh-mm-ss-<uuid>.jsonl
let core = name.strip_prefix("rollout-")?.strip_suffix(".jsonl")?;
// Scan from the right for a '-' such that the suffix parses as a UUID.
let (sep_idx, uuid) = core
.match_indices('-')
.rev()
.find_map(|(i, _)| Uuid::parse_str(&core[i + 1..]).ok().map(|u| (i, u)))?;
let ts_str = &core[..sep_idx];
let format: &[FormatItem] =
format_description!("[year]-[month]-[day]T[hour]-[minute]-[second]");
@@ -274,23 +286,16 @@ fn parse_timestamp_uuid_from_filename(name: &str) -> Option<(OffsetDateTime, Uui
Some((ts, uuid))
}
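
Because the timestamp portion itself contains `-`, the right-to-left scan is what makes the split unambiguous; a quick illustrative check against the documented filename shape:

```rust
// Valid rollout filename: prefix, timestamp, then a UUID suffix.
let name = "rollout-2025-05-07T17-24-21-5973b6c0-94b8-487b-a530-2aeb6098ae0e.jsonl";
assert!(parse_timestamp_uuid_from_filename(name).is_some());
// No parseable UUID suffix -> None.
assert!(parse_timestamp_uuid_from_filename("rollout-2025-05-07T17-24-21.jsonl").is_none());
```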
fn parse_cursor(token: &str) -> Option<Cursor> {
let (file_ts, uuid_str) = token.split_once('|')?;
let uuid = Uuid::parse_str(uuid_str).ok()?;
let format: &[FormatItem] =
format_description!("[year]-[month]-[day]T[hour]-[minute]-[second]");
let ts = PrimitiveDateTime::parse(file_ts, format).ok()?.assume_utc();
Some(Cursor::new(ts, uuid))
}
async fn read_head_and_flags(
path: &Path,
max_records: usize,
) -> io::Result<(Vec<Value>, bool, bool)> {
) -> io::Result<(Vec<serde_json::Value>, bool, bool)> {
use tokio::io::AsyncBufReadExt;
let file = tokio::fs::File::open(path).await?;
let reader = tokio::io::BufReader::new(file);
let mut lines = reader.lines();
let mut head: Vec<Value> = Vec::new();
let mut head: Vec<serde_json::Value> = Vec::new();
let mut saw_session_meta = false;
let mut saw_user_event = false;
@@ -317,7 +322,12 @@ async fn read_head_and_flags(
head.push(val);
}
}
RolloutItem::TurnContext(_) | RolloutItem::Compacted(_) => {}
RolloutItem::TurnContext(_) => {
// Not included in `head`; skip.
}
RolloutItem::Compacted(_) => {
// Not included in `head`; skip.
}
RolloutItem::EventMsg(ev) => {
if matches!(ev, EventMsg::UserMessage(_)) {
saw_user_event = true;
@@ -328,3 +338,48 @@ async fn read_head_and_flags(
Ok((head, saw_session_meta, saw_user_event))
}
/// Locate a recorded conversation rollout file by its UUID string by searching rollout
/// filenames under the sessions directory. Returns `Ok(Some(path))` if found, `Ok(None)` if not
/// present or the id is invalid.
pub async fn find_conversation_path_by_id_str(
codex_home: &Path,
id_str: &str,
) -> io::Result<Option<PathBuf>> {
// Validate UUID format early.
if Uuid::parse_str(id_str).is_err() {
return Ok(None);
}
let mut root = codex_home.to_path_buf();
root.push(SESSIONS_SUBDIR);
if !root.exists() {
return Ok(None);
}
// Safe: 1 is non-zero.
#[allow(clippy::unwrap_used)]
let limit = NonZero::new(1).unwrap();
// Safe: 2 is non-zero.
#[allow(clippy::unwrap_used)]
let threads = NonZero::new(2).unwrap();
let cancel = Arc::new(AtomicBool::new(false));
let exclude: Vec<String> = Vec::new();
let compute_indices = false;
let results = file_search::run(
id_str,
limit,
&root,
exclude,
threads,
cancel,
compute_indices,
)
.map_err(|e| io::Error::other(format!("file search failed: {e}")))?;
Ok(results
.matches
.into_iter()
.next()
.map(|m| root.join(m.path)))
}
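
A hedged usage sketch (the wrapper function and UUID are illustrative):

```rust
use std::path::Path;

// Hypothetical lookup; returns Ok(None) for unknown or malformed ids.
async fn locate(codex_home: &Path) -> std::io::Result<()> {
    let found =
        find_conversation_path_by_id_str(codex_home, "5973b6c0-94b8-487b-a530-2aeb6098ae0e")
            .await?;
    if let Some(path) = found {
        println!("rollout file: {}", path.display());
    }
    Ok(())
}
```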

View File

@@ -1,29 +1,16 @@
use std::path::Path;
//! Rollout module: persistence and discovery of session rollout files.
use async_trait::async_trait;
use codex_agent::rollout::GitInfoCollector as SharedGitInfoCollector;
use codex_protocol::protocol::GitInfo;
pub const SESSIONS_SUBDIR: &str = "sessions";
pub const ARCHIVED_SESSIONS_SUBDIR: &str = "archived_sessions";
pub use codex_agent::rollout::ARCHIVED_SESSIONS_SUBDIR;
pub use codex_agent::rollout::RolloutConfig;
pub use codex_agent::rollout::RolloutRecorder;
pub use codex_agent::rollout::RolloutRecorderParams;
pub use codex_agent::rollout::SESSIONS_SUBDIR;
pub mod list;
pub(crate) mod policy;
pub mod recorder;
pub mod list {
pub use codex_agent::rollout::list::*;
}
pub use codex_protocol::protocol::SessionMeta;
pub use list::find_conversation_path_by_id_str;
pub use recorder::RolloutRecorder;
pub use recorder::RolloutRecorderParams;
#[cfg(test)]
pub mod tests;
use crate::git_info::collect_git_info;
pub struct CoreGitInfoCollector;
#[async_trait]
impl SharedGitInfoCollector for CoreGitInfoCollector {
async fn collect(&self, cwd: &Path) -> Option<GitInfo> {
collect_git_info(cwd).await
}
}

View File

@@ -1,13 +1,14 @@
use crate::protocol::EventMsg;
use crate::protocol::RolloutItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::RolloutItem;
/// Whether a rollout `item` should be persisted in rollout files.
#[inline]
pub fn is_persisted_response_item(item: &RolloutItem) -> bool {
pub(crate) fn is_persisted_response_item(item: &RolloutItem) -> bool {
match item {
RolloutItem::ResponseItem(item) => should_persist_response_item(item),
RolloutItem::EventMsg(ev) => should_persist_event_msg(ev),
// Persist Codex execution markers so we can analyze flows (e.g., compaction, API turns).
RolloutItem::Compacted(_) | RolloutItem::TurnContext(_) | RolloutItem::SessionMeta(_) => {
true
}
@@ -16,7 +17,7 @@ pub fn is_persisted_response_item(item: &RolloutItem) -> bool {
/// Whether a `ResponseItem` should be persisted in rollout files.
#[inline]
pub fn should_persist_response_item(item: &ResponseItem) -> bool {
pub(crate) fn should_persist_response_item(item: &ResponseItem) -> bool {
match item {
ResponseItem::Message { .. }
| ResponseItem::Reasoning { .. }
@@ -32,7 +33,7 @@ pub fn should_persist_response_item(item: &ResponseItem) -> bool {
/// Whether an `EventMsg` should be persisted in rollout files.
#[inline]
pub fn should_persist_event_msg(ev: &EventMsg) -> bool {
pub(crate) fn should_persist_event_msg(ev: &EventMsg) -> bool {
match ev {
EventMsg::UserMessage(_)
| EventMsg::AgentMessage(_)
@@ -69,6 +70,7 @@ pub fn should_persist_event_msg(ev: &EventMsg) -> bool {
| EventMsg::ListCustomPromptsResponse(_)
| EventMsg::PlanUpdate(_)
| EventMsg::ShutdownComplete
| EventMsg::ConversationPath(_) => false,
| EventMsg::ConversationPath(_)
| EventMsg::WorktreeRemoved(_) => false,
}
}
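
The three policy layers compose through `is_persisted_response_item`; an illustrative check that a UI-only event is filtered out before reaching disk:

```rust
// ShutdownComplete (like the new WorktreeRemoved) is UI-only: never persisted.
let item = RolloutItem::EventMsg(EventMsg::ShutdownComplete);
assert!(!is_persisted_response_item(&item));
```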

View File

@@ -1,26 +1,19 @@
use std::fs;
//! Persist Codex session rollouts (.jsonl) so sessions can be replayed or inspected later.
use std::fs::File;
use std::fs::{self};
use std::io::Error as IoError;
use std::path::Path;
use std::path::PathBuf;
use std::sync::Arc;
use async_trait::async_trait;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::protocol::GitInfo;
use codex_protocol::protocol::InitialHistory;
use codex_protocol::protocol::ResumedHistory;
use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::RolloutLine;
use codex_protocol::protocol::SessionMeta;
use codex_protocol::protocol::SessionMetaLine;
use serde_json::Value;
use time::OffsetDateTime;
use time::format_description::FormatItem;
use time::macros::format_description;
use tokio::io::AsyncWriteExt;
use tokio::sync::mpsc;
use tokio::sync::mpsc::Sender;
use tokio::sync::mpsc::{self};
use tokio::sync::oneshot;
use tracing::info;
use tracing::warn;
@@ -30,31 +23,35 @@ use super::list::ConversationsPage;
use super::list::Cursor;
use super::list::get_conversations;
use super::policy::is_persisted_response_item;
use crate::config::Config;
use crate::default_client::ORIGINATOR;
use crate::git_info::collect_git_info;
use codex_protocol::protocol::InitialHistory;
use codex_protocol::protocol::ResumedHistory;
use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::RolloutLine;
use codex_protocol::protocol::SessionMeta;
use codex_protocol::protocol::SessionMetaLine;
#[async_trait]
pub trait GitInfoCollector: Send + Sync {
async fn collect(&self, cwd: &Path) -> Option<GitInfo>;
}
#[derive(Clone)]
pub struct RolloutConfig {
pub codex_home: PathBuf,
pub originator: String,
pub cli_version: String,
pub git_info_collector: Option<Arc<dyn GitInfoCollector>>,
}
/// Records all [`ResponseItem`]s for a session and flushes them to disk after
/// every update.
///
/// Rollouts are recorded as JSONL and can be inspected with tools such as:
///
/// ```ignore
/// $ jq -C . ~/.codex/sessions/rollout-2025-05-07T17-24-21-5973b6c0-94b8-487b-a530-2aeb6098ae0e.jsonl
/// $ fx ~/.codex/sessions/rollout-2025-05-07T17-24-21-5973b6c0-94b8-487b-a530-2aeb6098ae0e.jsonl
/// ```
#[derive(Clone)]
pub struct RolloutRecorder {
tx: Sender<RolloutCmd>,
rollout_path: PathBuf,
pub(crate) rollout_path: PathBuf,
}
#[derive(Clone)]
pub enum RolloutRecorderParams {
Create {
conversation_id: ConversationId,
cwd: PathBuf,
instructions: Option<String>,
},
Resume {
@@ -64,19 +61,19 @@ pub enum RolloutRecorderParams {
enum RolloutCmd {
AddItems(Vec<RolloutItem>),
Flush { ack: oneshot::Sender<()> },
Shutdown { ack: oneshot::Sender<()> },
/// Ensure all prior writes are processed; respond when flushed.
Flush {
ack: oneshot::Sender<()>,
},
Shutdown {
ack: oneshot::Sender<()>,
},
}
impl RolloutRecorderParams {
pub fn new(
conversation_id: ConversationId,
cwd: PathBuf,
instructions: Option<String>,
) -> Self {
pub fn new(conversation_id: ConversationId, instructions: Option<String>) -> Self {
Self::Create {
conversation_id,
cwd,
instructions,
}
}
@@ -87,6 +84,7 @@ impl RolloutRecorderParams {
}
impl RolloutRecorder {
/// List conversations (rollout files) under the provided Codex home directory.
pub async fn list_conversations(
codex_home: &Path,
page_size: usize,
@@ -95,14 +93,13 @@ impl RolloutRecorder {
get_conversations(codex_home, page_size, cursor).await
}
pub async fn new(
config: &RolloutConfig,
params: RolloutRecorderParams,
) -> std::io::Result<Self> {
let (file, rollout_path, meta, cwd) = match params {
/// Attempt to create a new [`RolloutRecorder`]. If the sessions directory
/// cannot be created or the rollout file cannot be opened, we return the
/// error so the caller can decide whether to disable persistence.
pub async fn new(config: &Config, params: RolloutRecorderParams) -> std::io::Result<Self> {
let (file, rollout_path, meta) = match params {
RolloutRecorderParams::Create {
conversation_id,
cwd,
instructions,
} => {
let LogFileInfo {
@@ -110,7 +107,7 @@ impl RolloutRecorder {
path,
conversation_id: session_id,
timestamp,
} = create_log_file(&config.codex_home, conversation_id)?;
} = create_log_file(config, conversation_id)?;
let timestamp_format: &[FormatItem] = format_description!(
"[year]-[month]-[day]T[hour]:[minute]:[second].[subsecond digits:3]Z"
@@ -120,16 +117,18 @@ impl RolloutRecorder {
.format(timestamp_format)
.map_err(|e| IoError::other(format!("failed to format timestamp: {e}")))?;
let meta = SessionMeta {
id: session_id,
timestamp,
cwd: cwd.clone(),
originator: config.originator.clone(),
cli_version: config.cli_version.clone(),
instructions,
};
(tokio::fs::File::from_std(file), path, Some(meta), Some(cwd))
(
tokio::fs::File::from_std(file),
path,
Some(SessionMeta {
id: session_id,
timestamp,
cwd: config.cwd.clone(),
originator: ORIGINATOR.value.clone(),
cli_version: env!("CARGO_PKG_VERSION").to_string(),
instructions,
}),
)
}
RolloutRecorderParams::Resume { path } => (
tokio::fs::OpenOptions::new()
@@ -138,21 +137,31 @@ impl RolloutRecorder {
.await?,
path,
None,
None,
),
};
let (tx, rx) = mpsc::channel::<RolloutCmd>(256);
let collector = config.git_info_collector.clone();
// Clone the cwd for the spawned task to collect git info asynchronously
let cwd = config.cwd.clone();
tokio::task::spawn(rollout_writer(file, rx, meta, cwd, collector));
// A reasonably sized bounded channel. If the buffer fills up, the send
// future will yield, which is fine; we only need to ensure we do not
// perform *blocking* I/O on the caller's thread.
let (tx, rx) = mpsc::channel::<RolloutCmd>(256);
// Spawn a Tokio task that owns the file handle and performs async
// writes. Using `tokio::fs::File` keeps everything on the async I/O
// driver instead of blocking the runtime.
tokio::task::spawn(rollout_writer(file, rx, meta, cwd));
Ok(Self { tx, rollout_path })
}
pub async fn record_items(&self, items: &[RolloutItem]) -> std::io::Result<()> {
pub(crate) async fn record_items(&self, items: &[RolloutItem]) -> std::io::Result<()> {
let mut filtered = Vec::new();
for item in items {
// Note that function calls may look a bit strange if they are
// "fully qualified MCP tool calls," so we could consider
// reformatting them in that case.
if is_persisted_response_item(item) {
filtered.push(item.clone());
}
@@ -166,6 +175,7 @@ impl RolloutRecorder {
.map_err(|e| IoError::other(format!("failed to queue rollout items: {e}")))
}
/// Flush all queued writes and wait until they are committed by the writer task.
pub async fn flush(&self) -> std::io::Result<()> {
let (tx, rx) = oneshot::channel();
self.tx
@@ -176,26 +186,7 @@ impl RolloutRecorder {
.map_err(|e| IoError::other(format!("failed waiting for rollout flush: {e}")))
}
pub async fn shutdown(&self) -> std::io::Result<()> {
let (tx_done, rx_done) = oneshot::channel();
match self.tx.send(RolloutCmd::Shutdown { ack: tx_done }).await {
Ok(_) => rx_done
.await
.map_err(|e| IoError::other(format!("failed waiting for rollout shutdown: {e}"))),
Err(e) => {
warn!("failed to send rollout shutdown command: {e}");
Err(IoError::other(format!(
"failed to send rollout shutdown command: {e}"
)))
}
}
}
pub fn get_rollout_path(&self) -> PathBuf {
self.rollout_path.clone()
}
pub async fn get_rollout_history(path: &Path) -> std::io::Result<InitialHistory> {
pub(crate) async fn get_rollout_history(path: &Path) -> std::io::Result<InitialHistory> {
info!("Resuming rollout from {path:?}");
let text = tokio::fs::read_to_string(path).await?;
if text.trim().is_empty() {
@@ -216,17 +207,33 @@ impl RolloutRecorder {
}
};
// Parse the rollout line structure
match serde_json::from_value::<RolloutLine>(v.clone()) {
Ok(rollout_line) => match rollout_line.item {
RolloutItem::SessionMeta(session_meta_line) => {
// Use the FIRST SessionMeta encountered in the file as the canonical
// conversation id and main session information. Keep all items intact.
if conversation_id.is_none() {
conversation_id = Some(session_meta_line.meta.id);
}
items.push(RolloutItem::SessionMeta(session_meta_line));
}
other => items.push(other),
RolloutItem::ResponseItem(item) => {
items.push(RolloutItem::ResponseItem(item));
}
RolloutItem::Compacted(item) => {
items.push(RolloutItem::Compacted(item));
}
RolloutItem::TurnContext(item) => {
items.push(RolloutItem::TurnContext(item));
}
RolloutItem::EventMsg(_ev) => {
items.push(RolloutItem::EventMsg(_ev));
}
},
Err(e) => warn!("failed to parse rollout line: {v:?}, error: {e}"),
Err(e) => {
warn!("failed to parse rollout line: {v:?}, error: {e}");
}
}
}
@@ -249,28 +256,57 @@ impl RolloutRecorder {
rollout_path: path.to_path_buf(),
}))
}
pub(crate) fn get_rollout_path(&self) -> PathBuf {
self.rollout_path.clone()
}
pub async fn shutdown(&self) -> std::io::Result<()> {
let (tx_done, rx_done) = oneshot::channel();
match self.tx.send(RolloutCmd::Shutdown { ack: tx_done }).await {
Ok(_) => rx_done
.await
.map_err(|e| IoError::other(format!("failed waiting for rollout shutdown: {e}"))),
Err(e) => {
warn!("failed to send rollout shutdown command: {e}");
Err(IoError::other(format!(
"failed to send rollout shutdown command: {e}"
)))
}
}
}
}
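
The `Flush`/`Shutdown` acknowledgement above is a standard command-channel pattern; a self-contained sketch (all names hypothetical) of how a caller pairs a command with a `oneshot` receipt from the writer task:

```rust
use tokio::sync::{mpsc, oneshot};

enum Cmd {
    Flush { ack: oneshot::Sender<()> },
}

// Writer task: owns the output; acks once prior writes are processed.
async fn writer(mut rx: mpsc::Receiver<Cmd>) {
    while let Some(Cmd::Flush { ack }) = rx.recv().await {
        // ... flush buffered output here ...
        let _ = ack.send(());
    }
}

// Caller side: send the command, then await the receipt.
async fn flush(tx: &mpsc::Sender<Cmd>) -> std::io::Result<()> {
    let (ack, done) = oneshot::channel();
    tx.send(Cmd::Flush { ack })
        .await
        .map_err(|e| std::io::Error::other(format!("writer gone: {e}")))?;
    done.await
        .map_err(|e| std::io::Error::other(format!("writer dropped ack: {e}")))
}
```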
struct LogFileInfo {
/// Opened file handle to the rollout file.
file: File,
/// Full path to the rollout file.
path: PathBuf,
/// Session ID (also embedded in filename).
conversation_id: ConversationId,
/// Timestamp for the start of the session.
timestamp: OffsetDateTime,
}
fn create_log_file(
codex_home: &Path,
config: &Config,
conversation_id: ConversationId,
) -> std::io::Result<LogFileInfo> {
// Resolve ~/.codex/sessions/YYYY/MM/DD and create it if missing.
let timestamp = OffsetDateTime::now_local()
.map_err(|e| IoError::other(format!("failed to get local time: {e}")))?;
let mut dir = codex_home.to_path_buf();
let mut dir = config.codex_home.clone();
dir.push(SESSIONS_SUBDIR);
dir.push(timestamp.year().to_string());
dir.push(format!("{:02}", u8::from(timestamp.month())));
dir.push(format!("{:02}", timestamp.day()));
fs::create_dir_all(&dir)?;
// Custom format for YYYY-MM-DDThh-mm-ss. Use `-` instead of `:` for
// compatibility with filesystems that do not allow colons in filenames.
let format: &[FormatItem] =
format_description!("[year]-[month]-[day]T[hour]-[minute]-[second]");
let date_str = timestamp
@@ -278,6 +314,7 @@ fn create_log_file(
.map_err(|e| IoError::other(format!("failed to format timestamp: {e}")))?;
let filename = format!("rollout-{date_str}-{conversation_id}.jsonl");
let path = dir.join(filename);
let file = std::fs::OpenOptions::new()
.append(true)
@@ -296,27 +333,25 @@ async fn rollout_writer(
file: tokio::fs::File,
mut rx: mpsc::Receiver<RolloutCmd>,
mut meta: Option<SessionMeta>,
cwd: Option<PathBuf>,
git_info_collector: Option<Arc<dyn GitInfoCollector>>,
cwd: std::path::PathBuf,
) -> std::io::Result<()> {
let mut writer = JsonlWriter { file };
// If we have a meta, collect git info asynchronously and write meta first
if let Some(session_meta) = meta.take() {
let git_info =
if let (Some(provider), Some(cwd)) = (git_info_collector.as_ref(), cwd.as_ref()) {
provider.collect(cwd.as_path()).await
} else {
None
};
let git_info = collect_git_info(&cwd).await;
let session_meta_line = SessionMetaLine {
meta: session_meta,
git: git_info,
};
// Write the SessionMeta as the first item in the file, wrapped in a rollout line
writer
.write_rollout_item(RolloutItem::SessionMeta(session_meta_line))
.await?;
}
// Process rollout commands
while let Some(cmd) = rx.recv().await {
match cmd {
RolloutCmd::AddItems(items) => {
@@ -327,6 +362,7 @@ async fn rollout_writer(
}
}
RolloutCmd::Flush { ack } => {
// Ensure underlying file is flushed and then ack.
if let Err(e) = writer.file.flush().await {
let _ = ack.send(());
return Err(e);
@@ -361,14 +397,11 @@ impl JsonlWriter {
};
self.write_line(&line).await
}
async fn write_line(&mut self, item: &impl serde::Serialize) -> std::io::Result<()> {
let mut buf = serde_json::to_vec(item)
.map_err(|e| IoError::other(format!("failed to serialise rollout line: {e}")))?;
buf.push(b'\n');
self.file
.write_all(&buf)
.await
.map_err(|e| IoError::other(format!("failed to write rollout line: {e}")))
let mut json = serde_json::to_string(item)?;
json.push('\n');
self.file.write_all(json.as_bytes()).await?;
self.file.flush().await?;
Ok(())
}
}

View File

@@ -1,14 +1,17 @@
use std::collections::HashSet;
use std::path::Component;
use std::path::Path;
use std::path::PathBuf;
use codex_apply_patch::ApplyPatchAction;
use codex_protocol::protocol::AskForApproval;
use codex_protocol::protocol::SandboxPolicy;
use codex_apply_patch::ApplyPatchFileChange;
use crate::exec::SandboxType;
use crate::command_safety::is_dangerous_command::command_might_be_dangerous;
use crate::command_safety::is_safe_command::is_known_safe_command;
use crate::sandbox::SandboxType;
use crate::protocol::AskForApproval;
use crate::protocol::SandboxPolicy;
#[derive(Debug, PartialEq)]
pub enum SafetyCheck {
@@ -86,8 +89,15 @@ pub fn assess_command_safety(
) -> SafetyCheck {
// Some commands look dangerous. Even if they are run inside a sandbox,
// unless the user has explicitly approved them, we should ask,
// regardless of the approval policy and sandbox policy.
// or reject if the approval_policy tells us not to ask.
if command_might_be_dangerous(command) && !approved.contains(command) {
if approval_policy == AskForApproval::Never {
return SafetyCheck::Reject {
reason: "dangerous command detected; rejected by user approval settings"
.to_string(),
};
}
return SafetyCheck::AskUser;
}
@@ -196,196 +206,81 @@ fn is_write_patch_constrained_to_writable_paths(
SandboxPolicy::DangerFullAccess => {
return true;
}
SandboxPolicy::WorkspaceWrite {
writable_roots,
exclude_slash_tmp: _exclude_slash_tmp,
exclude_tmpdir_env_var: _exclude_tmpdir,
network_access: _network_access,
} => writable_roots,
SandboxPolicy::WorkspaceWrite { .. } => sandbox_policy.get_writable_roots_with_cwd(cwd),
};
// If the policy allows writes outside the workspace (DangerFullAccess),
// we've already returned true above. At this point we only have
// `WorkspaceWrite`, which includes the cwd implicitly, so first check if
// the patch fully lives within the cwd. If it does, we're fine.
let workspace_root = cwd.canonicalize().unwrap_or_else(|_| cwd.to_path_buf());
if all_changes_within_root(action, &workspace_root) {
return true;
}
if writable_roots.is_empty() {
return false;
}
// When `/tmp` is excluded, filter it out of writable roots. Some patch commands write
// temporary files there even for workspace-only updates.
let mut writable_roots: Vec<&PathBuf> = writable_roots.iter().collect();
if matches!(
sandbox_policy,
SandboxPolicy::WorkspaceWrite {
exclude_slash_tmp: true,
..
// Normalize a path by removing `.` and resolving `..` without touching the
// filesystem (works even if the file does not exist).
fn normalize(path: &Path) -> Option<PathBuf> {
let mut out = PathBuf::new();
for comp in path.components() {
match comp {
Component::ParentDir => {
out.pop();
}
Component::CurDir => { /* skip */ }
other => out.push(other.as_os_str()),
}
}
) {
writable_roots.retain(|path| !path.as_path().starts_with("/tmp"));
Some(out)
}
let mut all_within_declared_root = true;
for change in action.changes() {
match change.0.strip_prefix(&workspace_root) {
Ok(relative_path) => {
if !is_within_any_root(relative_path, &writable_roots) {
all_within_declared_root = false;
break;
// Determine whether `path` is inside **any** writable root. Both `path`
// and roots are converted to absolute, normalized forms before the
// prefix check.
let is_path_writable = |p: &PathBuf| {
let abs = if p.is_absolute() {
p.clone()
} else {
cwd.join(p)
};
let abs = match normalize(&abs) {
Some(v) => v,
None => return false,
};
writable_roots
.iter()
.any(|writable_root| writable_root.is_path_writable(&abs))
};
for (path, change) in action.changes() {
match change {
ApplyPatchFileChange::Add { .. } | ApplyPatchFileChange::Delete { .. } => {
if !is_path_writable(path) {
return false;
}
}
Err(_) => {
all_within_declared_root = false;
break;
ApplyPatchFileChange::Update { move_path, .. } => {
if !is_path_writable(path) {
return false;
}
if let Some(dest) = move_path
&& !is_path_writable(dest)
{
return false;
}
}
}
}
all_within_declared_root
true
}
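
The lexical `normalize` above is what lets the prefix check work even for paths that do not exist yet; an illustrative check (assuming `normalize` were lifted to module scope):

```rust
use std::path::{Path, PathBuf};

// `.` is dropped and `..` pops the previous component; no filesystem access.
assert_eq!(
    normalize(Path::new("/work/./src/../Cargo.toml")),
    Some(PathBuf::from("/work/Cargo.toml")),
);
```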
fn all_changes_within_root(action: &ApplyPatchAction, root: &Path) -> bool {
action
.changes()
.iter()
.all(|(path, _)| path.starts_with(root))
}
fn is_within_any_root(path: &Path, roots: &[&PathBuf]) -> bool {
roots.iter().any(|root| path.starts_with(root.as_path()))
}
#[cfg(any())]
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
#[test]
fn reject_empty_patch() {
let action = ApplyPatchAction::new_for_test(vec![]);
let sandbox_policy = SandboxPolicy::ReadOnly;
let cwd = Path::new(".");
assert_eq!(
assess_patch_safety(&action, AskForApproval::OnRequest, &sandbox_policy, cwd),
SafetyCheck::Reject {
reason: "empty patch".to_string(),
}
);
}
#[test]
fn auto_allow_patch_in_workspace_write_sandbox() {
let patch_action = ApplyPatchAction::new_for_test(vec![ApplyPatchFileChange::new_update(
PathBuf::from("src/main.rs"),
"diff --git a/src/main.rs b/src/main.rs\n".to_string(),
None,
"".to_string(),
)]);
let sandbox_policy = SandboxPolicy::WorkspaceWrite {
writable_roots: vec![],
network_access: false,
exclude_tmpdir_env_var: false,
exclude_slash_tmp: false,
};
assert_eq!(
assess_patch_safety(
&patch_action,
AskForApproval::OnRequest,
&sandbox_policy,
Path::new("."),
),
SafetyCheck::AutoApprove {
sandbox_type: get_platform_sandbox().unwrap_or(SandboxType::None),
}
);
}
#[test]
fn reject_patch_if_policy_is_never_and_writes_outside_of_workspace() {
let patch_action = ApplyPatchAction::new_for_test(vec![ApplyPatchFileChange::new_update(
PathBuf::from("../outside_file.txt"),
"diff --git a/../outside_file.txt b/../outside_file.txt\n".to_string(),
None,
"".to_string(),
)]);
let sandbox_policy = SandboxPolicy::WorkspaceWrite {
writable_roots: vec![],
network_access: false,
exclude_tmpdir_env_var: false,
exclude_slash_tmp: false,
};
assert_eq!(
assess_patch_safety(
&patch_action,
AskForApproval::Never,
&sandbox_policy,
Path::new("."),
),
SafetyCheck::Reject {
reason: "writing outside of the project; rejected by user approval settings"
.to_string(),
}
);
}
#[test]
fn assess_command_safety_known_safe_command() {
let command = vec!["ls".to_string()];
let approval_policy = AskForApproval::OnRequest;
let sandbox_policy = SandboxPolicy::ReadOnly;
let approved = HashSet::new();
let request_escalated_privileges = false;
let safety_check = assess_command_safety(
&command,
approval_policy,
&sandbox_policy,
&approved,
request_escalated_privileges,
);
assert_eq!(
safety_check,
SafetyCheck::AutoApprove {
sandbox_type: SandboxType::None
}
);
}
#[test]
fn assess_command_safety_dangerous_command_to_reject() {
let command = vec!["rm".to_string(), "-rf".to_string(), "/".to_string()];
let approval_policy = AskForApproval::OnRequest;
let sandbox_policy = SandboxPolicy::ReadOnly;
let approved = HashSet::new();
let request_escalated_privileges = false;
let safety_check = assess_command_safety(
&command,
approval_policy,
&sandbox_policy,
&approved,
request_escalated_privileges,
);
assert_eq!(safety_check, SafetyCheck::AskUser);
}
#[test]
fn patch_within_declared_root() {
let tempdir = tempfile::tempdir().unwrap();
let cwd = tempdir.path().to_path_buf();
fn test_writable_roots_constraint() {
// Use a temporary directory as our workspace to avoid touching
// the real current working directory.
let tmp = TempDir::new().unwrap();
let cwd = tmp.path().to_path_buf();
let parent = cwd.parent().unwrap().to_path_buf();
// Helper to build a single-entry patch that adds a file at `p`.
let make_add_change = |p: PathBuf| ApplyPatchAction::new_add_for_test(&p, "".to_string());
let add_inside = make_add_change(cwd.join("inner.txt"));
@@ -488,7 +383,13 @@ mod tests {
request_escalated_privileges,
);
assert_eq!(safety_check, SafetyCheck::AskUser);
assert_eq!(
safety_check,
SafetyCheck::Reject {
reason: "dangerous command detected; rejected by user approval settings"
.to_string(),
}
);
}
#[test]

View File

@@ -1,33 +0,0 @@
use std::collections::HashMap;
use std::env;
use crate::exec::ExecParams;
use crate::function_tool::FunctionCallError;
use codex_agent::apply_patch::ApplyPatchExec;
use codex_agent::apply_patch::CODEX_APPLY_PATCH_ARG1;
pub(crate) fn build_exec_params_for_apply_patch(
exec: &ApplyPatchExec,
original: &ExecParams,
) -> Result<ExecParams, FunctionCallError> {
let path_to_codex = env::current_exe()
.ok()
.map(|p| p.to_string_lossy().to_string())
.ok_or_else(|| {
FunctionCallError::RespondToModel(
"failed to determine path to codex executable".to_string(),
)
})?;
let patch = exec.action.patch.clone();
Ok(ExecParams {
command: vec![path_to_codex, CODEX_APPLY_PATCH_ARG1.to_string(), patch],
cwd: exec.action.cwd.clone(),
timeout_ms: original.timeout_ms,
// Run apply_patch with a minimal environment for determinism and to
// avoid leaking host environment variables into the patch process.
env: HashMap::new(),
with_escalated_permissions: original.with_escalated_permissions,
justification: original.justification.clone(),
})
}

View File

@@ -1,87 +0,0 @@
use std::path::Path;
use std::path::PathBuf;
use async_trait::async_trait;
use crate::error::Result;
use crate::exec::ExecParams;
use crate::exec::ExecToolCallOutput;
use crate::exec::SandboxType;
use crate::exec::StdoutStream;
use crate::exec::process_exec_tool_call;
use crate::protocol::SandboxPolicy;
#[async_trait]
pub trait SpawnBackend: Send + Sync {
fn sandbox_type(&self) -> SandboxType;
async fn spawn(
&self,
params: ExecParams,
sandbox_policy: &SandboxPolicy,
sandbox_cwd: &Path,
codex_linux_sandbox_exe: &Option<PathBuf>,
stdout_stream: Option<StdoutStream>,
) -> Result<ExecToolCallOutput> {
process_exec_tool_call(
params,
self.sandbox_type(),
sandbox_policy,
sandbox_cwd,
codex_linux_sandbox_exe,
stdout_stream,
)
.await
}
}
#[derive(Clone, Copy, Debug, Default)]
pub struct DirectBackend;
#[async_trait]
impl SpawnBackend for DirectBackend {
fn sandbox_type(&self) -> SandboxType {
SandboxType::None
}
}
#[derive(Clone, Copy, Debug, Default)]
pub struct SeatbeltBackend;
#[async_trait]
impl SpawnBackend for SeatbeltBackend {
fn sandbox_type(&self) -> SandboxType {
SandboxType::MacosSeatbelt
}
}
#[derive(Clone, Copy, Debug, Default)]
pub struct LinuxBackend;
#[async_trait]
impl SpawnBackend for LinuxBackend {
fn sandbox_type(&self) -> SandboxType {
SandboxType::LinuxSeccomp
}
}
#[derive(Default)]
pub struct BackendRegistry {
direct: DirectBackend,
seatbelt: SeatbeltBackend,
linux: LinuxBackend,
}
impl BackendRegistry {
pub fn new() -> Self {
Self::default()
}
pub fn for_type(&self, sandbox: SandboxType) -> &dyn SpawnBackend {
match sandbox {
SandboxType::None => &self.direct,
SandboxType::MacosSeatbelt => &self.seatbelt,
SandboxType::LinuxSeccomp => &self.linux,
}
}
}

View File

@@ -1,51 +0,0 @@
mod apply_patch_adapter;
mod backend;
mod planner;
pub use backend::BackendRegistry;
pub use backend::DirectBackend;
pub use backend::LinuxBackend;
pub use backend::SeatbeltBackend;
pub use backend::SpawnBackend;
pub use planner::ExecPlan;
pub use planner::ExecRequest;
pub use planner::PatchExecRequest;
pub(crate) use planner::PreparedExec;
pub use planner::plan_apply_patch;
pub use planner::plan_exec;
pub(crate) use planner::prepare_exec_invocation;
use crate::error::Result;
use crate::exec::ExecParams;
use crate::exec::ExecToolCallOutput;
use crate::exec::StdoutStream;
use crate::protocol::SandboxPolicy;
pub struct ExecRuntimeContext<'a> {
pub sandbox_policy: &'a SandboxPolicy,
pub sandbox_cwd: &'a std::path::Path,
pub codex_linux_sandbox_exe: &'a Option<std::path::PathBuf>,
pub stdout_stream: Option<StdoutStream>,
}
pub async fn run_with_plan(
params: ExecParams,
plan: &ExecPlan,
registry: &BackendRegistry,
runtime_ctx: &ExecRuntimeContext<'_>,
) -> Result<ExecToolCallOutput> {
let ExecPlan::Approved { sandbox, .. } = plan else {
unreachable!("run_with_plan called without approved plan");
};
registry
.for_type(*sandbox)
.spawn(
params,
runtime_ctx.sandbox_policy,
runtime_ctx.sandbox_cwd,
runtime_ctx.codex_linux_sandbox_exe,
runtime_ctx.stdout_stream.clone(),
)
.await
}

View File

@@ -1,217 +0,0 @@
use std::collections::HashSet;
use std::path::Path;
use codex_agent::apply_patch::ApplyPatchExec;
use codex_agent::safety::SafetyCheck;
use codex_agent::safety::assess_command_safety;
use codex_agent::safety::assess_patch_safety;
use codex_agent::sandbox::SandboxType;
use codex_agent::services::ApprovalCoordinator;
use codex_apply_patch::ApplyPatchAction;
use super::apply_patch_adapter::build_exec_params_for_apply_patch;
use crate::codex::TurnContext;
use crate::exec::ExecParams;
use crate::function_tool::FunctionCallError;
use crate::protocol::AskForApproval;
use crate::protocol::ReviewDecision;
use crate::protocol::SandboxPolicy;
#[derive(Clone, Debug)]
pub struct ExecRequest<'a> {
pub params: &'a ExecParams,
pub approval: AskForApproval,
pub policy: &'a SandboxPolicy,
pub approved_session_commands: &'a HashSet<Vec<String>>,
}
#[derive(Clone, Debug)]
pub enum ExecPlan {
Reject {
reason: String,
},
AskUser {
reason: Option<String>,
},
Approved {
sandbox: SandboxType,
on_failure_escalate: bool,
approved_by_user: bool,
},
}
impl ExecPlan {
pub fn approved(
sandbox: SandboxType,
on_failure_escalate: bool,
approved_by_user: bool,
) -> Self {
ExecPlan::Approved {
sandbox,
on_failure_escalate,
approved_by_user,
}
}
}
pub fn plan_exec(req: &ExecRequest<'_>) -> ExecPlan {
let params = req.params;
let with_escalated_permissions = params.with_escalated_permissions.unwrap_or(false);
let safety = assess_command_safety(
&params.command,
req.approval,
req.policy,
req.approved_session_commands,
with_escalated_permissions,
);
match safety {
SafetyCheck::AutoApprove { sandbox_type } => ExecPlan::Approved {
sandbox: sandbox_type,
on_failure_escalate: should_escalate_on_failure(req.approval, sandbox_type),
approved_by_user: false,
},
SafetyCheck::AskUser => ExecPlan::AskUser {
reason: params.justification.clone(),
},
SafetyCheck::Reject { reason } => ExecPlan::Reject { reason },
}
}
#[derive(Clone, Debug)]
pub struct PatchExecRequest<'a> {
pub action: &'a ApplyPatchAction,
pub approval: AskForApproval,
pub policy: &'a SandboxPolicy,
pub cwd: &'a Path,
pub user_explicitly_approved: bool,
}
pub fn plan_apply_patch(req: &PatchExecRequest<'_>) -> ExecPlan {
if req.user_explicitly_approved {
ExecPlan::Approved {
sandbox: SandboxType::None,
on_failure_escalate: false,
approved_by_user: true,
}
} else {
match assess_patch_safety(req.action, req.approval, req.policy, req.cwd) {
SafetyCheck::AutoApprove { sandbox_type } => ExecPlan::Approved {
sandbox: sandbox_type,
on_failure_escalate: should_escalate_on_failure(req.approval, sandbox_type),
approved_by_user: false,
},
SafetyCheck::AskUser => ExecPlan::AskUser { reason: None },
SafetyCheck::Reject { reason } => ExecPlan::Reject { reason },
}
}
}
#[derive(Debug)]
pub(crate) struct PreparedExec {
pub(crate) params: ExecParams,
pub(crate) plan: ExecPlan,
pub(crate) command_for_display: Vec<String>,
pub(crate) apply_patch_exec: Option<ApplyPatchExec>,
}
pub(crate) async fn prepare_exec_invocation(
approvals: &dyn ApprovalCoordinator,
turn_context: &TurnContext,
sub_id: &str,
call_id: &str,
params: ExecParams,
apply_patch_exec: Option<ApplyPatchExec>,
approved_session_commands: HashSet<Vec<String>>,
) -> Result<PreparedExec, FunctionCallError> {
let mut params = params;
let (plan, command_for_display) = if let Some(exec) = apply_patch_exec.as_ref() {
params = build_exec_params_for_apply_patch(exec, &params)?;
let command_for_display = vec!["apply_patch".to_string(), exec.action.patch.clone()];
let plan_req = PatchExecRequest {
action: &exec.action,
approval: turn_context.approval_policy,
policy: &turn_context.sandbox_policy,
cwd: &turn_context.cwd,
user_explicitly_approved: exec.user_explicitly_approved_this_action,
};
let plan = match plan_apply_patch(&plan_req) {
plan @ ExecPlan::Approved { .. } => plan,
ExecPlan::AskUser { .. } => {
return Err(FunctionCallError::RespondToModel(
"patch requires approval but none was recorded".to_string(),
));
}
ExecPlan::Reject { reason } => {
return Err(FunctionCallError::RespondToModel(format!(
"patch rejected: {reason}"
)));
}
};
(plan, command_for_display)
} else {
let command_for_display = params.command.clone();
let initial_plan = plan_exec(&ExecRequest {
params: &params,
approval: turn_context.approval_policy,
policy: &turn_context.sandbox_policy,
approved_session_commands: &approved_session_commands,
});
let plan = match initial_plan {
plan @ ExecPlan::Approved { .. } => plan,
ExecPlan::AskUser { reason } => {
let decision = approvals
.request_command_approval(
sub_id.to_string(),
call_id.to_string(),
params.command.clone(),
params.cwd.clone(),
reason,
)
.await;
match decision {
ReviewDecision::Approved => ExecPlan::approved(SandboxType::None, false, true),
ReviewDecision::ApprovedForSession => {
approvals.add_approved_command(params.command.clone()).await;
ExecPlan::approved(SandboxType::None, false, true)
}
ReviewDecision::Denied | ReviewDecision::Abort => {
return Err(FunctionCallError::RespondToModel(
"exec command rejected by user".to_string(),
));
}
}
}
ExecPlan::Reject { reason } => {
return Err(FunctionCallError::RespondToModel(format!(
"exec command rejected: {reason:?}"
)));
}
};
(plan, command_for_display)
};
Ok(PreparedExec {
params,
plan,
command_for_display,
apply_patch_exec,
})
}
fn should_escalate_on_failure(approval: AskForApproval, sandbox: SandboxType) -> bool {
matches!(
(approval, sandbox),
(
AskForApproval::UnlessTrusted | AskForApproval::OnFailure,
SandboxType::MacosSeatbelt | SandboxType::LinuxSeccomp
)
)
}

View File

@@ -1 +1,571 @@
pub use codex_agent::shell::*;
use serde::Deserialize;
use serde::Serialize;
use shlex;
use std::path::PathBuf;
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct ZshShell {
pub(crate) shell_path: String,
pub(crate) zshrc_path: String,
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct BashShell {
pub(crate) shell_path: String,
pub(crate) bashrc_path: String,
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct PowerShellConfig {
pub(crate) exe: String, // Executable name or path, e.g. "pwsh" or "powershell.exe".
pub(crate) bash_exe_fallback: Option<PathBuf>, // In case the model generates a bash command.
}
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub enum Shell {
Zsh(ZshShell),
Bash(BashShell),
PowerShell(PowerShellConfig),
Unknown,
}
impl Shell {
pub fn format_default_shell_invocation(&self, command: Vec<String>) -> Option<Vec<String>> {
match self {
Shell::Zsh(zsh) => format_shell_invocation_with_rc(
command.as_slice(),
&zsh.shell_path,
&zsh.zshrc_path,
),
Shell::Bash(bash) => format_shell_invocation_with_rc(
command.as_slice(),
&bash.shell_path,
&bash.bashrc_path,
),
Shell::PowerShell(ps) => {
// If the model generated a bash command, prefer a detected bash fallback
if let Some(script) = strip_bash_lc(command.as_slice()) {
return match &ps.bash_exe_fallback {
Some(bash) => Some(vec![
bash.to_string_lossy().to_string(),
"-lc".to_string(),
script,
]),
// No bash fallback → run the script under PowerShell.
// It will likely fail (except for some simple commands), but the error
// should give the model a clue on retry that it is running under PowerShell.
None => Some(vec![
ps.exe.clone(),
"-NoProfile".to_string(),
"-Command".to_string(),
script,
]),
};
}
// Not a bash command. If the model did not generate a PowerShell
// command, turn it into one.
let first = command.first().map(String::as_str);
if first != Some(ps.exe.as_str()) {
// TODO (CODEX_2900): Handle escaping newlines.
if command.iter().any(|a| a.contains('\n') || a.contains('\r')) {
return Some(command);
}
let joined = shlex::try_join(command.iter().map(String::as_str)).ok();
return joined.map(|arg| {
vec![
ps.exe.clone(),
"-NoProfile".to_string(),
"-Command".to_string(),
arg,
]
});
}
// Model generated a PowerShell command. Run it.
Some(command)
}
Shell::Unknown => None,
}
}
pub fn name(&self) -> Option<String> {
match self {
Shell::Zsh(zsh) => std::path::Path::new(&zsh.shell_path)
.file_name()
.map(|s| s.to_string_lossy().to_string()),
Shell::Bash(bash) => std::path::Path::new(&bash.shell_path)
.file_name()
.map(|s| s.to_string_lossy().to_string()),
Shell::PowerShell(ps) => Some(ps.exe.clone()),
Shell::Unknown => None,
}
}
}
fn format_shell_invocation_with_rc(
command: &[String],
shell_path: &str,
rc_path: &str,
) -> Option<Vec<String>> {
let joined = strip_bash_lc(command)
.or_else(|| shlex::try_join(command.iter().map(String::as_str)).ok())?;
let rc_command = if std::path::Path::new(rc_path).exists() {
format!("source {rc_path} && ({joined})")
} else {
joined
};
Some(vec![shell_path.to_string(), "-lc".to_string(), rc_command])
}
fn strip_bash_lc(command: &[String]) -> Option<String> {
match command {
// exactly three items
[first, second, third]
// first two must be "bash", "-lc"
if first == "bash" && second == "-lc" =>
{
Some(third.clone())
}
_ => None,
}
}
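
`strip_bash_lc` only unwraps the exact three-element `["bash", "-lc", script]` shape; anything else falls through to the joining logic (illustrative):

```rust
// Exact shape: the script is extracted as-is.
let cmd = vec!["bash".to_string(), "-lc".to_string(), "echo hi".to_string()];
assert_eq!(strip_bash_lc(&cmd), Some("echo hi".to_string()));

// "-c" instead of "-lc" (or extra args) does not match.
let other = vec!["bash".to_string(), "-c".to_string(), "echo hi".to_string()];
assert_eq!(strip_bash_lc(&other), None);
```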
#[cfg(unix)]
fn detect_default_user_shell() -> Shell {
use libc::getpwuid;
use libc::getuid;
use std::ffi::CStr;
unsafe {
let uid = getuid();
let pw = getpwuid(uid);
if !pw.is_null() {
let shell_path = CStr::from_ptr((*pw).pw_shell)
.to_string_lossy()
.into_owned();
let home_path = CStr::from_ptr((*pw).pw_dir).to_string_lossy().into_owned();
if shell_path.ends_with("/zsh") {
return Shell::Zsh(ZshShell {
shell_path,
zshrc_path: format!("{home_path}/.zshrc"),
});
}
if shell_path.ends_with("/bash") {
return Shell::Bash(BashShell {
shell_path,
bashrc_path: format!("{home_path}/.bashrc"),
});
}
}
}
Shell::Unknown
}
#[cfg(unix)]
pub async fn default_user_shell() -> Shell {
detect_default_user_shell()
}
#[cfg(target_os = "windows")]
pub async fn default_user_shell() -> Shell {
use tokio::process::Command;
// Prefer PowerShell 7+ (`pwsh`) if available, otherwise fall back to Windows PowerShell.
let has_pwsh = Command::new("pwsh")
.arg("-NoLogo")
.arg("-NoProfile")
.arg("-Command")
.arg("$PSVersionTable.PSVersion.Major")
.output()
.await
.map(|o| o.status.success())
.unwrap_or(false);
let bash_exe = if Command::new("bash.exe")
.arg("--version")
.output()
.await
.ok()
.map(|o| o.status.success())
.unwrap_or(false)
{
which::which("bash.exe").ok()
} else {
None
};
if has_pwsh {
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: bash_exe,
})
} else {
Shell::PowerShell(PowerShellConfig {
exe: "powershell.exe".to_string(),
bash_exe_fallback: bash_exe,
})
}
}
#[cfg(all(not(target_os = "windows"), not(unix)))]
pub async fn default_user_shell() -> Shell {
Shell::Unknown
}
#[cfg(test)]
#[cfg(unix)]
mod tests {
use super::*;
use std::process::Command;
use std::string::ToString;
#[tokio::test]
async fn test_current_shell_detects_zsh() {
let shell = Command::new("sh")
.arg("-c")
.arg("echo $SHELL")
.output()
.unwrap();
let home = std::env::var("HOME").unwrap();
let shell_path = String::from_utf8_lossy(&shell.stdout).trim().to_string();
if shell_path.ends_with("/zsh") {
assert_eq!(
default_user_shell().await,
Shell::Zsh(ZshShell {
shell_path: shell_path.to_string(),
zshrc_path: format!("{home}/.zshrc",),
})
);
}
}
#[tokio::test]
async fn test_run_with_profile_zshrc_not_exists() {
let shell = Shell::Zsh(ZshShell {
shell_path: "/bin/zsh".to_string(),
zshrc_path: "/does/not/exist/.zshrc".to_string(),
});
let actual_cmd = shell.format_default_shell_invocation(vec!["myecho".to_string()]);
assert_eq!(
actual_cmd,
Some(vec![
"/bin/zsh".to_string(),
"-lc".to_string(),
"myecho".to_string()
])
);
}
#[tokio::test]
async fn test_run_with_profile_bashrc_not_exists() {
let shell = Shell::Bash(BashShell {
shell_path: "/bin/bash".to_string(),
bashrc_path: "/does/not/exist/.bashrc".to_string(),
});
let actual_cmd = shell.format_default_shell_invocation(vec!["myecho".to_string()]);
assert_eq!(
actual_cmd,
Some(vec![
"/bin/bash".to_string(),
"-lc".to_string(),
"myecho".to_string()
])
);
}
#[tokio::test]
async fn test_run_with_profile_bash_escaping_and_execution() {
let shell_path = "/bin/bash";
let cases = vec![
(
vec!["myecho"],
vec![shell_path, "-lc", "source BASHRC_PATH && (myecho)"],
Some("It works!\n"),
),
(
vec!["bash", "-lc", "echo 'single' \"double\""],
vec![
shell_path,
"-lc",
"source BASHRC_PATH && (echo 'single' \"double\")",
],
Some("single double\n"),
),
];
for (input, expected_cmd, expected_output) in cases {
use std::collections::HashMap;
use crate::exec::ExecParams;
use crate::exec::SandboxType;
use crate::exec::process_exec_tool_call;
use crate::protocol::SandboxPolicy;
let temp_home = tempfile::tempdir().unwrap();
let bashrc_path = temp_home.path().join(".bashrc");
std::fs::write(
&bashrc_path,
r#"
set -x
function myecho {
echo 'It works!'
}
"#,
)
.unwrap();
let shell = Shell::Bash(BashShell {
shell_path: shell_path.to_string(),
bashrc_path: bashrc_path.to_str().unwrap().to_string(),
});
let actual_cmd = shell
.format_default_shell_invocation(input.iter().map(ToString::to_string).collect());
let expected_cmd = expected_cmd
.iter()
.map(|s| s.replace("BASHRC_PATH", bashrc_path.to_str().unwrap()))
.collect();
assert_eq!(actual_cmd, Some(expected_cmd));
let output = process_exec_tool_call(
ExecParams {
command: actual_cmd.unwrap(),
cwd: PathBuf::from(temp_home.path()),
timeout_ms: None,
env: HashMap::from([(
"HOME".to_string(),
temp_home.path().to_str().unwrap().to_string(),
)]),
with_escalated_permissions: None,
justification: None,
},
SandboxType::None,
&SandboxPolicy::DangerFullAccess,
temp_home.path(),
&None,
None,
)
.await
.unwrap();
assert_eq!(output.exit_code, 0, "input: {input:?} output: {output:?}");
if let Some(expected) = expected_output {
assert_eq!(
output.stdout.text, expected,
"input: {input:?} output: {output:?}"
);
}
}
}
}
#[cfg(test)]
#[cfg(target_os = "macos")]
mod macos_tests {
use super::*;
use std::string::ToString;
#[tokio::test]
async fn test_run_with_profile_escaping_and_execution() {
let shell_path = "/bin/zsh";
let cases = vec![
(
vec!["myecho"],
vec![shell_path, "-lc", "source ZSHRC_PATH && (myecho)"],
Some("It works!\n"),
),
(
vec!["myecho"],
vec![shell_path, "-lc", "source ZSHRC_PATH && (myecho)"],
Some("It works!\n"),
),
(
vec!["bash", "-c", "echo 'single' \"double\""],
vec![
shell_path,
"-lc",
"source ZSHRC_PATH && (bash -c \"echo 'single' \\\"double\\\"\")",
],
Some("single double\n"),
),
(
vec!["bash", "-lc", "echo 'single' \"double\""],
vec![
shell_path,
"-lc",
"source ZSHRC_PATH && (echo 'single' \"double\")",
],
Some("single double\n"),
),
];
for (input, expected_cmd, expected_output) in cases {
use std::collections::HashMap;
use std::path::PathBuf;
use crate::exec::ExecParams;
use crate::exec::SandboxType;
use crate::exec::process_exec_tool_call;
use crate::protocol::SandboxPolicy;
// create a temp directory with a zshrc file in it
let temp_home = tempfile::tempdir().unwrap();
let zshrc_path = temp_home.path().join(".zshrc");
std::fs::write(
&zshrc_path,
r#"
set -x
function myecho {
echo 'It works!'
}
"#,
)
.unwrap();
let shell = Shell::Zsh(ZshShell {
shell_path: shell_path.to_string(),
zshrc_path: zshrc_path.to_str().unwrap().to_string(),
});
let actual_cmd = shell
.format_default_shell_invocation(input.iter().map(ToString::to_string).collect());
let expected_cmd = expected_cmd
.iter()
.map(|s| s.replace("ZSHRC_PATH", zshrc_path.to_str().unwrap()))
.collect();
assert_eq!(actual_cmd, Some(expected_cmd));
// Actually run the command and check output/exit code
let output = process_exec_tool_call(
ExecParams {
command: actual_cmd.unwrap(),
cwd: PathBuf::from(temp_home.path()),
timeout_ms: None,
env: HashMap::from([(
"HOME".to_string(),
temp_home.path().to_str().unwrap().to_string(),
)]),
with_escalated_permissions: None,
justification: None,
},
SandboxType::None,
&SandboxPolicy::DangerFullAccess,
temp_home.path(),
&None,
None,
)
.await
.unwrap();
assert_eq!(output.exit_code, 0, "input: {input:?} output: {output:?}");
if let Some(expected) = expected_output {
assert_eq!(
output.stdout.text, expected,
"input: {input:?} output: {output:?}"
);
}
}
}
}
#[cfg(test)]
#[cfg(target_os = "windows")]
mod tests_windows {
use super::*;
#[test]
fn test_format_default_shell_invocation_powershell() {
let cases = vec![
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: None,
}),
vec!["bash", "-lc", "echo hello"],
vec!["pwsh.exe", "-NoProfile", "-Command", "echo hello"],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "powershell.exe".to_string(),
bash_exe_fallback: None,
}),
vec!["bash", "-lc", "echo hello"],
vec!["powershell.exe", "-NoProfile", "-Command", "echo hello"],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec!["bash", "-lc", "echo hello"],
vec!["bash.exe", "-lc", "echo hello"],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec![
"bash",
"-lc",
"apply_patch <<'EOF'\n*** Begin Patch\n*** Update File: destination_file.txt\n-original content\n+modified content\n*** End Patch\nEOF",
],
vec![
"bash.exe",
"-lc",
"apply_patch <<'EOF'\n*** Begin Patch\n*** Update File: destination_file.txt\n-original content\n+modified content\n*** End Patch\nEOF",
],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec!["echo", "hello"],
vec!["pwsh.exe", "-NoProfile", "-Command", "echo hello"],
),
(
Shell::PowerShell(PowerShellConfig {
exe: "pwsh.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec!["pwsh.exe", "-NoProfile", "-Command", "echo hello"],
vec!["pwsh.exe", "-NoProfile", "-Command", "echo hello"],
),
(
// TODO (CODEX_2900): Handle escaping newlines for powershell invocation.
Shell::PowerShell(PowerShellConfig {
exe: "powershell.exe".to_string(),
bash_exe_fallback: Some(PathBuf::from("bash.exe")),
}),
vec![
"codex-mcp-server.exe",
"--codex-run-as-apply-patch",
"*** Begin Patch\n*** Update File: C:\\Users\\person\\destination_file.txt\n-original content\n+modified content\n*** End Patch",
],
vec![
"codex-mcp-server.exe",
"--codex-run-as-apply-patch",
"*** Begin Patch\n*** Update File: C:\\Users\\person\\destination_file.txt\n-original content\n+modified content\n*** End Patch",
],
),
];
for (shell, input, expected_cmd) in cases {
let actual_cmd = shell
.format_default_shell_invocation(input.iter().map(|s| (*s).to_string()).collect());
assert_eq!(
actual_cmd,
Some(expected_cmd.iter().map(|s| (*s).to_string()).collect())
);
}
}
}

View File

@@ -1,6 +1,9 @@
mod service;
mod session;
mod turn;
pub(crate) use codex_agent::session_state::SessionState;
pub(crate) use service::SessionServices;
pub(crate) use session::SessionState;
pub(crate) use turn::ActiveTurn;
pub(crate) use turn::RunningTask;
pub(crate) use turn::TaskKind;

View File

@@ -0,0 +1,20 @@
use crate::RolloutRecorder;
use crate::exec_command::ExecSessionManager;
use crate::git_worktree::WorktreeHandle;
use crate::mcp_connection_manager::McpConnectionManager;
use crate::unified_exec::UnifiedExecSessionManager;
use crate::user_notification::UserNotifier;
use std::path::PathBuf;
use tokio::sync::Mutex;
pub(crate) struct SessionServices {
pub(crate) mcp_connection_manager: McpConnectionManager,
pub(crate) session_manager: ExecSessionManager,
pub(crate) unified_exec_manager: UnifiedExecSessionManager,
pub(crate) notifier: UserNotifier,
pub(crate) rollout: Mutex<Option<RolloutRecorder>>,
pub(crate) worktree: Mutex<Option<WorktreeHandle>>,
pub(crate) codex_linux_sandbox_exe: Option<PathBuf>,
pub(crate) user_shell: crate::shell::Shell,
pub(crate) show_raw_agent_reasoning: bool,
}

View File

@@ -1,24 +1,26 @@
//! Session-wide mutable state.
use std::collections::HashSet;
use codex_protocol::models::ResponseItem;
use codex_protocol::protocol::RateLimitSnapshot;
use codex_protocol::protocol::TokenUsage;
use codex_protocol::protocol::TokenUsageInfo;
use crate::conversation_history::ConversationHistory;
use crate::protocol::RateLimitSnapshot;
use crate::protocol::TokenUsage;
use crate::protocol::TokenUsageInfo;
/// Persistent, session-scoped state previously stored directly on `Session`.
#[derive(Default)]
pub struct SessionState {
approved_commands: HashSet<Vec<String>>,
history: ConversationHistory,
token_info: Option<TokenUsageInfo>,
latest_rate_limits: Option<RateLimitSnapshot>,
pub(crate) struct SessionState {
pub(crate) approved_commands: HashSet<Vec<String>>,
pub(crate) history: ConversationHistory,
pub(crate) token_info: Option<TokenUsageInfo>,
pub(crate) latest_rate_limits: Option<RateLimitSnapshot>,
}
impl SessionState {
/// Create a new session state mirroring previous `State::default()` semantics.
pub fn new() -> Self {
pub(crate) fn new() -> Self {
Self {
history: ConversationHistory::new(),
..Default::default()
@@ -26,7 +28,7 @@ impl SessionState {
}
// History helpers
pub fn record_items<I>(&mut self, items: I)
pub(crate) fn record_items<I>(&mut self, items: I)
where
I: IntoIterator,
I::Item: std::ops::Deref<Target = ResponseItem>,
@@ -34,25 +36,25 @@ impl SessionState {
self.history.record_items(items)
}
pub fn history_snapshot(&self) -> Vec<ResponseItem> {
pub(crate) fn history_snapshot(&self) -> Vec<ResponseItem> {
self.history.contents()
}
pub fn replace_history(&mut self, items: Vec<ResponseItem>) {
pub(crate) fn replace_history(&mut self, items: Vec<ResponseItem>) {
self.history.replace(items);
}
// Approved command helpers
pub fn add_approved_command(&mut self, cmd: Vec<String>) {
pub(crate) fn add_approved_command(&mut self, cmd: Vec<String>) {
self.approved_commands.insert(cmd);
}
pub fn approved_commands_ref(&self) -> &HashSet<Vec<String>> {
pub(crate) fn approved_commands_ref(&self) -> &HashSet<Vec<String>> {
&self.approved_commands
}
// Token/rate limit helpers
pub fn update_token_info_from_usage(
pub(crate) fn update_token_info_from_usage(
&mut self,
usage: &TokenUsage,
model_context_window: Option<u64>,
@@ -64,13 +66,15 @@ impl SessionState {
);
}
pub fn set_rate_limits(&mut self, snapshot: RateLimitSnapshot) {
pub(crate) fn set_rate_limits(&mut self, snapshot: RateLimitSnapshot) {
self.latest_rate_limits = Some(snapshot);
}
pub fn token_info_and_rate_limits(
pub(crate) fn token_info_and_rate_limits(
&self,
) -> (Option<TokenUsageInfo>, Option<RateLimitSnapshot>) {
(self.token_info.clone(), self.latest_rate_limits.clone())
}
// Pending input/approval moved to TurnState.
}
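The helpers above are thin wrappers over the session's history, approvals, and token accounting. A minimal hypothetical in-crate test sketching how they compose (method names and signatures are taken from this diff; the assumption that a fresh `ConversationHistory` is empty is mine):

```
#[cfg(test)]
mod session_state_sketch {
    use super::SessionState;

    #[test]
    fn round_trips_session_scoped_state() {
        let mut state = SessionState::new();

        // Approved commands accumulate for the lifetime of the session.
        let cmd = vec!["git".to_string(), "status".to_string()];
        state.add_approved_command(cmd.clone());
        assert!(state.approved_commands_ref().contains(&cmd));

        // History starts empty (assumed); `replace_history` swaps it wholesale.
        assert!(state.history_snapshot().is_empty());
        state.replace_history(Vec::new());

        // No token usage or rate limits have been observed yet.
        let (token_info, rate_limits) = state.token_info_and_rate_limits();
        assert!(token_info.is_none());
        assert!(rate_limits.is_none());
    }
}
```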

View File

@@ -1 +1,182 @@
pub use codex_agent::token_data::*;
use base64::Engine;
use serde::Deserialize;
use serde::Serialize;
use thiserror::Error;
#[derive(Deserialize, Serialize, Clone, Debug, PartialEq, Default)]
pub struct TokenData {
/// Flat info parsed from the JWT in auth.json.
#[serde(
deserialize_with = "deserialize_id_token",
serialize_with = "serialize_id_token"
)]
pub id_token: IdTokenInfo,
/// This is a JWT.
pub access_token: String,
pub refresh_token: String,
pub account_id: Option<String>,
}
/// Flat subset of useful claims in id_token from auth.json.
#[derive(Debug, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
pub struct IdTokenInfo {
pub email: Option<String>,
/// The ChatGPT subscription plan type
/// (e.g., "free", "plus", "pro", "business", "enterprise", "edu").
/// (Note: values may vary by backend.)
pub(crate) chatgpt_plan_type: Option<PlanType>,
pub raw_jwt: String,
}
impl IdTokenInfo {
pub fn get_chatgpt_plan_type(&self) -> Option<String> {
self.chatgpt_plan_type.as_ref().map(|t| match t {
PlanType::Known(plan) => format!("{plan:?}"),
PlanType::Unknown(s) => s.clone(),
})
}
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(untagged)]
pub(crate) enum PlanType {
Known(KnownPlan),
Unknown(String),
}
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub(crate) enum KnownPlan {
Free,
Plus,
Pro,
Team,
Business,
Enterprise,
Edu,
}
#[derive(Deserialize)]
struct IdClaims {
#[serde(default)]
email: Option<String>,
#[serde(rename = "https://api.openai.com/auth", default)]
auth: Option<AuthClaims>,
}
#[derive(Deserialize)]
struct AuthClaims {
#[serde(default)]
chatgpt_plan_type: Option<PlanType>,
}
#[derive(Debug, Error)]
pub enum IdTokenInfoError {
#[error("invalid ID token format")]
InvalidFormat,
#[error(transparent)]
Base64(#[from] base64::DecodeError),
#[error(transparent)]
Json(#[from] serde_json::Error),
}
pub fn parse_id_token(id_token: &str) -> Result<IdTokenInfo, IdTokenInfoError> {
// JWT format: header.payload.signature
let mut parts = id_token.split('.');
let (_header_b64, payload_b64, _sig_b64) = match (parts.next(), parts.next(), parts.next()) {
(Some(h), Some(p), Some(s)) if !h.is_empty() && !p.is_empty() && !s.is_empty() => (h, p, s),
_ => return Err(IdTokenInfoError::InvalidFormat),
};
let payload_bytes = base64::engine::general_purpose::URL_SAFE_NO_PAD.decode(payload_b64)?;
let claims: IdClaims = serde_json::from_slice(&payload_bytes)?;
Ok(IdTokenInfo {
email: claims.email,
chatgpt_plan_type: claims.auth.and_then(|a| a.chatgpt_plan_type),
raw_jwt: id_token.to_string(),
})
}
fn deserialize_id_token<'de, D>(deserializer: D) -> Result<IdTokenInfo, D::Error>
where
D: serde::Deserializer<'de>,
{
let s = String::deserialize(deserializer)?;
parse_id_token(&s).map_err(serde::de::Error::custom)
}
fn serialize_id_token<S>(id_token: &IdTokenInfo, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serializer.serialize_str(&id_token.raw_jwt)
}
#[cfg(test)]
mod tests {
use super::*;
use serde::Serialize;
#[test]
fn id_token_info_parses_email_and_plan() {
#[derive(Serialize)]
struct Header {
alg: &'static str,
typ: &'static str,
}
let header = Header {
alg: "none",
typ: "JWT",
};
let payload = serde_json::json!({
"email": "user@example.com",
"https://api.openai.com/auth": {
"chatgpt_plan_type": "pro"
}
});
fn b64url_no_pad(bytes: &[u8]) -> String {
base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(bytes)
}
let header_b64 = b64url_no_pad(&serde_json::to_vec(&header).unwrap());
let payload_b64 = b64url_no_pad(&serde_json::to_vec(&payload).unwrap());
let signature_b64 = b64url_no_pad(b"sig");
let fake_jwt = format!("{header_b64}.{payload_b64}.{signature_b64}");
let info = parse_id_token(&fake_jwt).expect("should parse");
assert_eq!(info.email.as_deref(), Some("user@example.com"));
assert_eq!(info.get_chatgpt_plan_type().as_deref(), Some("Pro"));
}
#[test]
fn id_token_info_handles_missing_fields() {
#[derive(Serialize)]
struct Header {
alg: &'static str,
typ: &'static str,
}
let header = Header {
alg: "none",
typ: "JWT",
};
let payload = serde_json::json!({ "sub": "123" });
fn b64url_no_pad(bytes: &[u8]) -> String {
base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(bytes)
}
let header_b64 = b64url_no_pad(&serde_json::to_vec(&header).unwrap());
let payload_b64 = b64url_no_pad(&serde_json::to_vec(&payload).unwrap());
let signature_b64 = b64url_no_pad(b"sig");
let fake_jwt = format!("{header_b64}.{payload_b64}.{signature_b64}");
let info = parse_id_token(&fake_jwt).expect("should parse");
assert!(info.email.is_none());
assert!(info.get_chatgpt_plan_type().is_none());
}
}
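The custom serializers above persist `TokenData` as the raw JWT string rather than the parsed struct. A hypothetical test sketching that round trip, reusing the fake-JWT construction from the tests above (the claim values are made up):

```
#[test]
fn token_data_round_trips_raw_jwt() {
    use base64::Engine;

    let payload = serde_json::json!({
        "email": "user@example.com",
        "https://api.openai.com/auth": { "chatgpt_plan_type": "plus" }
    });
    let b64 = |b: &[u8]| base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(b);
    let jwt = format!(
        "{}.{}.{}",
        b64(br#"{"alg":"none","typ":"JWT"}"#),
        b64(&serde_json::to_vec(&payload).unwrap()),
        b64(b"sig"),
    );

    let data: TokenData = serde_json::from_value(serde_json::json!({
        "id_token": jwt,
        "access_token": "access",
        "refresh_token": "refresh",
        "account_id": null,
    }))
    .unwrap();
    assert_eq!(data.id_token.email.as_deref(), Some("user@example.com"));
    assert_eq!(data.id_token.get_chatgpt_plan_type().as_deref(), Some("Plus"));

    // serialize_id_token writes back the raw JWT, so the round trip is lossless.
    let out = serde_json::to_value(&data).unwrap();
    assert_eq!(out["id_token"].as_str(), Some(data.id_token.raw_jwt.as_str()));
}
```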

View File

@@ -1,3 +1,5 @@
use serde::Deserialize;
use serde::Serialize;
use std::collections::BTreeMap;
use crate::openai_tools::FreeformTool;
@@ -5,10 +7,16 @@ use crate::openai_tools::FreeformToolFormat;
use crate::openai_tools::JsonSchema;
use crate::openai_tools::OpenAiTool;
use crate::openai_tools::ResponsesApiTool;
pub use codex_agent::ApplyPatchToolType;
const APPLY_PATCH_LARK_GRAMMAR: &str = include_str!("tool_apply_patch.lark");
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[serde(rename_all = "snake_case")]
pub enum ApplyPatchToolType {
Freeform,
Function,
}
/// Returns a custom tool that can be used to edit files. Well-suited for GPT-5 models
/// https://platform.openai.com/docs/guides/function-calling#custom-tools
pub(crate) fn create_apply_patch_freeform_tool() -> OpenAiTool {

View File

@@ -1 +1,180 @@
pub use codex_agent::truncate::*;
//! Utilities for truncating large chunks of output while preserving a prefix
//! and suffix on UTF-8 boundaries.
/// Truncate the middle of a UTF-8 string to at most `max_bytes` bytes,
/// preserving the beginning and the end. Returns the possibly truncated
/// string and `Some(original_token_count)` (estimated at 4 bytes/token)
/// if truncation occurred; otherwise returns the original string and `None`.
pub(crate) fn truncate_middle(s: &str, max_bytes: usize) -> (String, Option<u64>) {
if s.len() <= max_bytes {
return (s.to_string(), None);
}
let est_tokens = (s.len() as u64).div_ceil(4);
if max_bytes == 0 {
return (format!("…{est_tokens} tokens truncated…"), Some(est_tokens));
}
fn truncate_on_boundary(input: &str, max_len: usize) -> &str {
if input.len() <= max_len {
return input;
}
let mut end = max_len;
while end > 0 && !input.is_char_boundary(end) {
end -= 1;
}
&input[..end]
}
fn pick_prefix_end(s: &str, left_budget: usize) -> usize {
if let Some(head) = s.get(..left_budget)
&& let Some(i) = head.rfind('\n')
{
return i + 1;
}
truncate_on_boundary(s, left_budget).len()
}
fn pick_suffix_start(s: &str, right_budget: usize) -> usize {
let start_tail = s.len().saturating_sub(right_budget);
if let Some(tail) = s.get(start_tail..)
&& let Some(i) = tail.find('\n')
{
return start_tail + i + 1;
}
let mut idx = start_tail.min(s.len());
while idx < s.len() && !s.is_char_boundary(idx) {
idx += 1;
}
idx
}
let mut guess_tokens = est_tokens;
for _ in 0..4 {
let marker = format!("…{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
return (format!("…{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;
let prefix_end = pick_prefix_end(s, left_budget);
let mut suffix_start = pick_suffix_start(s, right_budget);
if suffix_start < prefix_end {
suffix_start = prefix_end;
}
let kept_content_bytes = prefix_end + (s.len() - suffix_start);
let truncated_content_bytes = s.len().saturating_sub(kept_content_bytes);
let new_tokens = (truncated_content_bytes as u64).div_ceil(4);
if new_tokens == guess_tokens {
let mut out = String::with_capacity(marker_len + kept_content_bytes + 1);
out.push_str(&s[..prefix_end]);
out.push_str(&marker);
out.push('\n');
out.push_str(&s[suffix_start..]);
return (out, Some(est_tokens));
}
guess_tokens = new_tokens;
}
let marker = format!("…{guess_tokens} tokens truncated…");
let marker_len = marker.len();
let keep_budget = max_bytes.saturating_sub(marker_len);
if keep_budget == 0 {
return (format!("…{est_tokens} tokens truncated…"), Some(est_tokens));
}
let left_budget = keep_budget / 2;
let right_budget = keep_budget - left_budget;
let prefix_end = pick_prefix_end(s, left_budget);
let suffix_start = pick_suffix_start(s, right_budget);
let mut out = String::with_capacity(marker_len + prefix_end + (s.len() - suffix_start) + 1);
out.push_str(&s[..prefix_end]);
out.push_str(&marker);
out.push('\n');
out.push_str(&s[suffix_start..]);
(out, Some(est_tokens))
}
#[cfg(test)]
mod tests {
use super::truncate_middle;
#[test]
fn truncate_middle_no_newlines_fallback() {
let s = "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ*";
let max_bytes = 32;
let (out, original) = truncate_middle(s, max_bytes);
assert!(out.starts_with("abc"));
assert!(out.contains("tokens truncated"));
assert!(out.ends_with("XYZ*"));
assert_eq!(original, Some((s.len() as u64).div_ceil(4)));
}
#[test]
fn truncate_middle_prefers_newline_boundaries() {
let mut s = String::new();
for i in 1..=20 {
s.push_str(&format!("{i:03}\n"));
}
assert_eq!(s.len(), 80);
let max_bytes = 64;
let (out, tokens) = truncate_middle(&s, max_bytes);
assert!(out.starts_with("001\n002\n003\n004\n"));
assert!(out.contains("tokens truncated"));
assert!(out.ends_with("017\n018\n019\n020\n"));
assert_eq!(tokens, Some(20));
}
#[test]
fn truncate_middle_handles_utf8_content() {
let s = "😀😀😀😀😀😀😀😀😀😀\nsecond line with ascii text\n";
let max_bytes = 32;
let (out, tokens) = truncate_middle(s, max_bytes);
assert!(out.contains("tokens truncated"));
assert!(!out.contains('\u{fffd}'));
assert_eq!(tokens, Some((s.len() as u64).div_ceil(4)));
}
#[test]
fn truncate_middle_prefers_newline_boundaries_2() {
// Build a multi-line string of 20 numbered lines (each "NNN\n").
let mut s = String::new();
for i in 1..=20 {
s.push_str(&format!("{i:03}\n"));
}
// Total length: 20 lines * 4 bytes per line = 80 bytes.
assert_eq!(s.len(), 80);
// Choose a cap that forces truncation while leaving room for
// a few lines on each side after accounting for the marker.
let max_bytes = 64;
// Expect exact output: first 4 lines, marker, last 4 lines, and correct token estimate (80/4 = 20).
assert_eq!(
truncate_middle(&s, max_bytes),
(
r#"001
002
003
004
…12 tokens truncated…
017
018
019
020
"#
.to_string(),
Some(20)
)
);
}
}
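`truncate_middle` is `pub(crate)`, so callers inside the crate use it to cap large tool output before it enters the conversation. A minimal hypothetical test in the same module (the byte budget is an assumption for this sketch):

```
#[test]
fn caps_tool_output_sketch() {
    // Assumed budget for this sketch only.
    const MAX_OUTPUT_BYTES: usize = 10 * 1024;

    let raw = "line\n".repeat(10_000); // ~50 KiB of output
    let (shown, truncated) = truncate_middle(&raw, MAX_OUTPUT_BYTES);

    assert!(shown.contains("tokens truncated"));
    // The token estimate covers the whole original string: ceil(bytes / 4).
    assert_eq!(truncated, Some((raw.len() as u64).div_ceil(4)));
}
```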

View File

@@ -1 +1,896 @@
pub use codex_agent::turn_diff_tracker::*;
use std::collections::HashMap;
use std::fs;
use std::path::Path;
use std::path::PathBuf;
use std::process::Command;
use anyhow::Context;
use anyhow::Result;
use anyhow::anyhow;
use sha1::digest::Output;
use uuid::Uuid;
use crate::protocol::FileChange;
const ZERO_OID: &str = "0000000000000000000000000000000000000000";
const DEV_NULL: &str = "/dev/null";
struct BaselineFileInfo {
path: PathBuf,
content: Vec<u8>,
mode: FileMode,
oid: String,
}
/// Tracks sets of changes to files and exposes the overall unified diff.
/// Internally, this works as follows:
/// 1. Maintain an in-memory baseline snapshot of files when they are first seen.
/// For new additions, do not create a baseline so that diffs are shown as proper additions (using /dev/null).
/// 2. Keep a stable internal filename (uuid) per external path for rename tracking.
/// 3. To compute the aggregated unified diff, compare each baseline snapshot to the current file on disk entirely in-memory
/// using the `similar` crate and emit unified diffs with rewritten external paths.
#[derive(Default)]
pub struct TurnDiffTracker {
/// Map external path -> internal filename (uuid).
external_to_temp_name: HashMap<PathBuf, String>,
/// Internal filename -> baseline file info.
baseline_file_info: HashMap<String, BaselineFileInfo>,
/// Internal filename -> external path as of current accumulated state (after applying all changes).
/// This is where renames are tracked.
temp_name_to_current_path: HashMap<String, PathBuf>,
/// Cache of known git worktree roots to avoid repeated filesystem walks.
git_root_cache: Vec<PathBuf>,
}
impl TurnDiffTracker {
pub fn new() -> Self {
Self::default()
}
/// Front-run apply patch calls to track the starting contents of any modified files.
/// - Creates an in-memory baseline snapshot for files that already exist on disk when first seen.
/// - For additions, we intentionally do not create a baseline snapshot so that diffs are proper additions.
/// - Also updates internal mappings for move/rename events.
pub fn on_patch_begin(&mut self, changes: &HashMap<PathBuf, FileChange>) {
for (path, change) in changes.iter() {
// Ensure a stable internal filename exists for this external path.
if !self.external_to_temp_name.contains_key(path) {
let internal = Uuid::new_v4().to_string();
self.external_to_temp_name
.insert(path.clone(), internal.clone());
self.temp_name_to_current_path
.insert(internal.clone(), path.clone());
// If the file exists on disk now, snapshot as baseline; else leave missing to represent /dev/null.
let baseline_file_info = if path.exists() {
let mode = file_mode_for_path(path);
let mode_val = mode.unwrap_or(FileMode::Regular);
let content = blob_bytes(path, mode_val).unwrap_or_default();
let oid = if mode == Some(FileMode::Symlink) {
format!("{:x}", git_blob_sha1_hex_bytes(&content))
} else {
self.git_blob_oid_for_path(path)
.unwrap_or_else(|| format!("{:x}", git_blob_sha1_hex_bytes(&content)))
};
Some(BaselineFileInfo {
path: path.clone(),
content,
mode: mode_val,
oid,
})
} else {
Some(BaselineFileInfo {
path: path.clone(),
content: vec![],
mode: FileMode::Regular,
oid: ZERO_OID.to_string(),
})
};
if let Some(baseline_file_info) = baseline_file_info {
self.baseline_file_info
.insert(internal.clone(), baseline_file_info);
}
}
// Track rename/move in current mapping if provided in an Update.
if let FileChange::Update {
move_path: Some(dest),
..
} = change
{
let uuid_filename = match self.external_to_temp_name.get(path) {
Some(i) => i.clone(),
None => {
// This should be rare, but if we haven't mapped the source, create it with no baseline.
let i = Uuid::new_v4().to_string();
self.baseline_file_info.insert(
i.clone(),
BaselineFileInfo {
path: path.clone(),
content: vec![],
mode: FileMode::Regular,
oid: ZERO_OID.to_string(),
},
);
i
}
};
// Update current external mapping for temp file name.
self.temp_name_to_current_path
.insert(uuid_filename.clone(), dest.clone());
// Update forward file_mapping: external current -> internal name.
self.external_to_temp_name.remove(path);
self.external_to_temp_name
.insert(dest.clone(), uuid_filename);
};
}
}
fn get_path_for_internal(&self, internal: &str) -> Option<PathBuf> {
self.temp_name_to_current_path
.get(internal)
.cloned()
.or_else(|| {
self.baseline_file_info
.get(internal)
.map(|info| info.path.clone())
})
}
/// Find the git worktree root for a file/directory by walking up to the first ancestor containing a `.git` entry.
/// Uses a simple cache of known roots and avoids negative-result caching for simplicity.
fn find_git_root_cached(&mut self, start: &Path) -> Option<PathBuf> {
let dir = if start.is_dir() {
start
} else {
start.parent()?
};
// Fast path: if any cached root is an ancestor of this path, use it.
if let Some(root) = self
.git_root_cache
.iter()
.find(|r| dir.starts_with(r))
.cloned()
{
return Some(root);
}
// Walk up to find a `.git` marker.
let mut cur = dir.to_path_buf();
loop {
let git_marker = cur.join(".git");
if git_marker.is_dir() || git_marker.is_file() {
if !self.git_root_cache.iter().any(|r| r == &cur) {
self.git_root_cache.push(cur.clone());
}
return Some(cur);
}
// On Windows, avoid walking above the drive or UNC share root.
#[cfg(windows)]
{
if is_windows_drive_or_unc_root(&cur) {
return None;
}
}
if let Some(parent) = cur.parent() {
cur = parent.to_path_buf();
} else {
return None;
}
}
}
/// Return a display string for `path` relative to its git root if found, else absolute.
fn relative_to_git_root_str(&mut self, path: &Path) -> String {
let s = if let Some(root) = self.find_git_root_cached(path) {
if let Ok(rel) = path.strip_prefix(&root) {
rel.display().to_string()
} else {
path.display().to_string()
}
} else {
path.display().to_string()
};
s.replace('\\', "/")
}
/// Ask git to compute the blob SHA-1 for the file at `path` within its repository.
/// Returns None if no repository is found or git invocation fails.
fn git_blob_oid_for_path(&mut self, path: &Path) -> Option<String> {
let root = self.find_git_root_cached(path)?;
// Compute a path relative to the repo root for better portability across platforms.
let rel = path.strip_prefix(&root).unwrap_or(path);
let output = Command::new("git")
.arg("-C")
.arg(&root)
.arg("hash-object")
.arg("--")
.arg(rel)
.output()
.ok()?;
if !output.status.success() {
return None;
}
let s = String::from_utf8_lossy(&output.stdout).trim().to_string();
if s.len() == 40 { Some(s) } else { None }
}
/// Recompute the aggregated unified diff by comparing all of the in-memory snapshots that were
/// collected before the first time they were touched by apply_patch during this turn with
/// the current repo state.
pub fn get_unified_diff(&mut self) -> Result<Option<String>> {
let mut aggregated = String::new();
// Compute diffs per tracked internal file in a stable order by external path.
let mut baseline_file_names: Vec<String> =
self.baseline_file_info.keys().cloned().collect();
// Sort lexicographically by full repo-relative path to match git behavior.
baseline_file_names.sort_by_key(|internal| {
self.get_path_for_internal(internal)
.map(|p| self.relative_to_git_root_str(&p))
.unwrap_or_default()
});
for internal in baseline_file_names {
aggregated.push_str(self.get_file_diff(&internal).as_str());
if !aggregated.ends_with('\n') {
aggregated.push('\n');
}
}
if aggregated.trim().is_empty() {
Ok(None)
} else {
Ok(Some(aggregated))
}
}
fn get_file_diff(&mut self, internal_file_name: &str) -> String {
let mut aggregated = String::new();
// Snapshot lightweight fields only.
let (baseline_external_path, baseline_mode, left_oid) = {
if let Some(info) = self.baseline_file_info.get(internal_file_name) {
(info.path.clone(), info.mode, info.oid.clone())
} else {
(PathBuf::new(), FileMode::Regular, ZERO_OID.to_string())
}
};
let current_external_path = match self.get_path_for_internal(internal_file_name) {
Some(p) => p,
None => return aggregated,
};
let current_mode = file_mode_for_path(&current_external_path).unwrap_or(FileMode::Regular);
let right_bytes = blob_bytes(&current_external_path, current_mode);
// Compute displays with &mut self before borrowing any baseline content.
let left_display = self.relative_to_git_root_str(&baseline_external_path);
let right_display = self.relative_to_git_root_str(&current_external_path);
// Compute right oid before borrowing baseline content.
let right_oid = if let Some(b) = right_bytes.as_ref() {
if current_mode == FileMode::Symlink {
format!("{:x}", git_blob_sha1_hex_bytes(b))
} else {
self.git_blob_oid_for_path(&current_external_path)
.unwrap_or_else(|| format!("{:x}", git_blob_sha1_hex_bytes(b)))
}
} else {
ZERO_OID.to_string()
};
// Borrow baseline content only after all &mut self uses are done.
let left_present = left_oid.as_str() != ZERO_OID;
let left_bytes: Option<&[u8]> = if left_present {
self.baseline_file_info
.get(internal_file_name)
.map(|i| i.content.as_slice())
} else {
None
};
// Fast path: identical bytes or both missing.
if left_bytes == right_bytes.as_deref() {
return aggregated;
}
aggregated.push_str(&format!("diff --git a/{left_display} b/{right_display}\n"));
let is_add = !left_present && right_bytes.is_some();
let is_delete = left_present && right_bytes.is_none();
if is_add {
aggregated.push_str(&format!("new file mode {current_mode}\n"));
} else if is_delete {
aggregated.push_str(&format!("deleted file mode {baseline_mode}\n"));
} else if baseline_mode != current_mode {
aggregated.push_str(&format!("old mode {baseline_mode}\n"));
aggregated.push_str(&format!("new mode {current_mode}\n"));
}
let left_text = left_bytes.and_then(|b| std::str::from_utf8(b).ok());
let right_text = right_bytes
.as_deref()
.and_then(|b| std::str::from_utf8(b).ok());
let can_text_diff = matches!(
(left_text, right_text, is_add, is_delete),
(Some(_), Some(_), _, _) | (_, Some(_), true, _) | (Some(_), _, _, true)
);
if can_text_diff {
let l = left_text.unwrap_or("");
let r = right_text.unwrap_or("");
aggregated.push_str(&format!("index {left_oid}..{right_oid}\n"));
let old_header = if left_present {
format!("a/{left_display}")
} else {
DEV_NULL.to_string()
};
let new_header = if right_bytes.is_some() {
format!("b/{right_display}")
} else {
DEV_NULL.to_string()
};
let diff = similar::TextDiff::from_lines(l, r);
let unified = diff
.unified_diff()
.context_radius(3)
.header(&old_header, &new_header)
.to_string();
aggregated.push_str(&unified);
} else {
aggregated.push_str(&format!("index {left_oid}..{right_oid}\n"));
let old_header = if left_present {
format!("a/{left_display}")
} else {
DEV_NULL.to_string()
};
let new_header = if right_bytes.is_some() {
format!("b/{right_display}")
} else {
DEV_NULL.to_string()
};
aggregated.push_str(&format!("--- {old_header}\n"));
aggregated.push_str(&format!("+++ {new_header}\n"));
aggregated.push_str("Binary files differ\n");
}
aggregated
}
}
/// Compute the Git SHA-1 blob object ID for the given content (bytes).
fn git_blob_sha1_hex_bytes(data: &[u8]) -> Output<sha1::Sha1> {
// Git blob hash is sha1 of: "blob <len>\0<data>"
let header = format!("blob {}\0", data.len());
use sha1::Digest;
let mut hasher = sha1::Sha1::new();
hasher.update(header.as_bytes());
hasher.update(data);
hasher.finalize()
}
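// Sketch (not part of this diff): the "blob <len>\0<data>" framing above
// reproduces Git's well-known empty-blob OID, sha1("blob 0\0").
#[cfg(test)]
mod blob_oid_sketch {
    use super::git_blob_sha1_hex_bytes;

    #[test]
    fn empty_blob_matches_git() {
        assert_eq!(
            format!("{:x}", git_blob_sha1_hex_bytes(b"")),
            "e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"
        );
    }
}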
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum FileMode {
Regular,
#[cfg(unix)]
Executable,
Symlink,
}
impl FileMode {
fn as_str(self) -> &'static str {
match self {
FileMode::Regular => "100644",
#[cfg(unix)]
FileMode::Executable => "100755",
FileMode::Symlink => "120000",
}
}
}
impl std::fmt::Display for FileMode {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(self.as_str())
}
}
#[cfg(unix)]
fn file_mode_for_path(path: &Path) -> Option<FileMode> {
use std::os::unix::fs::PermissionsExt;
let meta = fs::symlink_metadata(path).ok()?;
let ft = meta.file_type();
if ft.is_symlink() {
return Some(FileMode::Symlink);
}
let mode = meta.permissions().mode();
let is_exec = (mode & 0o111) != 0;
Some(if is_exec {
FileMode::Executable
} else {
FileMode::Regular
})
}
#[cfg(not(unix))]
fn file_mode_for_path(_path: &Path) -> Option<FileMode> {
// Default to non-executable on non-unix.
Some(FileMode::Regular)
}
fn blob_bytes(path: &Path, mode: FileMode) -> Option<Vec<u8>> {
if path.exists() {
let contents = if mode == FileMode::Symlink {
symlink_blob_bytes(path)
.ok_or_else(|| anyhow!("failed to read symlink target for {}", path.display()))
} else {
fs::read(path)
.with_context(|| format!("failed to read current file for diff {}", path.display()))
};
contents.ok()
} else {
None
}
}
#[cfg(unix)]
fn symlink_blob_bytes(path: &Path) -> Option<Vec<u8>> {
use std::os::unix::ffi::OsStrExt;
let target = std::fs::read_link(path).ok()?;
Some(target.as_os_str().as_bytes().to_vec())
}
#[cfg(not(unix))]
fn symlink_blob_bytes(_path: &Path) -> Option<Vec<u8>> {
None
}
#[cfg(windows)]
fn is_windows_drive_or_unc_root(p: &std::path::Path) -> bool {
use std::path::Component;
let mut comps = p.components();
matches!(
(comps.next(), comps.next(), comps.next()),
(Some(Component::Prefix(_)), Some(Component::RootDir), None)
)
}
#[cfg(test)]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
use tempfile::tempdir;
/// Compute the Git SHA-1 blob object ID for the given content (string).
/// This delegates to the bytes version to avoid UTF-8 lossy conversions here.
fn git_blob_sha1_hex(data: &str) -> String {
format!("{:x}", git_blob_sha1_hex_bytes(data.as_bytes()))
}
fn normalize_diff_for_test(input: &str, root: &Path) -> String {
let root_str = root.display().to_string().replace('\\', "/");
let replaced = input.replace(&root_str, "<TMP>");
// Split into blocks on lines starting with "diff --git ", sort blocks for determinism, and rejoin
let mut blocks: Vec<String> = Vec::new();
let mut current = String::new();
for line in replaced.lines() {
if line.starts_with("diff --git ") && !current.is_empty() {
blocks.push(current);
current = String::new();
}
if !current.is_empty() {
current.push('\n');
}
current.push_str(line);
}
if !current.is_empty() {
blocks.push(current);
}
blocks.sort();
let mut out = blocks.join("\n");
if !out.ends_with('\n') {
out.push('\n');
}
out
}
#[test]
fn accumulates_add_and_update() {
let mut acc = TurnDiffTracker::new();
let dir = tempdir().unwrap();
let file = dir.path().join("a.txt");
// First patch: add file (baseline should be /dev/null).
let add_changes = HashMap::from([(
file.clone(),
FileChange::Add {
content: "foo\n".to_string(),
},
)]);
acc.on_patch_begin(&add_changes);
// Simulate apply: create the file on disk.
fs::write(&file, "foo\n").unwrap();
let first = acc.get_unified_diff().unwrap().unwrap();
let first = normalize_diff_for_test(&first, dir.path());
let expected_first = {
let mode = file_mode_for_path(&file).unwrap_or(FileMode::Regular);
let right_oid = git_blob_sha1_hex("foo\n");
format!(
r#"diff --git a/<TMP>/a.txt b/<TMP>/a.txt
new file mode {mode}
index {ZERO_OID}..{right_oid}
--- {DEV_NULL}
+++ b/<TMP>/a.txt
@@ -0,0 +1 @@
+foo
"#,
)
};
assert_eq!(first, expected_first);
// Second patch: update the file on disk.
let update_changes = HashMap::from([(
file.clone(),
FileChange::Update {
unified_diff: "".to_owned(),
move_path: None,
},
)]);
acc.on_patch_begin(&update_changes);
// Simulate apply: append a new line.
fs::write(&file, "foo\nbar\n").unwrap();
let combined = acc.get_unified_diff().unwrap().unwrap();
let combined = normalize_diff_for_test(&combined, dir.path());
let expected_combined = {
let mode = file_mode_for_path(&file).unwrap_or(FileMode::Regular);
let right_oid = git_blob_sha1_hex("foo\nbar\n");
format!(
r#"diff --git a/<TMP>/a.txt b/<TMP>/a.txt
new file mode {mode}
index {ZERO_OID}..{right_oid}
--- {DEV_NULL}
+++ b/<TMP>/a.txt
@@ -0,0 +1,2 @@
+foo
+bar
"#,
)
};
assert_eq!(combined, expected_combined);
}
#[test]
fn accumulates_delete() {
let dir = tempdir().unwrap();
let file = dir.path().join("b.txt");
fs::write(&file, "x\n").unwrap();
let mut acc = TurnDiffTracker::new();
let del_changes = HashMap::from([(
file.clone(),
FileChange::Delete {
content: "x\n".to_string(),
},
)]);
acc.on_patch_begin(&del_changes);
// Simulate apply: delete the file from disk.
let baseline_mode = file_mode_for_path(&file).unwrap_or(FileMode::Regular);
fs::remove_file(&file).unwrap();
let diff = acc.get_unified_diff().unwrap().unwrap();
let diff = normalize_diff_for_test(&diff, dir.path());
let expected = {
let left_oid = git_blob_sha1_hex("x\n");
format!(
r#"diff --git a/<TMP>/b.txt b/<TMP>/b.txt
deleted file mode {baseline_mode}
index {left_oid}..{ZERO_OID}
--- a/<TMP>/b.txt
+++ {DEV_NULL}
@@ -1 +0,0 @@
-x
"#,
)
};
assert_eq!(diff, expected);
}
#[test]
fn accumulates_move_and_update() {
let dir = tempdir().unwrap();
let src = dir.path().join("src.txt");
let dest = dir.path().join("dst.txt");
fs::write(&src, "line\n").unwrap();
let mut acc = TurnDiffTracker::new();
let mv_changes = HashMap::from([(
src.clone(),
FileChange::Update {
unified_diff: "".to_owned(),
move_path: Some(dest.clone()),
},
)]);
acc.on_patch_begin(&mv_changes);
// Simulate apply: move and update content.
fs::rename(&src, &dest).unwrap();
fs::write(&dest, "line2\n").unwrap();
let out = acc.get_unified_diff().unwrap().unwrap();
let out = normalize_diff_for_test(&out, dir.path());
let expected = {
let left_oid = git_blob_sha1_hex("line\n");
let right_oid = git_blob_sha1_hex("line2\n");
format!(
r#"diff --git a/<TMP>/src.txt b/<TMP>/dst.txt
index {left_oid}..{right_oid}
--- a/<TMP>/src.txt
+++ b/<TMP>/dst.txt
@@ -1 +1 @@
-line
+line2
"#
)
};
assert_eq!(out, expected);
}
#[test]
fn move_without_change_yields_no_diff() {
let dir = tempdir().unwrap();
let src = dir.path().join("moved.txt");
let dest = dir.path().join("renamed.txt");
fs::write(&src, "same\n").unwrap();
let mut acc = TurnDiffTracker::new();
let mv_changes = HashMap::from([(
src.clone(),
FileChange::Update {
unified_diff: "".to_owned(),
move_path: Some(dest.clone()),
},
)]);
acc.on_patch_begin(&mv_changes);
// Simulate apply: move only, no content change.
fs::rename(&src, &dest).unwrap();
let diff = acc.get_unified_diff().unwrap();
assert_eq!(diff, None);
}
#[test]
fn move_declared_but_file_only_appears_at_dest_is_add() {
let dir = tempdir().unwrap();
let src = dir.path().join("src.txt");
let dest = dir.path().join("dest.txt");
let mut acc = TurnDiffTracker::new();
let mv = HashMap::from([(
src,
FileChange::Update {
unified_diff: "".into(),
move_path: Some(dest.clone()),
},
)]);
acc.on_patch_begin(&mv);
// No file existed initially; create only dest
fs::write(&dest, "hello\n").unwrap();
let diff = acc.get_unified_diff().unwrap().unwrap();
let diff = normalize_diff_for_test(&diff, dir.path());
let expected = {
let mode = file_mode_for_path(&dest).unwrap_or(FileMode::Regular);
let right_oid = git_blob_sha1_hex("hello\n");
format!(
r#"diff --git a/<TMP>/src.txt b/<TMP>/dest.txt
new file mode {mode}
index {ZERO_OID}..{right_oid}
--- {DEV_NULL}
+++ b/<TMP>/dest.txt
@@ -0,0 +1 @@
+hello
"#,
)
};
assert_eq!(diff, expected);
}
#[test]
fn update_persists_across_new_baseline_for_new_file() {
let dir = tempdir().unwrap();
let a = dir.path().join("a.txt");
let b = dir.path().join("b.txt");
fs::write(&a, "foo\n").unwrap();
fs::write(&b, "z\n").unwrap();
let mut acc = TurnDiffTracker::new();
// First: update existing a.txt (baseline snapshot is created for a).
let update_a = HashMap::from([(
a.clone(),
FileChange::Update {
unified_diff: "".to_owned(),
move_path: None,
},
)]);
acc.on_patch_begin(&update_a);
// Simulate apply: modify a.txt on disk.
fs::write(&a, "foo\nbar\n").unwrap();
let first = acc.get_unified_diff().unwrap().unwrap();
let first = normalize_diff_for_test(&first, dir.path());
let expected_first = {
let left_oid = git_blob_sha1_hex("foo\n");
let right_oid = git_blob_sha1_hex("foo\nbar\n");
format!(
r#"diff --git a/<TMP>/a.txt b/<TMP>/a.txt
index {left_oid}..{right_oid}
--- a/<TMP>/a.txt
+++ b/<TMP>/a.txt
@@ -1 +1,2 @@
foo
+bar
"#
)
};
assert_eq!(first, expected_first);
// Next: introduce a brand-new path b.txt into baseline snapshots via a delete change.
let del_b = HashMap::from([(
b.clone(),
FileChange::Delete {
content: "z\n".to_string(),
},
)]);
acc.on_patch_begin(&del_b);
// Simulate apply: delete b.txt.
let baseline_mode = file_mode_for_path(&b).unwrap_or(FileMode::Regular);
fs::remove_file(&b).unwrap();
let combined = acc.get_unified_diff().unwrap().unwrap();
let combined = normalize_diff_for_test(&combined, dir.path());
let expected = {
let left_oid_a = git_blob_sha1_hex("foo\n");
let right_oid_a = git_blob_sha1_hex("foo\nbar\n");
let left_oid_b = git_blob_sha1_hex("z\n");
format!(
r#"diff --git a/<TMP>/a.txt b/<TMP>/a.txt
index {left_oid_a}..{right_oid_a}
--- a/<TMP>/a.txt
+++ b/<TMP>/a.txt
@@ -1 +1,2 @@
foo
+bar
diff --git a/<TMP>/b.txt b/<TMP>/b.txt
deleted file mode {baseline_mode}
index {left_oid_b}..{ZERO_OID}
--- a/<TMP>/b.txt
+++ {DEV_NULL}
@@ -1 +0,0 @@
-z
"#,
)
};
assert_eq!(combined, expected);
}
#[test]
fn binary_files_differ_update() {
let dir = tempdir().unwrap();
let file = dir.path().join("bin.dat");
// Initial non-UTF8 bytes
let left_bytes: Vec<u8> = vec![0xff, 0xfe, 0xfd, 0x00];
// Updated non-UTF8 bytes
let right_bytes: Vec<u8> = vec![0x01, 0x02, 0x03, 0x00];
fs::write(&file, &left_bytes).unwrap();
let mut acc = TurnDiffTracker::new();
let update_changes = HashMap::from([(
file.clone(),
FileChange::Update {
unified_diff: "".to_owned(),
move_path: None,
},
)]);
acc.on_patch_begin(&update_changes);
// Apply update on disk
fs::write(&file, &right_bytes).unwrap();
let diff = acc.get_unified_diff().unwrap().unwrap();
let diff = normalize_diff_for_test(&diff, dir.path());
let expected = {
let left_oid = format!("{:x}", git_blob_sha1_hex_bytes(&left_bytes));
let right_oid = format!("{:x}", git_blob_sha1_hex_bytes(&right_bytes));
format!(
r#"diff --git a/<TMP>/bin.dat b/<TMP>/bin.dat
index {left_oid}..{right_oid}
--- a/<TMP>/bin.dat
+++ b/<TMP>/bin.dat
Binary files differ
"#
)
};
assert_eq!(diff, expected);
}
#[test]
fn filenames_with_spaces_add_and_update() {
let mut acc = TurnDiffTracker::new();
let dir = tempdir().unwrap();
let file = dir.path().join("name with spaces.txt");
// First patch: add file (baseline should be /dev/null).
let add_changes = HashMap::from([(
file.clone(),
FileChange::Add {
content: "foo\n".to_string(),
},
)]);
acc.on_patch_begin(&add_changes);
// Simulate apply: create the file on disk.
fs::write(&file, "foo\n").unwrap();
let first = acc.get_unified_diff().unwrap().unwrap();
let first = normalize_diff_for_test(&first, dir.path());
let expected_first = {
let mode = file_mode_for_path(&file).unwrap_or(FileMode::Regular);
let right_oid = git_blob_sha1_hex("foo\n");
format!(
r#"diff --git a/<TMP>/name with spaces.txt b/<TMP>/name with spaces.txt
new file mode {mode}
index {ZERO_OID}..{right_oid}
--- {DEV_NULL}
+++ b/<TMP>/name with spaces.txt
@@ -0,0 +1 @@
+foo
"#,
)
};
assert_eq!(first, expected_first);
// Second patch: update the file on disk.
let update_changes = HashMap::from([(
file.clone(),
FileChange::Update {
unified_diff: "".to_owned(),
move_path: None,
},
)]);
acc.on_patch_begin(&update_changes);
// Simulate apply: append a new line with a space.
fs::write(&file, "foo\nbar baz\n").unwrap();
let combined = acc.get_unified_diff().unwrap().unwrap();
let combined = normalize_diff_for_test(&combined, dir.path());
let expected_combined = {
let mode = file_mode_for_path(&file).unwrap_or(FileMode::Regular);
let right_oid = git_blob_sha1_hex("foo\nbar baz\n");
format!(
r#"diff --git a/<TMP>/name with spaces.txt b/<TMP>/name with spaces.txt
new file mode {mode}
index {ZERO_OID}..{right_oid}
--- {DEV_NULL}
+++ b/<TMP>/name with spaces.txt
@@ -0,0 +1,2 @@
+foo
+bar baz
"#,
)
};
assert_eq!(combined, expected_combined);
}
}
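The tests above exercise each change type in isolation. Condensed, the per-turn lifecycle described by the `TurnDiffTracker` doc comment looks like this (a hypothetical in-crate test using the same `tempfile` setup as above):

```
#[test]
fn turn_lifecycle_sketch() {
    use crate::protocol::FileChange;
    use std::collections::HashMap;

    let mut tracker = TurnDiffTracker::new();
    let dir = tempfile::tempdir().unwrap();
    let file = dir.path().join("demo.txt");

    // 1. Before apply_patch runs, snapshot baselines for the touched paths.
    let changes = HashMap::from([(
        file.clone(),
        FileChange::Add { content: "hi\n".to_string() },
    )]);
    tracker.on_patch_begin(&changes);

    // 2. apply_patch mutates the filesystem (simulated with a plain write).
    std::fs::write(&file, "hi\n").unwrap();

    // 3. The aggregated diff compares baselines to the current disk state.
    let diff = tracker.get_unified_diff().unwrap().unwrap_or_default();
    assert!(diff.contains("+hi"));
}
```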

View File

@@ -1 +0,0 @@
pub use codex_agent::unified_exec::*;

View File

@@ -1,7 +1,7 @@
use thiserror::Error;
#[derive(Debug, Error)]
pub enum UnifiedExecError {
pub(crate) enum UnifiedExecError {
#[error("Failed to create unified exec session: {pty_error}")]
CreateSession {
#[source]

View File

@@ -22,27 +22,27 @@ use crate::truncate::truncate_middle;
mod errors;
pub use errors::UnifiedExecError;
pub(crate) use errors::UnifiedExecError;
const DEFAULT_TIMEOUT_MS: u64 = 1_000;
const MAX_TIMEOUT_MS: u64 = 60_000;
const UNIFIED_EXEC_OUTPUT_MAX_BYTES: usize = 128 * 1024; // 128 KiB
#[derive(Debug)]
pub struct UnifiedExecRequest<'a> {
pub(crate) struct UnifiedExecRequest<'a> {
pub session_id: Option<i32>,
pub input_chunks: &'a [String],
pub timeout_ms: Option<u64>,
}
#[derive(Debug, Clone, PartialEq)]
pub struct UnifiedExecResult {
pub(crate) struct UnifiedExecResult {
pub session_id: Option<i32>,
pub output: String,
}
#[derive(Debug, Default)]
pub struct UnifiedExecSessionManager {
pub(crate) struct UnifiedExecSessionManager {
next_session_id: AtomicI32,
sessions: Mutex<HashMap<i32, ManagedUnifiedExecSession>>,
}

View File

@@ -1,9 +1,7 @@
use serde_json::to_string;
use serde::Serialize;
use tracing::error;
use tracing::warn;
pub use codex_agent::notifications::UserNotification;
#[derive(Debug, Default)]
pub(crate) struct UserNotifier {
notify_command: Option<Vec<String>>,
@@ -19,7 +17,7 @@ impl UserNotifier {
}
fn invoke_notify(&self, notify_command: &[String], notification: &UserNotification) {
let Ok(json) = to_string(notification) else {
let Ok(json) = serde_json::to_string(&notification) else {
error!("failed to serialise notification payload");
return;
};
@@ -30,6 +28,7 @@ impl UserNotifier {
}
command.arg(json);
// Fire-and-forget; we do not wait for completion.
if let Err(e) = command.spawn() {
warn!("failed to spawn notifier '{}': {e}", notify_command[0]);
}
@@ -41,3 +40,44 @@ impl UserNotifier {
}
}
}
/// User can configure a program that will receive notifications. Each
/// notification is serialized as JSON and passed as an argument to the
/// program.
#[derive(Debug, Clone, PartialEq, Serialize)]
#[serde(tag = "type", rename_all = "kebab-case")]
pub(crate) enum UserNotification {
#[serde(rename_all = "kebab-case")]
AgentTurnComplete {
turn_id: String,
/// Messages that the user sent to the agent to initiate the turn.
input_messages: Vec<String>,
/// The last message sent by the assistant in the turn.
last_assistant_message: Option<String>,
},
}
#[cfg(test)]
mod tests {
use super::*;
use anyhow::Result;
#[test]
fn test_user_notification() -> Result<()> {
let notification = UserNotification::AgentTurnComplete {
turn_id: "12345".to_string(),
input_messages: vec!["Rename `foo` to `bar` and update the callsites.".to_string()],
last_assistant_message: Some(
"Rename complete and verified `cargo build` succeeds.".to_string(),
),
};
let serialized = serde_json::to_string(&notification)?;
assert_eq!(
serialized,
r#"{"type":"agent-turn-complete","turn-id":"12345","input-messages":["Rename `foo` to `bar` and update the callsites."],"last-assistant-message":"Rename complete and verified `cargo build` succeeds."}"#
);
Ok(())
}
}
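Since `invoke_notify` appends the serialized JSON as the final argument to the configured program, a consumer can read it back from `argv`. A hypothetical standalone notify binary (the field names follow the test above; depending on `serde_json` is an assumption):

```
fn main() {
    // Codex appends the UserNotification JSON as the last CLI argument.
    let payload = std::env::args().last().unwrap_or_default();
    let value: serde_json::Value =
        serde_json::from_str(&payload).expect("payload should be JSON");
    if value["type"] == "agent-turn-complete" {
        println!(
            "turn {} finished: {}",
            value["turn-id"],
            value["last-assistant-message"]
        );
    }
}
```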

View File

@@ -1,6 +1,5 @@
use std::sync::Arc;
use codex_core::AgentConfig;
use codex_core::ContentItem;
use codex_core::LocalShellAction;
use codex_core::LocalShellExecAction;
@@ -70,10 +69,9 @@ async fn run_request(input: Vec<ResponseItem>) -> Value {
let effort = config.model_reasoning_effort;
let summary = config.model_reasoning_summary;
let config = Arc::new(config);
let agent_config = Arc::new(AgentConfig::from(config.as_ref()));
let client = ModelClient::new(
Arc::clone(&agent_config),
Arc::clone(&config),
None,
provider,
effort,

View File

@@ -1,6 +1,5 @@
use std::sync::Arc;
use codex_core::AgentConfig;
use codex_core::ContentItem;
use codex_core::ModelClient;
use codex_core::ModelProviderInfo;
@@ -63,10 +62,9 @@ async fn run_stream(sse_body: &str) -> Vec<ResponseEvent> {
let effort = config.model_reasoning_effort;
let summary = config.model_reasoning_summary;
let config = Arc::new(config);
let agent_config = Arc::new(AgentConfig::from(config.as_ref()));
let client = ModelClient::new(
Arc::clone(&agent_config),
Arc::clone(&config),
None,
provider,
effort,

View File

@@ -1,4 +1,3 @@
use codex_core::AgentConfig;
use codex_core::CodexAuth;
use codex_core::ContentItem;
use codex_core::ConversationManager;
@@ -362,6 +361,7 @@ async fn includes_conversation_id_and_model_headers_in_request() {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn includes_base_instructions_override_in_request() {
skip_if_no_network!();
// Mock server
let server = MockServer::start().await;
@@ -559,6 +559,7 @@ async fn prefers_apikey_when_config_prefers_apikey_even_with_chatgpt_tokens() {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn includes_user_instructions_message_in_request() {
skip_if_no_network!();
let server = MockServer::start().await;
let first = ResponseTemplate::new(200)
@@ -662,10 +663,9 @@ async fn azure_responses_request_includes_store_and_reasoning_ids() {
let effort = config.model_reasoning_effort;
let summary = config.model_reasoning_summary;
let config = Arc::new(config);
let agent_config = Arc::new(AgentConfig::from(config.as_ref()));
let client = ModelClient::new(
Arc::clone(&agent_config),
Arc::clone(&config),
None,
provider,
effort,
@@ -757,6 +757,7 @@ async fn azure_responses_request_includes_store_and_reasoning_ids() {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn token_count_includes_rate_limits_snapshot() {
skip_if_no_network!();
let server = MockServer::start().await;
let sse_body = responses::sse(vec![responses::ev_completed_with_tokens("resp_rate", 123)]);
@@ -901,6 +902,7 @@ async fn token_count_includes_rate_limits_snapshot() {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn usage_limit_error_emits_rate_limit_event() -> anyhow::Result<()> {
skip_if_no_network!(Ok(()));
let server = MockServer::start().await;
let response = ResponseTemplate::new(429)
@@ -980,6 +982,7 @@ async fn usage_limit_error_emits_rate_limit_event() -> anyhow::Result<()> {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn azure_overrides_assign_properties_used_for_responses_url() {
skip_if_no_network!();
let existing_env_var_with_random_value = if cfg!(windows) { "USERNAME" } else { "USER" };
// Mock server
@@ -1056,6 +1059,7 @@ async fn azure_overrides_assign_properties_used_for_responses_url() {
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn env_var_overrides_loaded_auth() {
skip_if_no_network!();
let existing_env_var_with_random_value = if cfg!(windows) { "USERNAME" } else { "USER" };
// Mock server

View File

@@ -12,9 +12,9 @@ mod fork_conversation;
mod json_result;
mod live_cli;
mod model_overrides;
mod multi_task_smoke;
mod prompt_caching;
mod review;
mod rmcp_client;
mod rollout_list_find;
mod seatbelt;
mod stream_error_allows_next_turn;

Some files were not shown because too many files have changed in this diff.