Compare commits


12 Commits

Author SHA1 Message Date
iceweasel-oai
ce9d7c59a4 checkpoint 2025-12-03 14:45:52 -08:00
Thibault Sottiaux
a8d5ad37b8 feat: experimental support for skills.md (#7412)
This change prototypes support for Skills with the CLI. This is an
**experimental** feature for internal testing.

---------

Co-authored-by: Gav Verma <gverma@openai.com>
2025-12-01 20:22:35 -08:00
Manoel Calixto
32e4a3a4d7 fix(tui): handle WSL clipboard image paths (#3990)
Fixes #3939 
Fixes #2803

## Summary
- convert Windows clipboard file paths into their `/mnt/<drive>`
equivalents when running inside WSL so pasted images resolve correctly
- add WSL detection helpers and share them with unit tests to cover both
native Windows and WSL clipboard normalization cases
- improve the test suite by exercising Windows path handling plus a
dedicated WSL conversion scenario and keeping the code path guarded by
targeted cfgs
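
The conversion in the first bullet can be sketched roughly as follows. This is an illustrative sketch only; the actual helper names and edge-case handling in codex-tui may differ.

```rust
/// Illustrative sketch: map a Windows clipboard path such as
/// `C:\Users\me\img.png` to its WSL `/mnt/c/...` equivalent.
fn windows_path_to_wsl(path: &str) -> Option<String> {
    let bytes = path.as_bytes();
    // Expect a drive letter followed by a colon, e.g. `C:`.
    if bytes.len() < 2 || !bytes[0].is_ascii_alphabetic() || bytes[1] != b':' {
        return None;
    }
    let drive = (bytes[0] as char).to_ascii_lowercase();
    // Lowercase the drive letter and flip backslashes to forward slashes.
    let rest = path[2..].replace('\\', "/");
    Some(format!("/mnt/{drive}{rest}"))
}

fn main() {
    assert_eq!(
        windows_path_to_wsl(r"C:\Users\me\img.png").as_deref(),
        Some("/mnt/c/Users/me/img.png")
    );
    // Non-Windows paths are left alone (returned as `None` here).
    assert_eq!(windows_path_to_wsl("/tmp/img.png"), None);
}
```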

## Testing
- just fmt
- cargo test -p codex-tui
- cargo clippy -p codex-tui --tests
- just fix -p codex-tui

## Screenshots
_Codex TUI screenshot:_
<img width="1880" height="848" alt="describe this copied image"
src="https://github.com/user-attachments/assets/c620d43c-f45c-451e-8893-e56ae85a5eea"
/>

_GitHub docs directory screenshot:_
<img width="1064" height="478" alt="image-copied"
src="https://github.com/user-attachments/assets/eb5eef6c-eb43-45a0-8bfe-25c35bcae753"
/>

Co-authored-by: Eric Traut <etraut@openai.com>
2025-12-01 16:54:20 -08:00
Steve Mostovoy
f443555728 fix(core): enable history lookup on windows (#7457)
- Add portable history log id helper to support inode-like tracking on
Unix and creation time on Windows
- Refactor history metadata and lookup to share code paths and allow
nonzero log ids across platforms
- Add coverage for lookup stability after appends
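
The portable identifier in the first bullet can be sketched as below; the function name is illustrative, and the cfg-gated bodies assume a Unix or Windows target.

```rust
use std::fs;

// Illustrative sketch of a portable history "log id": the file's inode
// number on Unix, its creation time on Windows. Appending to the file
// changes neither value, so lookups remain stable across appends.
fn portable_log_id(metadata: &fs::Metadata) -> u64 {
    #[cfg(unix)]
    {
        use std::os::unix::fs::MetadataExt;
        metadata.ino()
    }
    #[cfg(windows)]
    {
        use std::os::windows::fs::MetadataExt;
        metadata.creation_time()
    }
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir();
    // Reading metadata twice yields the same identifier.
    let m1 = fs::metadata(&dir)?;
    let m2 = fs::metadata(&dir)?;
    assert_eq!(portable_log_id(&m1), portable_log_id(&m2));
    Ok(())
}
```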
2025-12-01 16:29:01 -08:00
Celia Chen
ff4ca9959c [app-server] Add ImageView item (#7468)
Add view_image tool call as image_view item.

Before:
```
< {
<   "method": "codex/event/view_image_tool_call",
<   "params": {
<     "conversationId": "019adc2f-2922-7e43-ace9-64f394019616",
<     "id": "0",
<     "msg": {
<       "call_id": "call_nBQDxnTfZQtgjGpVoGuDnRjz",
<       "path": "/Users/celia/code/codex/codex-rs/app-server-protocol/codex-cli-login.png",
<       "type": "view_image_tool_call"
<     }
<   }
< }
```

After:
```
< {
<   "method": "item/started",
<   "params": {
<     "item": {
<       "id": "call_nBQDxnTfZQtgjGpVoGuDnRjz",
<       "path": "/Users/celia/code/codex/codex-rs/app-server-protocol/codex-cli-login.png",
<       "type": "imageView"
<     },
<     "threadId": "019adc2f-2922-7e43-ace9-64f394019616",
<     "turnId": "0"
<   }
< }

< {
<   "method": "item/completed",
<   "params": {
<     "item": {
<       "id": "call_nBQDxnTfZQtgjGpVoGuDnRjz",
<       "path": "/Users/celia/code/codex/codex-rs/app-server-protocol/codex-cli-login.png",
<       "type": "imageView"
<     },
<     "threadId": "019adc2f-2922-7e43-ace9-64f394019616",
<     "turnId": "0"
<   }
< }
```
2025-12-01 23:56:05 +00:00
Dylan Hurd
5b25915d7e fix(apply_patch) tests for shell_command (#7307)
## Summary
Adds test coverage for invocations of apply_patch via shell_command with a
heredoc to validate behavior.

## Testing
- [x] These are tests
2025-12-01 15:09:22 -08:00
Michael Bolin
c0564edebe chore: update to rmcp@0.10.0 to pick up support for custom client notifications (#7462)
In https://github.com/openai/codex/pull/7112, I updated our `rmcp`
dependency to point to a personal fork while I tried to upstream my
proposed change. Now that
https://github.com/modelcontextprotocol/rust-sdk/pull/556 has been
upstreamed and included in the `0.10.0` release of the crate, we can go
back to using the mainline release.
2025-12-01 14:01:50 -08:00
linuxmetel
c936c68c84 fix: prevent MCP startup failure on missing 'type' field (#7417)
Fixes issue #7416, where codex-cli produced the error "MCP startup
failure on missing 'type' field" at startup.

- Cause: serde in `convert_to_rmcp`
(`codex-rs/rmcp-client/src/utils.rs`) failed because no `r#type` value
was provided
- Fix: set a default `r#type` value in the corresponding structs
2025-12-01 13:58:20 -05:00
Kaden Gruizenga
41760f8a09 docs: clarify codex max defaults and xhigh availability (#7449)
## Summary
Adds the missing `xhigh` reasoning level everywhere it should have been
documented, and makes clear it only works with `gpt-5.1-codex-max`.

## Changes

* `docs/config.md`

* Add `xhigh` to the official list of reasoning levels with a note that
`xhigh` is exclusive to Codex Max.

* `docs/example-config.md`

* Update the example comment to add `xhigh` as a valid option, noting it
applies only to Codex Max.

* `docs/faq.md`

  * Update the model recommendation to `GPT-5.1 Codex Max`.
* Mention that users can choose `high` or the newly documented `xhigh`
level when using Codex Max.
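
As a sketch, the documented combination might look like the following `config.toml` excerpt (assuming the `model` and `model_reasoning_effort` keys described in docs/config.md):

```toml
# Hypothetical excerpt; key names follow docs/config.md.
model = "gpt-5.1-codex-max"
model_reasoning_effort = "xhigh"  # xhigh is only valid with Codex Max
```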
2025-12-01 10:46:53 -08:00
Albert O'Shea
440c7acd8f fix: nix build missing rmcp output hash (#7436)
The output hash for `rmcp-0.9.0` was missing from the nix package (i.e.
`error: No hash was found while vendoring the git dependency
rmcp-0.9.0.`), blocking the build.
2025-12-01 10:45:31 -08:00
Ali Towaiji
0cc3b50228 Fix recent_commits(limit=0) returning 1 commit instead of 0 (#7334)
Fixes #7333

This is a small bug fix.

This PR fixes an inconsistency in `recent_commits` where `limit == 0`
still returns 1 commit due to the use of `limit.max(1)` when
constructing the `git log -n` argument.

Expected behavior: requesting 0 commits should return an empty list.

This PR:
- returns an empty `Vec` when `limit == 0`
- adds a test for `recent_commits(limit == 0)` that fails before the
change and passes afterwards
- maintains existing behavior for `limit > 0`

This aligns behavior with API expectations and avoids downstream
consumers misinterpreting the repository as having commit history when
`limit == 0` is used to explicitly request none.

Happy to adjust if the current behavior is intentional.
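
Condensed, the fix boils down to only emitting a `-n <limit>` argument when `limit > 0` (the helper name below is illustrative):

```rust
// Old behavior used `limit.max(1)`, turning a request for zero commits
// into a request for one. The fix skips the `-n` argument entirely and
// lets the caller return an empty Vec when limit == 0.
fn git_log_limit_arg(limit: usize) -> Option<String> {
    (limit > 0).then(|| limit.to_string())
}

fn main() {
    assert_eq!(git_log_limit_arg(0), None); // caller short-circuits to vec![]
    assert_eq!(git_log_limit_arg(5), Some("5".to_string()));
}
```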
2025-12-01 10:14:36 -08:00
Owen Lin
8532876ad8 [app-server] fix: emit item/fileChange/outputDelta for file change items (#7399) 2025-12-01 17:52:34 +00:00
45 changed files with 3712 additions and 252 deletions

codex-rs/Cargo.lock

@@ -1186,6 +1186,7 @@ dependencies = [
"seccompiler",
"serde",
"serde_json",
"serde_yaml",
"serial_test",
"sha1",
"sha2",
@@ -1683,6 +1684,8 @@ name = "codex-windows-sandbox"
version = "0.0.0"
dependencies = [
"anyhow",
"base64",
"chrono",
"codex-protocol",
"dirs-next",
"dunce",
@@ -1690,6 +1693,7 @@ dependencies = [
"serde",
"serde_json",
"tempfile",
"windows 0.58.0",
"windows-sys 0.52.0",
]
@@ -3110,7 +3114,7 @@ dependencies = [
"js-sys",
"log",
"wasm-bindgen",
"windows-core",
"windows-core 0.61.2",
]
[[package]]
@@ -4464,6 +4468,12 @@ version = "1.0.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57c0d7b74b563b49d38dae00a0c37d4d6de9b432382b2892f0574ddcae73fd0a"
[[package]]
name = "pastey"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57d6c094ee800037dff99e02cab0eaf3142826586742a270ab3d7a62656bd27a"
[[package]]
name = "path-absolutize"
version = "3.1.1"
@@ -4739,7 +4749,7 @@ dependencies = [
"nix 0.30.1",
"tokio",
"tracing",
"windows",
"windows 0.61.3",
]
[[package]]
@@ -5142,8 +5152,9 @@ dependencies = [
[[package]]
name = "rmcp"
version = "0.9.0"
source = "git+https://github.com/bolinfest/rust-sdk?branch=pr556#4d9cc16f4c76c84486344f542ed9a3e9364019ba"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "38b18323edc657390a6ed4d7a9110b0dec2dc3ed128eb2a123edfbafabdbddc5"
dependencies = [
"async-trait",
"base64",
@@ -5154,7 +5165,7 @@ dependencies = [
"http-body",
"http-body-util",
"oauth2",
"paste",
"pastey",
"pin-project-lite",
"process-wrap",
"rand 0.9.2",
@@ -5176,8 +5187,9 @@ dependencies = [
[[package]]
name = "rmcp-macros"
version = "0.9.0"
source = "git+https://github.com/bolinfest/rust-sdk?branch=pr556#4d9cc16f4c76c84486344f542ed9a3e9364019ba"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c75d0a62676bf8c8003c4e3c348e2ceb6a7b3e48323681aaf177fdccdac2ce50"
dependencies = [
"darling 0.21.3",
"proc-macro2",
@@ -5765,6 +5777,19 @@ dependencies = [
"syn 2.0.104",
]
[[package]]
name = "serde_yaml"
version = "0.9.34+deprecated"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6a8b1a1a2ebf674015cc02edccce75287f1a0130d394307b36743c2f5d504b47"
dependencies = [
"indexmap 2.12.0",
"itoa",
"ryu",
"serde",
"unsafe-libyaml",
]
[[package]]
name = "serial2"
version = "0.2.31"
@@ -6979,6 +7004,12 @@ version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ebc1c04c71510c7f702b52b7c350734c9ff1295c464a03335b00bb84fc54f853"
[[package]]
name = "unsafe-libyaml"
version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "673aac59facbab8a9007c7f6108d11f63b603f7cabff99fabf650fea5c32b861"
[[package]]
name = "untrusted"
version = "0.9.0"
@@ -7377,6 +7408,16 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
[[package]]
name = "windows"
version = "0.58.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dd04d41d93c4992d421894c18c8b43496aa748dd4c081bac0dc93eb0489272b6"
dependencies = [
"windows-core 0.58.0",
"windows-targets 0.52.6",
]
[[package]]
name = "windows"
version = "0.61.3"
@@ -7384,7 +7425,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9babd3a767a4c1aef6900409f85f5d53ce2544ccdfaa86dad48c91782c6d6893"
dependencies = [
"windows-collections",
"windows-core",
"windows-core 0.61.2",
"windows-future",
"windows-link 0.1.3",
"windows-numerics",
@@ -7396,7 +7437,20 @@ version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3beeceb5e5cfd9eb1d76b381630e82c4241ccd0d27f1a39ed41b2760b255c5e8"
dependencies = [
"windows-core",
"windows-core 0.61.2",
]
[[package]]
name = "windows-core"
version = "0.58.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6ba6d44ec8c2591c134257ce647b7ea6b20335bf6379a27dac5f1641fcf59f99"
dependencies = [
"windows-implement 0.58.0",
"windows-interface 0.58.0",
"windows-result 0.2.0",
"windows-strings 0.1.0",
"windows-targets 0.52.6",
]
[[package]]
@@ -7405,11 +7459,11 @@ version = "0.61.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c0fdd3ddb90610c7638aa2b3a3ab2904fb9e5cdbecc643ddb3647212781c4ae3"
dependencies = [
"windows-implement",
"windows-interface",
"windows-implement 0.60.0",
"windows-interface 0.59.1",
"windows-link 0.1.3",
"windows-result",
"windows-strings",
"windows-result 0.3.4",
"windows-strings 0.4.2",
]
[[package]]
@@ -7418,11 +7472,22 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc6a41e98427b19fe4b73c550f060b59fa592d7d686537eebf9385621bfbad8e"
dependencies = [
"windows-core",
"windows-core 0.61.2",
"windows-link 0.1.3",
"windows-threading",
]
[[package]]
name = "windows-implement"
version = "0.58.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2bbd5b46c938e506ecbce286b6628a02171d56153ba733b6c741fc627ec9579b"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.104",
]
[[package]]
name = "windows-implement"
version = "0.60.0"
@@ -7434,6 +7499,17 @@ dependencies = [
"syn 2.0.104",
]
[[package]]
name = "windows-interface"
version = "0.58.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "053c4c462dc91d3b1504c6fe5a726dd15e216ba718e84a0e46a88fbe5ded3515"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.104",
]
[[package]]
name = "windows-interface"
version = "0.59.1"
@@ -7463,7 +7539,7 @@ version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9150af68066c4c5c07ddc0ce30421554771e528bde427614c61038bc2c92c2b1"
dependencies = [
"windows-core",
"windows-core 0.61.2",
"windows-link 0.1.3",
]
@@ -7474,8 +7550,17 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b8a9ed28765efc97bbc954883f4e6796c33a06546ebafacbabee9696967499e"
dependencies = [
"windows-link 0.1.3",
"windows-result",
"windows-strings",
"windows-result 0.3.4",
"windows-strings 0.4.2",
]
[[package]]
name = "windows-result"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d1043d8214f791817bab27572aaa8af63732e11bf84aa21a45a78d6c317ae0e"
dependencies = [
"windows-targets 0.52.6",
]
[[package]]
@@ -7487,6 +7572,16 @@ dependencies = [
"windows-link 0.1.3",
]
[[package]]
name = "windows-strings"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4cd9b125c486025df0eabcb585e62173c6c9eddcec5d117d3b6e8c30e2ee4d10"
dependencies = [
"windows-result 0.2.0",
"windows-targets 0.52.6",
]
[[package]]
name = "windows-strings"
version = "0.4.2"


@@ -59,15 +59,15 @@ license = "Apache-2.0"
# Internal
app_test_support = { path = "app-server/tests/common" }
codex-ansi-escape = { path = "ansi-escape" }
codex-api = { path = "codex-api" }
codex-app-server = { path = "app-server" }
codex-app-server-protocol = { path = "app-server-protocol" }
codex-apply-patch = { path = "apply-patch" }
codex-arg0 = { path = "arg0" }
codex-async-utils = { path = "async-utils" }
codex-backend-client = { path = "backend-client" }
codex-api = { path = "codex-api" }
codex-client = { path = "codex-client" }
codex-chatgpt = { path = "chatgpt" }
codex-client = { path = "codex-client" }
codex-common = { path = "common" }
codex-core = { path = "core" }
codex-exec = { path = "exec" }
@@ -169,15 +169,16 @@ pulldown-cmark = "0.10"
rand = "0.9"
ratatui = "0.29.0"
ratatui-macros = "0.6.0"
regex-lite = "0.1.7"
regex = "1.12.2"
regex-lite = "0.1.7"
reqwest = "0.12"
rmcp = { version = "0.9.0", default-features = false }
rmcp = { version = "0.10.0", default-features = false }
schemars = "0.8.22"
seccompiler = "0.5.0"
sentry = "0.34.0"
serde = "1"
serde_json = "1"
serde_yaml = "0.9"
serde_with = "3.16"
serial_test = "3.2.0"
sha1 = "0.10.6"
@@ -288,7 +289,6 @@ opt-level = 0
# ratatui = { path = "../../ratatui" }
crossterm = { git = "https://github.com/nornagon/crossterm", branch = "nornagon/color-query" }
ratatui = { git = "https://github.com/nornagon/ratatui", branch = "nornagon-v0.29.0-patch" }
rmcp = { git = "https://github.com/bolinfest/rust-sdk", branch = "pr556" }
# Uncomment to debug local changes.
# rmcp = { path = "../../rust-sdk/crates/rmcp" }


@@ -511,6 +511,7 @@ server_notification_definitions! {
ItemCompleted => "item/completed" (v2::ItemCompletedNotification),
AgentMessageDelta => "item/agentMessage/delta" (v2::AgentMessageDeltaNotification),
CommandExecutionOutputDelta => "item/commandExecution/outputDelta" (v2::CommandExecutionOutputDeltaNotification),
FileChangeOutputDelta => "item/fileChange/outputDelta" (v2::FileChangeOutputDeltaNotification),
McpToolCallProgress => "item/mcpToolCall/progress" (v2::McpToolCallProgressNotification),
AccountUpdated => "account/updated" (v2::AccountUpdatedNotification),
AccountRateLimitsUpdated => "account/rateLimits/updated" (v2::AccountRateLimitsUpdatedNotification),


@@ -1353,6 +1353,16 @@ pub struct CommandExecutionOutputDeltaNotification {
pub delta: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]
pub struct FileChangeOutputDeltaNotification {
pub thread_id: String,
pub turn_id: String,
pub item_id: String,
pub delta: String,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, JsonSchema, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export_to = "v2/")]


@@ -18,6 +18,7 @@ use codex_app_server_protocol::ContextCompactedNotification;
use codex_app_server_protocol::ErrorNotification;
use codex_app_server_protocol::ExecCommandApprovalParams;
use codex_app_server_protocol::ExecCommandApprovalResponse;
use codex_app_server_protocol::FileChangeOutputDeltaNotification;
use codex_app_server_protocol::FileChangeRequestApprovalParams;
use codex_app_server_protocol::FileChangeRequestApprovalResponse;
use codex_app_server_protocol::FileUpdateChange;
@@ -350,6 +351,28 @@ pub(crate) async fn apply_bespoke_event_handling(
}))
.await;
}
EventMsg::ViewImageToolCall(view_image_event) => {
let item = ThreadItem::ImageView {
id: view_image_event.call_id.clone(),
path: view_image_event.path.to_string_lossy().into_owned(),
};
let started = ItemStartedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item: item.clone(),
};
outgoing
.send_server_notification(ServerNotification::ItemStarted(started))
.await;
let completed = ItemCompletedNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item,
};
outgoing
.send_server_notification(ServerNotification::ItemCompleted(completed))
.await;
}
EventMsg::EnteredReviewMode(review_request) => {
let review = review_request.user_facing_hint;
let item = ThreadItem::EnteredReviewMode {
@@ -501,17 +524,44 @@ pub(crate) async fn apply_bespoke_event_handling(
.await;
}
EventMsg::ExecCommandOutputDelta(exec_command_output_delta_event) => {
let notification = CommandExecutionOutputDeltaNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item_id: exec_command_output_delta_event.call_id.clone(),
delta: String::from_utf8_lossy(&exec_command_output_delta_event.chunk).to_string(),
let item_id = exec_command_output_delta_event.call_id.clone();
let delta = String::from_utf8_lossy(&exec_command_output_delta_event.chunk).to_string();
// The underlying EventMsg::ExecCommandOutputDelta is used for shell, unified_exec,
// and apply_patch tool calls. We represent apply_patch with the FileChange item, and
// everything else with the CommandExecution item.
//
// We need to detect which item type it is so we can emit the right notification.
// We already have state tracking FileChange items on item/started, so let's use that.
let is_file_change = {
let map = turn_summary_store.lock().await;
map.get(&conversation_id)
.is_some_and(|summary| summary.file_change_started.contains(&item_id))
};
outgoing
.send_server_notification(ServerNotification::CommandExecutionOutputDelta(
notification,
))
.await;
if is_file_change {
let notification = FileChangeOutputDeltaNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item_id,
delta,
};
outgoing
.send_server_notification(ServerNotification::FileChangeOutputDelta(
notification,
))
.await;
} else {
let notification = CommandExecutionOutputDeltaNotification {
thread_id: conversation_id.to_string(),
turn_id: event_turn_id.clone(),
item_id,
delta,
};
outgoing
.send_server_notification(ServerNotification::CommandExecutionOutputDelta(
notification,
))
.await;
}
}
EventMsg::ExecCommandEnd(exec_command_end_event) => {
let ExecCommandEndEvent {


@@ -11,6 +11,7 @@ use app_test_support::to_response;
use codex_app_server_protocol::ApprovalDecision;
use codex_app_server_protocol::CommandExecutionRequestApprovalResponse;
use codex_app_server_protocol::CommandExecutionStatus;
use codex_app_server_protocol::FileChangeOutputDeltaNotification;
use codex_app_server_protocol::FileChangeRequestApprovalResponse;
use codex_app_server_protocol::ItemCompletedNotification;
use codex_app_server_protocol::ItemStartedNotification;
@@ -725,6 +726,26 @@ async fn turn_start_file_change_approval_v2() -> Result<()> {
)
.await?;
let output_delta_notif = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_notification_message("item/fileChange/outputDelta"),
)
.await??;
let output_delta: FileChangeOutputDeltaNotification = serde_json::from_value(
output_delta_notif
.params
.clone()
.expect("item/fileChange/outputDelta params"),
)?;
assert_eq!(output_delta.thread_id, thread.id);
assert_eq!(output_delta.turn_id, turn.id);
assert_eq!(output_delta.item_id, "patch-call");
assert!(
!output_delta.delta.is_empty(),
"expected delta to be non-empty, got: {}",
output_delta.delta
);
let completed_file_change = timeout(DEFAULT_READ_TIMEOUT, async {
loop {
let completed_notif = mcp


@@ -52,6 +52,7 @@ regex-lite = { workspace = true }
reqwest = { workspace = true, features = ["json", "stream"] }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
serde_yaml = { workspace = true }
sha1 = { workspace = true }
sha2 = { workspace = true }
shlex = { workspace = true }


@@ -51,6 +51,8 @@ pub enum Feature {
ShellTool,
/// Allow model to call multiple tools in parallel (only for models supporting it).
ParallelToolCalls,
/// Experimental skills injection (CLI flag-driven).
Skills,
}
impl Feature {
@@ -326,4 +328,10 @@ pub const FEATURES: &[FeatureSpec] = &[
stage: Stage::Stable,
default_enabled: true,
},
FeatureSpec {
id: Feature::Skills,
key: "skills",
stage: Stage::Experimental,
default_enabled: false,
},
];


@@ -131,11 +131,15 @@ pub async fn recent_commits(cwd: &Path, limit: usize) -> Vec<CommitLogEntry> {
}
let fmt = "%H%x1f%ct%x1f%s"; // <sha> <US> <commit_time> <US> <subject>
let n = limit.max(1).to_string();
let Some(log_out) =
run_git_command_with_timeout(&["log", "-n", &n, &format!("--pretty=format:{fmt}")], cwd)
.await
else {
let limit_arg = (limit > 0).then(|| limit.to_string());
let mut args: Vec<String> = vec!["log".to_string()];
if let Some(n) = &limit_arg {
args.push("-n".to_string());
args.push(n.clone());
}
args.push(format!("--pretty=format:{fmt}"));
let arg_refs: Vec<&str> = args.iter().map(String::as_str).collect();
let Some(log_out) = run_git_command_with_timeout(&arg_refs, cwd).await else {
return Vec::new();
};
if !log_out.status.success() {


@@ -72,6 +72,7 @@ mod rollout;
pub(crate) mod safety;
pub mod seatbelt;
pub mod shell;
pub mod skills;
pub mod spawn;
pub mod terminal;
mod tools;


@@ -18,6 +18,7 @@ use std::fs::File;
use std::fs::OpenOptions;
use std::io::Result;
use std::io::Write;
use std::path::Path;
use std::path::PathBuf;
use serde::Deserialize;
@@ -42,7 +43,7 @@ const HISTORY_FILENAME: &str = "history.jsonl";
const MAX_RETRIES: usize = 10;
const RETRY_SLEEP: Duration = Duration::from_millis(100);
#[derive(Serialize, Deserialize, Debug, Clone)]
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]
pub struct HistoryEntry {
pub session_id: String,
pub ts: u64,
@@ -142,23 +143,54 @@ pub(crate) async fn append_entry(
/// the current number of entries by counting newline characters.
pub(crate) async fn history_metadata(config: &Config) -> (u64, usize) {
let path = history_filepath(config);
history_metadata_for_file(&path).await
}
#[cfg(unix)]
let log_id = {
use std::os::unix::fs::MetadataExt;
// Obtain metadata (async) to get the identifier.
let meta = match fs::metadata(&path).await {
Ok(m) => m,
Err(e) if e.kind() == std::io::ErrorKind::NotFound => return (0, 0),
Err(_) => return (0, 0),
};
meta.ino()
/// Given a `log_id` (on Unix this is the file's inode number,
/// on Windows this is the file's creation time) and a zero-based
/// `offset`, return the corresponding `HistoryEntry` if the identifier matches
/// the current history file **and** the requested offset exists. Any I/O or
/// parsing errors are logged and result in `None`.
///
/// Note this function is not async because it uses a sync advisory file
/// locking API.
#[cfg(any(unix, windows))]
pub(crate) fn lookup(log_id: u64, offset: usize, config: &Config) -> Option<HistoryEntry> {
let path = history_filepath(config);
lookup_history_entry(&path, log_id, offset)
}
/// On Unix systems, ensure the file permissions are `0o600` (rw-------). If the
/// permissions cannot be changed the error is propagated to the caller.
#[cfg(unix)]
async fn ensure_owner_only_permissions(file: &File) -> Result<()> {
let metadata = file.metadata()?;
let current_mode = metadata.permissions().mode() & 0o777;
if current_mode != 0o600 {
let mut perms = metadata.permissions();
perms.set_mode(0o600);
let perms_clone = perms.clone();
let file_clone = file.try_clone()?;
tokio::task::spawn_blocking(move || file_clone.set_permissions(perms_clone)).await??;
}
Ok(())
}
#[cfg(windows)]
// On Windows, simply succeed.
async fn ensure_owner_only_permissions(_file: &File) -> Result<()> {
Ok(())
}
async fn history_metadata_for_file(path: &Path) -> (u64, usize) {
let log_id = match fs::metadata(path).await {
Ok(metadata) => history_log_id(&metadata).unwrap_or(0),
Err(e) if e.kind() == std::io::ErrorKind::NotFound => return (0, 0),
Err(_) => return (0, 0),
};
#[cfg(not(unix))]
let log_id = 0u64;
// Open the file.
let mut file = match fs::File::open(&path).await {
let mut file = match fs::File::open(path).await {
Ok(f) => f,
Err(_) => return (log_id, 0),
};
@@ -179,21 +211,12 @@ pub(crate) async fn history_metadata(config: &Config) -> (u64, usize) {
(log_id, count)
}
/// Given a `log_id` (on Unix this is the file's inode number) and a zero-based
/// `offset`, return the corresponding `HistoryEntry` if the identifier matches
/// the current history file **and** the requested offset exists. Any I/O or
/// parsing errors are logged and result in `None`.
///
/// Note this function is not async because it uses a sync advisory file
/// locking API.
#[cfg(unix)]
pub(crate) fn lookup(log_id: u64, offset: usize, config: &Config) -> Option<HistoryEntry> {
#[cfg(any(unix, windows))]
fn lookup_history_entry(path: &Path, log_id: u64, offset: usize) -> Option<HistoryEntry> {
use std::io::BufRead;
use std::io::BufReader;
use std::os::unix::fs::MetadataExt;
let path = history_filepath(config);
let file: File = match OpenOptions::new().read(true).open(&path) {
let file: File = match OpenOptions::new().read(true).open(path) {
Ok(f) => f,
Err(e) => {
tracing::warn!(error = %e, "failed to open history file");
@@ -209,7 +232,9 @@ pub(crate) fn lookup(log_id: u64, offset: usize, config: &Config) -> Option<Hist
}
};
if metadata.ino() != log_id {
let current_log_id = history_log_id(&metadata)?;
if log_id != 0 && current_log_id != log_id {
return None;
}
@@ -256,31 +281,104 @@ pub(crate) fn lookup(log_id: u64, offset: usize, config: &Config) -> Option<Hist
None
}
/// Fallback stub for non-Unix systems: currently always returns `None`.
#[cfg(not(unix))]
pub(crate) fn lookup(log_id: u64, offset: usize, config: &Config) -> Option<HistoryEntry> {
let _ = (log_id, offset, config);
None
}
/// On Unix systems ensure the file permissions are `0o600` (rw-------). If the
/// permissions cannot be changed the error is propagated to the caller.
#[cfg(unix)]
async fn ensure_owner_only_permissions(file: &File) -> Result<()> {
let metadata = file.metadata()?;
let current_mode = metadata.permissions().mode() & 0o777;
if current_mode != 0o600 {
let mut perms = metadata.permissions();
perms.set_mode(0o600);
let perms_clone = perms.clone();
let file_clone = file.try_clone()?;
tokio::task::spawn_blocking(move || file_clone.set_permissions(perms_clone)).await??;
fn history_log_id(metadata: &std::fs::Metadata) -> Option<u64> {
#[cfg(unix)]
{
use std::os::unix::fs::MetadataExt;
Some(metadata.ino())
}
#[cfg(windows)]
{
use std::os::windows::fs::MetadataExt;
Some(metadata.creation_time())
}
Ok(())
}
#[cfg(not(unix))]
async fn ensure_owner_only_permissions(_file: &File) -> Result<()> {
// For now, on non-Unix, simply succeed.
Ok(())
#[cfg(all(test, any(unix, windows)))]
mod tests {
use super::*;
use pretty_assertions::assert_eq;
use std::fs::File;
use std::io::Write;
use tempfile::TempDir;
#[tokio::test]
async fn lookup_reads_history_entries() {
let temp_dir = TempDir::new().expect("create temp dir");
let history_path = temp_dir.path().join(HISTORY_FILENAME);
let entries = vec![
HistoryEntry {
session_id: "first-session".to_string(),
ts: 1,
text: "first".to_string(),
},
HistoryEntry {
session_id: "second-session".to_string(),
ts: 2,
text: "second".to_string(),
},
];
let mut file = File::create(&history_path).expect("create history file");
for entry in &entries {
writeln!(
file,
"{}",
serde_json::to_string(entry).expect("serialize history entry")
)
.expect("write history entry");
}
let (log_id, count) = history_metadata_for_file(&history_path).await;
assert_eq!(count, entries.len());
let second_entry =
lookup_history_entry(&history_path, log_id, 1).expect("fetch second history entry");
assert_eq!(second_entry, entries[1]);
}
#[tokio::test]
async fn lookup_uses_stable_log_id_after_appends() {
let temp_dir = TempDir::new().expect("create temp dir");
let history_path = temp_dir.path().join(HISTORY_FILENAME);
let initial = HistoryEntry {
session_id: "first-session".to_string(),
ts: 1,
text: "first".to_string(),
};
let appended = HistoryEntry {
session_id: "second-session".to_string(),
ts: 2,
text: "second".to_string(),
};
let mut file = File::create(&history_path).expect("create history file");
writeln!(
file,
"{}",
serde_json::to_string(&initial).expect("serialize initial entry")
)
.expect("write initial entry");
let (log_id, count) = history_metadata_for_file(&history_path).await;
assert_eq!(count, 1);
let mut append = std::fs::OpenOptions::new()
.append(true)
.open(&history_path)
.expect("open history file for append");
writeln!(
append,
"{}",
serde_json::to_string(&appended).expect("serialize appended entry")
)
.expect("append history entry");
let fetched =
lookup_history_entry(&history_path, log_id, 1).expect("lookup appended history entry");
assert_eq!(fetched, appended);
}
}


@@ -14,6 +14,9 @@
//! 3. We do **not** walk past the Git root.
use crate::config::Config;
use crate::features::Feature;
use crate::skills::load_skills;
use crate::skills::render_skills_section;
use dunce::canonicalize as normalize_path;
use std::path::PathBuf;
use tokio::io::AsyncReadExt;
@@ -31,18 +34,47 @@ const PROJECT_DOC_SEPARATOR: &str = "\n\n--- project-doc ---\n\n";
/// Combines `Config::instructions` and `AGENTS.md` (if present) into a single
/// string of instructions.
pub(crate) async fn get_user_instructions(config: &Config) -> Option<String> {
match read_project_docs(config).await {
Ok(Some(project_doc)) => match &config.user_instructions {
Some(original_instructions) => Some(format!(
"{original_instructions}{PROJECT_DOC_SEPARATOR}{project_doc}"
)),
None => Some(project_doc),
},
Ok(None) => config.user_instructions.clone(),
let skills_section = if config.features.enabled(Feature::Skills) {
let skills_outcome = load_skills(config);
for err in &skills_outcome.errors {
error!(
"failed to load skill {}: {}",
err.path.display(),
err.message
);
}
render_skills_section(&skills_outcome.skills)
} else {
None
};
let project_docs = match read_project_docs(config).await {
Ok(docs) => docs,
Err(e) => {
error!("error trying to find project doc: {e:#}");
config.user_instructions.clone()
return config.user_instructions.clone();
}
};
let combined_project_docs = merge_project_docs_with_skills(project_docs, skills_section);
let mut parts: Vec<String> = Vec::new();
if let Some(instructions) = config.user_instructions.clone() {
parts.push(instructions);
}
if let Some(project_doc) = combined_project_docs {
if !parts.is_empty() {
parts.push(PROJECT_DOC_SEPARATOR.to_string());
}
parts.push(project_doc);
}
if parts.is_empty() {
None
} else {
Some(parts.concat())
}
}
@@ -195,12 +227,25 @@ fn candidate_filenames<'a>(config: &'a Config) -> Vec<&'a str> {
names
}
fn merge_project_docs_with_skills(
project_doc: Option<String>,
skills_section: Option<String>,
) -> Option<String> {
match (project_doc, skills_section) {
(Some(doc), Some(skills)) => Some(format!("{doc}\n\n{skills}")),
(Some(doc), None) => Some(doc),
(None, Some(skills)) => Some(skills),
(None, None) => None,
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::config::ConfigOverrides;
use crate::config::ConfigToml;
use std::fs;
use std::path::PathBuf;
use tempfile::TempDir;
/// Helper that returns a `Config` pointing at `root` and using `limit` as
@@ -219,6 +264,7 @@ mod tests {
config.cwd = root.path().to_path_buf();
config.project_doc_max_bytes = limit;
config.features.enable(Feature::Skills);
config.user_instructions = instructions.map(ToOwned::to_owned);
config
@@ -447,4 +493,58 @@ mod tests {
.eq(DEFAULT_PROJECT_DOC_FILENAME)
);
}
#[tokio::test]
async fn skills_are_appended_to_project_doc() {
let tmp = tempfile::tempdir().expect("tempdir");
fs::write(tmp.path().join("AGENTS.md"), "base doc").unwrap();
let cfg = make_config(&tmp, 4096, None);
create_skill(
cfg.codex_home.clone(),
"pdf-processing",
"extract from pdfs",
);
let res = get_user_instructions(&cfg)
.await
.expect("instructions expected");
let expected_path = dunce::canonicalize(
cfg.codex_home
.join("skills/pdf-processing/SKILL.md")
.as_path(),
)
.unwrap_or_else(|_| cfg.codex_home.join("skills/pdf-processing/SKILL.md"));
let expected_path_str = expected_path.to_string_lossy().replace('\\', "/");
let expected = format!(
"base doc\n\n## Skills\nThese skills are discovered at startup from ~/.codex/skills; each entry shows name, description, and file path so you can open the source for full instructions. Content is not inlined to keep context lean.\n- pdf-processing: extract from pdfs (file: {expected_path_str})"
);
assert_eq!(res, expected);
}
#[tokio::test]
async fn skills_render_without_project_doc() {
let tmp = tempfile::tempdir().expect("tempdir");
let cfg = make_config(&tmp, 4096, None);
create_skill(cfg.codex_home.clone(), "linting", "run clippy");
let res = get_user_instructions(&cfg)
.await
.expect("instructions expected");
let expected_path =
dunce::canonicalize(cfg.codex_home.join("skills/linting/SKILL.md").as_path())
.unwrap_or_else(|_| cfg.codex_home.join("skills/linting/SKILL.md"));
let expected_path_str = expected_path.to_string_lossy().replace('\\', "/");
let expected = format!(
"## Skills\nThese skills are discovered at startup from ~/.codex/skills; each entry shows name, description, and file path so you can open the source for full instructions. Content is not inlined to keep context lean.\n- linting: run clippy (file: {expected_path_str})"
);
assert_eq!(res, expected);
}
fn create_skill(codex_home: PathBuf, name: &str, description: &str) {
let skill_dir = codex_home.join(format!("skills/{name}"));
fs::create_dir_all(&skill_dir).unwrap();
let content = format!("---\nname: {name}\ndescription: {description}\n---\n\n# Body\n");
fs::write(skill_dir.join("SKILL.md"), content).unwrap();
}
}

View File

@@ -0,0 +1,291 @@
use crate::config::Config;
use crate::skills::model::SkillError;
use crate::skills::model::SkillLoadOutcome;
use crate::skills::model::SkillMetadata;
use dunce::canonicalize as normalize_path;
use serde::Deserialize;
use std::collections::VecDeque;
use std::error::Error;
use std::fmt;
use std::fs;
use std::path::Path;
use std::path::PathBuf;
use tracing::error;
#[derive(Debug, Deserialize)]
struct SkillFrontmatter {
name: String,
description: String,
}
const SKILLS_FILENAME: &str = "SKILL.md";
const SKILLS_DIR_NAME: &str = "skills";
const MAX_NAME_LEN: usize = 100;
const MAX_DESCRIPTION_LEN: usize = 500;
#[derive(Debug)]
enum SkillParseError {
Read(std::io::Error),
MissingFrontmatter,
InvalidYaml(serde_yaml::Error),
MissingField(&'static str),
InvalidField { field: &'static str, reason: String },
}
impl fmt::Display for SkillParseError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
SkillParseError::Read(e) => write!(f, "failed to read file: {e}"),
SkillParseError::MissingFrontmatter => {
write!(f, "missing YAML frontmatter delimited by ---")
}
SkillParseError::InvalidYaml(e) => write!(f, "invalid YAML: {e}"),
SkillParseError::MissingField(field) => write!(f, "missing field `{field}`"),
SkillParseError::InvalidField { field, reason } => {
write!(f, "invalid {field}: {reason}")
}
}
}
}
impl Error for SkillParseError {}
pub fn load_skills(config: &Config) -> SkillLoadOutcome {
let mut outcome = SkillLoadOutcome::default();
let roots = skill_roots(config);
for root in roots {
discover_skills_under_root(&root, &mut outcome);
}
outcome
.skills
.sort_by(|a, b| a.name.cmp(&b.name).then_with(|| a.path.cmp(&b.path)));
outcome
}
fn skill_roots(config: &Config) -> Vec<PathBuf> {
vec![config.codex_home.join(SKILLS_DIR_NAME)]
}
fn discover_skills_under_root(root: &Path, outcome: &mut SkillLoadOutcome) {
let Ok(root) = normalize_path(root) else {
return;
};
if !root.is_dir() {
return;
}
let mut queue: VecDeque<PathBuf> = VecDeque::from([root]);
while let Some(dir) = queue.pop_front() {
let entries = match fs::read_dir(&dir) {
Ok(entries) => entries,
Err(e) => {
error!("failed to read skills dir {}: {e:#}", dir.display());
continue;
}
};
for entry in entries.flatten() {
let path = entry.path();
let file_name = match path.file_name().and_then(|f| f.to_str()) {
Some(name) => name,
None => continue,
};
if file_name.starts_with('.') {
continue;
}
let Ok(file_type) = entry.file_type() else {
continue;
};
if file_type.is_symlink() {
continue;
}
if file_type.is_dir() {
queue.push_back(path);
continue;
}
if file_type.is_file() && file_name == SKILLS_FILENAME {
match parse_skill_file(&path) {
Ok(skill) => outcome.skills.push(skill),
Err(err) => outcome.errors.push(SkillError {
path,
message: err.to_string(),
}),
}
}
}
}
}
fn parse_skill_file(path: &Path) -> Result<SkillMetadata, SkillParseError> {
let contents = fs::read_to_string(path).map_err(SkillParseError::Read)?;
let frontmatter = extract_frontmatter(&contents).ok_or(SkillParseError::MissingFrontmatter)?;
let parsed: SkillFrontmatter =
serde_yaml::from_str(&frontmatter).map_err(SkillParseError::InvalidYaml)?;
let name = sanitize_single_line(&parsed.name);
let description = sanitize_single_line(&parsed.description);
validate_field(&name, MAX_NAME_LEN, "name")?;
validate_field(&description, MAX_DESCRIPTION_LEN, "description")?;
let resolved_path = normalize_path(path).unwrap_or_else(|_| path.to_path_buf());
Ok(SkillMetadata {
name,
description,
path: resolved_path,
})
}
fn sanitize_single_line(raw: &str) -> String {
raw.split_whitespace().collect::<Vec<_>>().join(" ")
}
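`sanitize_single_line` collapses every whitespace run, including the newlines that multi-line YAML descriptions produce, into single spaces; a minimal sketch of the same helper:

```rust
// Collapses whitespace runs (spaces, tabs, newlines) into single spaces and
// trims the ends, mirroring the sanitize_single_line helper above.
fn sanitize_single_line(raw: &str) -> String {
    raw.split_whitespace().collect::<Vec<_>>().join(" ")
}

fn main() {
    assert_eq!(
        sanitize_single_line("does things\ncarefully"),
        "does things carefully"
    );
    assert_eq!(sanitize_single_line("  padded \t text  "), "padded text");
    println!("ok");
}
```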
fn validate_field(
value: &str,
max_len: usize,
field_name: &'static str,
) -> Result<(), SkillParseError> {
if value.is_empty() {
return Err(SkillParseError::MissingField(field_name));
}
if value.len() > max_len {
return Err(SkillParseError::InvalidField {
field: field_name,
reason: format!("exceeds maximum length of {max_len} characters"),
});
}
Ok(())
}
fn extract_frontmatter(contents: &str) -> Option<String> {
let mut lines = contents.lines();
if !matches!(lines.next(), Some(line) if line.trim() == "---") {
return None;
}
let mut frontmatter_lines: Vec<&str> = Vec::new();
let mut found_closing = false;
for line in lines.by_ref() {
if line.trim() == "---" {
found_closing = true;
break;
}
frontmatter_lines.push(line);
}
if frontmatter_lines.is_empty() || !found_closing {
return None;
}
Some(frontmatter_lines.join("\n"))
}
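The frontmatter contract enforced above is: the very first line must be `---`, and a later `---` closes the YAML block; files missing either delimiter (or with an empty block) are rejected. A self-contained sketch of the same scan:

```rust
// Sketch of the frontmatter scan above: the first line must be "---", lines
// up to the next "---" form the YAML block, and everything after is the body.
fn extract_frontmatter(contents: &str) -> Option<String> {
    let mut lines = contents.lines();
    if !matches!(lines.next(), Some(line) if line.trim() == "---") {
        return None;
    }
    let mut frontmatter_lines: Vec<&str> = Vec::new();
    let mut found_closing = false;
    for line in lines {
        if line.trim() == "---" {
            found_closing = true;
            break;
        }
        frontmatter_lines.push(line);
    }
    if frontmatter_lines.is_empty() || !found_closing {
        return None;
    }
    Some(frontmatter_lines.join("\n"))
}

fn main() {
    let skill = "---\nname: demo\ndescription: build charts\n---\n\n# Body\n";
    assert_eq!(
        extract_frontmatter(skill).as_deref(),
        Some("name: demo\ndescription: build charts")
    );
    // Missing the closing delimiter -> rejected.
    assert_eq!(extract_frontmatter("---\nname: bad"), None);
    println!("ok");
}
```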
#[cfg(test)]
mod tests {
use super::*;
use crate::config::ConfigOverrides;
use crate::config::ConfigToml;
use tempfile::TempDir;
fn make_config(codex_home: &TempDir) -> Config {
let mut config = Config::load_from_base_config_with_overrides(
ConfigToml::default(),
ConfigOverrides::default(),
codex_home.path().to_path_buf(),
)
.expect("defaults for test should always succeed");
config.cwd = codex_home.path().to_path_buf();
config
}
fn write_skill(codex_home: &TempDir, dir: &str, name: &str, description: &str) -> PathBuf {
let skill_dir = codex_home.path().join(format!("skills/{dir}"));
fs::create_dir_all(&skill_dir).unwrap();
let indented_description = description.replace('\n', "\n ");
let content = format!(
"---\nname: {name}\ndescription: |-\n {indented_description}\n---\n\n# Body\n"
);
let path = skill_dir.join(SKILLS_FILENAME);
fs::write(&path, content).unwrap();
path
}
#[test]
fn loads_valid_skill() {
let codex_home = tempfile::tempdir().expect("tempdir");
write_skill(&codex_home, "demo", "demo-skill", "does things\ncarefully");
let cfg = make_config(&codex_home);
let outcome = load_skills(&cfg);
assert!(
outcome.errors.is_empty(),
"unexpected errors: {:?}",
outcome.errors
);
assert_eq!(outcome.skills.len(), 1);
let skill = &outcome.skills[0];
assert_eq!(skill.name, "demo-skill");
assert_eq!(skill.description, "does things carefully");
let path_str = skill.path.to_string_lossy().replace('\\', "/");
assert!(
path_str.ends_with("skills/demo/SKILL.md"),
"unexpected path {path_str}"
);
}
#[test]
fn skips_hidden_and_invalid() {
let codex_home = tempfile::tempdir().expect("tempdir");
let hidden_dir = codex_home.path().join("skills/.hidden");
fs::create_dir_all(&hidden_dir).unwrap();
fs::write(
hidden_dir.join(SKILLS_FILENAME),
"---\nname: hidden\ndescription: hidden\n---\n",
)
.unwrap();
// Invalid because missing closing frontmatter.
let invalid_dir = codex_home.path().join("skills/invalid");
fs::create_dir_all(&invalid_dir).unwrap();
fs::write(invalid_dir.join(SKILLS_FILENAME), "---\nname: bad").unwrap();
let cfg = make_config(&codex_home);
let outcome = load_skills(&cfg);
assert_eq!(outcome.skills.len(), 0);
assert_eq!(outcome.errors.len(), 1);
assert!(
outcome.errors[0]
.message
.contains("missing YAML frontmatter"),
"expected frontmatter error"
);
}
#[test]
fn enforces_length_limits() {
let codex_home = tempfile::tempdir().expect("tempdir");
let long_desc = "a".repeat(MAX_DESCRIPTION_LEN + 1);
write_skill(&codex_home, "too-long", "toolong", &long_desc);
let cfg = make_config(&codex_home);
let outcome = load_skills(&cfg);
assert_eq!(outcome.skills.len(), 0);
assert_eq!(outcome.errors.len(), 1);
assert!(
outcome.errors[0].message.contains("invalid description"),
"expected length error"
);
}
}

View File

@@ -0,0 +1,9 @@
pub mod loader;
pub mod model;
pub mod render;
pub use loader::load_skills;
pub use model::SkillError;
pub use model::SkillLoadOutcome;
pub use model::SkillMetadata;
pub use render::render_skills_section;

View File

@@ -0,0 +1,20 @@
use std::path::PathBuf;
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SkillMetadata {
pub name: String,
pub description: String,
pub path: PathBuf,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SkillError {
pub path: PathBuf,
pub message: String,
}
#[derive(Debug, Clone, Default)]
pub struct SkillLoadOutcome {
pub skills: Vec<SkillMetadata>,
pub errors: Vec<SkillError>,
}

View File

@@ -0,0 +1,21 @@
use crate::skills::model::SkillMetadata;
pub fn render_skills_section(skills: &[SkillMetadata]) -> Option<String> {
if skills.is_empty() {
return None;
}
let mut lines: Vec<String> = Vec::new();
lines.push("## Skills".to_string());
lines.push("These skills are discovered at startup from ~/.codex/skills; each entry shows name, description, and file path so you can open the source for full instructions. Content is not inlined to keep context lean.".to_string());
for skill in skills {
let path_str = skill.path.to_string_lossy().replace('\\', "/");
lines.push(format!(
"- {}: {} (file: {})",
skill.name, skill.description, path_str
));
}
Some(lines.join("\n"))
}
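The rendered section is a `## Skills` header, a fixed intro sentence, then one bullet per skill with backslashes normalized to forward slashes. A runnable sketch with a local stand-in for `SkillMetadata` (the `/home/user/...` path is illustrative, not from the diff):

```rust
use std::path::PathBuf;

// Local stand-in for the crate's SkillMetadata (fields match the diff).
struct SkillMetadata {
    name: String,
    description: String,
    path: PathBuf,
}

// Same rendering as above: header, fixed intro line, one bullet per skill.
fn render_skills_section(skills: &[SkillMetadata]) -> Option<String> {
    if skills.is_empty() {
        return None;
    }
    let mut lines: Vec<String> = Vec::new();
    lines.push("## Skills".to_string());
    lines.push("These skills are discovered at startup from ~/.codex/skills; each entry shows name, description, and file path so you can open the source for full instructions. Content is not inlined to keep context lean.".to_string());
    for skill in skills {
        let path_str = skill.path.to_string_lossy().replace('\\', "/");
        lines.push(format!(
            "- {}: {} (file: {})",
            skill.name, skill.description, path_str
        ));
    }
    Some(lines.join("\n"))
}

fn main() {
    let skills = vec![SkillMetadata {
        name: "linting".to_string(),
        description: "run clippy".to_string(),
        path: PathBuf::from("/home/user/.codex/skills/linting/SKILL.md"),
    }];
    let section = render_skills_section(&skills).expect("non-empty input renders");
    assert!(section.starts_with("## Skills\n"));
    assert!(section.ends_with(
        "- linting: run clippy (file: /home/user/.codex/skills/linting/SKILL.md)"
    ));
    assert_eq!(render_skills_section(&[]), None);
    println!("ok");
}
```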

View File

@@ -431,6 +431,9 @@ pub fn ev_apply_patch_call(
ApplyPatchModelOutput::ShellViaHeredoc => {
ev_apply_patch_shell_call_via_heredoc(call_id, patch)
}
ApplyPatchModelOutput::ShellCommandViaHeredoc => {
ev_apply_patch_shell_command_call_via_heredoc(call_id, patch)
}
}
}
@@ -492,6 +495,13 @@ pub fn ev_apply_patch_shell_call_via_heredoc(call_id: &str, patch: &str) -> Valu
ev_function_call(call_id, "shell", &arguments)
}
pub fn ev_apply_patch_shell_command_call_via_heredoc(call_id: &str, patch: &str) -> Value {
let args = serde_json::json!({ "command": format!("apply_patch <<'EOF'\n{patch}\nEOF\n") });
let arguments = serde_json::to_string(&args).expect("serialize apply_patch arguments");
ev_function_call(call_id, "shell_command", &arguments)
}
pub fn sse_failed(id: &str, code: &str, message: &str) -> String {
sse(vec![serde_json::json!({
"type": "response.failed",

View File

@@ -36,6 +36,7 @@ pub enum ApplyPatchModelOutput {
Function,
Shell,
ShellViaHeredoc,
ShellCommandViaHeredoc,
}
/// A collection of different ways the model can output an apply_patch call
@@ -312,7 +313,10 @@ impl TestCodexHarness {
ApplyPatchModelOutput::Freeform => self.custom_tool_call_output(call_id).await,
ApplyPatchModelOutput::Function
| ApplyPatchModelOutput::Shell
| ApplyPatchModelOutput::ShellViaHeredoc => self.function_call_stdout(call_id).await,
| ApplyPatchModelOutput::ShellViaHeredoc
| ApplyPatchModelOutput::ShellCommandViaHeredoc => {
self.function_call_stdout(call_id).await
}
}
}
}

View File

@@ -2,6 +2,7 @@
use anyhow::Result;
use core_test_support::responses::ev_apply_patch_call;
use core_test_support::responses::ev_shell_command_call;
use core_test_support::test_codex::ApplyPatchModelOutput;
use pretty_assertions::assert_eq;
use std::fs;
@@ -127,6 +128,7 @@ D delete.txt
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_multiple_chunks(model_output: ApplyPatchModelOutput) -> Result<()> {
skip_if_no_network!(Ok(()));
@@ -153,6 +155,7 @@ async fn apply_patch_cli_multiple_chunks(model_output: ApplyPatchModelOutput) ->
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_moves_file_to_new_directory(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -181,6 +184,7 @@ async fn apply_patch_cli_moves_file_to_new_directory(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_updates_file_appends_trailing_newline(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -208,6 +212,7 @@ async fn apply_patch_cli_updates_file_appends_trailing_newline(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_insert_only_hunk_modifies_file(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -233,6 +238,7 @@ async fn apply_patch_cli_insert_only_hunk_modifies_file(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_move_overwrites_existing_destination(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -263,6 +269,7 @@ async fn apply_patch_cli_move_overwrites_existing_destination(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_move_without_content_change_has_no_turn_diff(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -320,6 +327,7 @@ async fn apply_patch_cli_move_without_content_change_has_no_turn_diff(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_add_overwrites_existing_file(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -345,6 +353,7 @@ async fn apply_patch_cli_add_overwrites_existing_file(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_rejects_invalid_hunk_header(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -376,6 +385,7 @@ async fn apply_patch_cli_rejects_invalid_hunk_header(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_reports_missing_context(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -409,6 +419,7 @@ async fn apply_patch_cli_reports_missing_context(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_reports_missing_target_file(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -444,6 +455,7 @@ async fn apply_patch_cli_reports_missing_target_file(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_delete_missing_file_reports_error(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -480,6 +492,7 @@ async fn apply_patch_cli_delete_missing_file_reports_error(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_rejects_empty_patch(model_output: ApplyPatchModelOutput) -> Result<()> {
skip_if_no_network!(Ok(()));
@@ -504,6 +517,7 @@ async fn apply_patch_cli_rejects_empty_patch(model_output: ApplyPatchModelOutput
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_delete_directory_reports_verification_error(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -530,6 +544,7 @@ async fn apply_patch_cli_delete_directory_reports_verification_error(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_rejects_path_traversal_outside_workspace(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -582,6 +597,7 @@ async fn apply_patch_cli_rejects_path_traversal_outside_workspace(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_rejects_move_path_traversal_outside_workspace(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -635,6 +651,7 @@ async fn apply_patch_cli_rejects_move_path_traversal_outside_workspace(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_verification_failure_has_no_side_effects(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -677,11 +694,10 @@ async fn apply_patch_shell_command_heredoc_with_cd_updates_relative_workdir() ->
let script = "cd sub && apply_patch <<'EOF'\n*** Begin Patch\n*** Update File: in_sub.txt\n@@\n-before\n+after\n*** End Patch\nEOF\n";
let call_id = "shell-heredoc-cd";
let args = json!({ "command": script, "timeout_ms": 5_000 });
let bodies = vec![
sse(vec![
ev_response_created("resp-1"),
ev_function_call(call_id, "shell_command", &serde_json::to_string(&args)?),
ev_shell_command_call(call_id, script),
ev_completed("resp-1"),
]),
sse(vec![
@@ -702,6 +718,86 @@ async fn apply_patch_shell_command_heredoc_with_cd_updates_relative_workdir() ->
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn apply_patch_shell_command_heredoc_with_cd_emits_turn_diff() -> Result<()> {
skip_if_no_network!(Ok(()));
let harness = apply_patch_harness_with(|builder| builder.with_model("gpt-5.1")).await?;
let test = harness.test();
let codex = test.codex.clone();
let cwd = test.cwd.clone();
// Prepare a file inside a subdir; update it via cd && apply_patch heredoc form.
let sub = test.workspace_path("sub");
fs::create_dir_all(&sub)?;
let target = sub.join("in_sub.txt");
fs::write(&target, "before\n")?;
let script = "cd sub && apply_patch <<'EOF'\n*** Begin Patch\n*** Update File: in_sub.txt\n@@\n-before\n+after\n*** End Patch\nEOF\n";
let call_id = "shell-heredoc-cd";
let args = json!({ "command": script, "timeout_ms": 5_000 });
let bodies = vec![
sse(vec![
ev_response_created("resp-1"),
ev_function_call(call_id, "shell_command", &serde_json::to_string(&args)?),
ev_completed("resp-1"),
]),
sse(vec![
ev_assistant_message("msg-1", "ok"),
ev_completed("resp-2"),
]),
];
mount_sse_sequence(harness.server(), bodies).await;
let model = test.session_configured.model.clone();
codex
.submit(Op::UserTurn {
items: vec![UserInput::Text {
text: "apply via shell heredoc with cd".into(),
}],
final_output_json_schema: None,
cwd: cwd.path().to_path_buf(),
approval_policy: AskForApproval::Never,
sandbox_policy: SandboxPolicy::DangerFullAccess,
model,
effort: None,
summary: ReasoningSummary::Auto,
})
.await?;
let mut saw_turn_diff = None;
let mut saw_patch_begin = false;
let mut patch_end_success = None;
wait_for_event(&codex, |event| match event {
EventMsg::PatchApplyBegin(begin) => {
saw_patch_begin = true;
assert_eq!(begin.call_id, call_id);
false
}
EventMsg::PatchApplyEnd(end) => {
assert_eq!(end.call_id, call_id);
patch_end_success = Some(end.success);
false
}
EventMsg::TurnDiff(ev) => {
saw_turn_diff = Some(ev.unified_diff.clone());
false
}
EventMsg::TaskComplete(_) => true,
_ => false,
})
.await;
assert!(saw_patch_begin, "expected PatchApplyBegin event");
let patch_end_success =
patch_end_success.expect("expected PatchApplyEnd event to capture success flag");
assert!(patch_end_success);
let diff = saw_turn_diff.expect("expected TurnDiff event");
assert!(diff.contains("diff --git"), "diff header missing: {diff:?}");
Ok(())
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn apply_patch_shell_command_failure_propagates_error_and_skips_diff() -> Result<()> {
skip_if_no_network!(Ok(()));
@@ -776,7 +872,11 @@ async fn apply_patch_shell_command_failure_propagates_error_and_skips_diff() ->
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn apply_patch_function_accepts_lenient_heredoc_wrapped_patch() -> Result<()> {
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_function_accepts_lenient_heredoc_wrapped_patch(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
skip_if_no_network!(Ok(()));
let harness = apply_patch_harness().await?;
@@ -784,16 +884,8 @@ async fn apply_patch_function_accepts_lenient_heredoc_wrapped_patch() -> Result<
let file_name = "lenient.txt";
let patch_inner =
format!("*** Begin Patch\n*** Add File: {file_name}\n+lenient\n*** End Patch\n");
let wrapped = format!("<<'EOF'\n{patch_inner}EOF\n");
let call_id = "apply-lenient";
mount_apply_patch(
&harness,
call_id,
wrapped.as_str(),
"ok",
ApplyPatchModelOutput::Function,
)
.await;
mount_apply_patch(&harness, call_id, patch_inner.as_str(), "ok", model_output).await;
harness.submit("apply lenient heredoc patch").await?;
@@ -807,6 +899,7 @@ async fn apply_patch_function_accepts_lenient_heredoc_wrapped_patch() -> Result<
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_end_of_file_anchor(model_output: ApplyPatchModelOutput) -> Result<()> {
skip_if_no_network!(Ok(()));
@@ -829,6 +922,7 @@ async fn apply_patch_cli_end_of_file_anchor(model_output: ApplyPatchModelOutput)
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_cli_missing_second_chunk_context_rejected(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -863,6 +957,7 @@ async fn apply_patch_cli_missing_second_chunk_context_rejected(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_emits_turn_diff_event_with_unified_diff(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -918,6 +1013,7 @@ async fn apply_patch_emits_turn_diff_event_with_unified_diff(
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_turn_diff_for_rename_with_content_change(
model_output: ApplyPatchModelOutput,
) -> Result<()> {
@@ -1132,6 +1228,7 @@ async fn apply_patch_aggregates_diff_preserves_success_after_failure() -> Result
#[test_case(ApplyPatchModelOutput::Function)]
#[test_case(ApplyPatchModelOutput::Shell)]
#[test_case(ApplyPatchModelOutput::ShellViaHeredoc)]
#[test_case(ApplyPatchModelOutput::ShellCommandViaHeredoc)]
async fn apply_patch_change_context_disambiguates_target(
model_output: ApplyPatchModelOutput,
) -> Result<()> {

View File

@@ -15,6 +15,7 @@ use codex_core::WireApi;
use codex_core::auth::AuthCredentialsStoreMode;
use codex_core::built_in_model_providers;
use codex_core::error::CodexErr;
use codex_core::features::Feature;
use codex_core::model_family::find_family_for_model;
use codex_core::protocol::EventMsg;
use codex_core::protocol::Op;
@@ -34,6 +35,7 @@ use core_test_support::skip_if_no_network;
use core_test_support::test_codex::TestCodex;
use core_test_support::test_codex::test_codex;
use core_test_support::wait_for_event;
use dunce::canonicalize as normalize_path;
use futures::StreamExt;
use serde_json::json;
use std::io::Write;
@@ -620,6 +622,74 @@ async fn includes_user_instructions_message_in_request() {
assert_message_ends_with(&request_body["input"][1], "</environment_context>");
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn skills_append_to_instructions_when_feature_enabled() {
skip_if_no_network!();
let server = MockServer::start().await;
let resp_mock = responses::mount_sse_once(&server, sse_completed("resp1")).await;
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let codex_home = TempDir::new().unwrap();
let skill_dir = codex_home.path().join("skills/demo");
std::fs::create_dir_all(&skill_dir).expect("create skill dir");
std::fs::write(
skill_dir.join("SKILL.md"),
"---\nname: demo\ndescription: build charts\n---\n\n# body\n",
)
.expect("write skill");
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = model_provider;
config.features.enable(Feature::Skills);
config.cwd = codex_home.path().to_path_buf();
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let codex = conversation_manager
.new_conversation(config)
.await
.expect("create new conversation")
.conversation;
codex
.submit(Op::UserInput {
items: vec![UserInput::Text {
text: "hello".into(),
}],
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let request = resp_mock.single_request();
let request_body = request.body_json();
assert_message_role(&request_body["input"][0], "user");
let instructions_text = request_body["input"][0]["content"][0]["text"]
.as_str()
.expect("instructions text");
assert!(
instructions_text.contains("## Skills"),
"expected skills section present"
);
assert!(
instructions_text.contains("demo: build charts"),
"expected skill summary"
);
let expected_path = normalize_path(skill_dir.join("SKILL.md")).unwrap();
let expected_path_str = expected_path.to_string_lossy().replace('\\', "/");
assert!(
instructions_text.contains(&expected_path_str),
"expected path {expected_path_str} in instructions"
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn includes_configured_effort_in_request() -> anyhow::Result<()> {
skip_if_no_network!(Ok(()));

View File

@@ -22,6 +22,7 @@ rustPlatform.buildRustPackage (_: {
cargoLock.outputHashes = {
"ratatui-0.29.0" = "sha256-HBvT5c8GsiCxMffNjJGLmHnvG77A6cqEL+1ARurBXho=";
"crossterm-0.28.1" = "sha256-6qCtfSMuXACKFb9ATID39XyFDIEMFDmbx6SSmNe+728=";
"rmcp-0.9.0" = "sha256-0iPrpf0Ha/facO3p5e0hUKHBqGp/iS+C+OdS+pRKMOU=";
};
meta = with lib; {

View File

@@ -38,6 +38,13 @@ SERVER_NOTIFICATION_TYPE_NAMES: list[str] = []
# order to compile without warnings.
LARGE_ENUMS = {"ServerResult"}
# some types need setting a default value for `r#type`
# ref: [#7417](https://github.com/openai/codex/pull/7417)
default_type_values: dict[str, str] = {
"ToolInputSchema": "object",
"ToolOutputSchema": "object",
}
def main() -> int:
parser = argparse.ArgumentParser(
@@ -351,6 +358,14 @@ class StructField:
out.append(f" pub {self.name}: {self.type_name},\n")
def append_serde_attr(existing: str | None, fragment: str) -> str:
if existing is None:
return f"#[serde({fragment})]"
assert existing.startswith("#[serde(") and existing.endswith(")]"), existing
body = existing[len("#[serde(") : -2]
return f"#[serde({body}, {fragment})]"
def define_struct(
name: str,
properties: dict[str, Any],
@@ -359,6 +374,14 @@ def define_struct(
) -> list[str]:
out: list[str] = []
type_default_fn: str | None = None
if name in default_type_values:
snake_name = to_snake_case(name) or name
type_default_fn = f"{snake_name}_type_default_str"
out.append(f"fn {type_default_fn}() -> String {{\n")
out.append(f' "{default_type_values[name]}".to_string()\n')
out.append("}\n\n")
fields: list[StructField] = []
for prop_name, prop in properties.items():
if prop_name == "_meta":
@@ -380,6 +403,10 @@ def define_struct(
if is_optional:
prop_type = f"Option<{prop_type}>"
rs_prop = rust_prop_name(prop_name, is_optional)
if prop_name == "type" and type_default_fn:
rs_prop.serde = append_serde_attr(rs_prop.serde, f'default = "{type_default_fn}"')
if prop_type.startswith("&'static str"):
fields.append(StructField("const", rs_prop.name, prop_type, rs_prop.serde, rs_prop.ts))
else:

View File

@@ -1474,6 +1474,10 @@ pub struct Tool {
pub title: Option<String>,
}
fn tool_output_schema_type_default_str() -> String {
"object".to_string()
}
/// An optional JSON Schema object defining the structure of the tool's output returned in
/// the structuredContent field of a CallToolResult.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
@@ -1484,9 +1488,14 @@ pub struct ToolOutputSchema {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub required: Option<Vec<String>>,
#[serde(default = "tool_output_schema_type_default_str")]
pub r#type: String, // &'static str = "object"
}
fn tool_input_schema_type_default_str() -> String {
"object".to_string()
}
/// A JSON Schema object defining the expected parameters for the tool.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize, JsonSchema, TS)]
pub struct ToolInputSchema {
@@ -1496,6 +1505,7 @@ pub struct ToolInputSchema {
#[serde(default, skip_serializing_if = "Option::is_none")]
#[ts(optional)]
pub required: Option<Vec<String>>,
#[serde(default = "tool_input_schema_type_default_str")]
pub r#type: String, // &'static str = "object"
}

View File

@@ -14,6 +14,8 @@ use crate::pager_overlay::Overlay;
use crate::render::highlight::highlight_bash_to_lines;
use crate::render::renderable::Renderable;
use crate::resume_picker::ResumeSelection;
use crate::skill_error_prompt::SkillErrorPromptOutcome;
use crate::skill_error_prompt::run_skill_error_prompt;
use crate::tui;
use crate::tui::TuiEvent;
use crate::update_action::UpdateAction;
@@ -36,6 +38,7 @@ use codex_core::protocol::Op;
use codex_core::protocol::SessionSource;
use codex_core::protocol::TokenUsage;
use codex_core::protocol_config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_core::skills::load_skills;
use codex_protocol::ConversationId;
use color_eyre::eyre::Result;
use color_eyre::eyre::WrapErr;
@@ -267,6 +270,20 @@ impl App {
SessionSource::Cli,
));
let skills_outcome = load_skills(&config);
if !skills_outcome.errors.is_empty() {
match run_skill_error_prompt(tui, &skills_outcome.errors).await {
SkillErrorPromptOutcome::Exit => {
return Ok(AppExitInfo {
token_usage: TokenUsage::default(),
conversation_id: None,
update_action: None,
});
}
SkillErrorPromptOutcome::Continue => {}
}
}
let enhanced_keys_supported = tui.enhanced_keys_supported();
let mut chat_widget = match resume_selection {


@@ -181,6 +181,14 @@ pub fn normalize_pasted_path(pasted: &str) -> Option<PathBuf> {
drive || unc
};
if looks_like_windows_path {
#[cfg(target_os = "linux")]
{
if is_probably_wsl()
&& let Some(converted) = convert_windows_path_to_wsl(pasted)
{
return Some(converted);
}
}
return Some(PathBuf::from(pasted));
}
@@ -193,6 +201,41 @@ pub fn normalize_pasted_path(pasted: &str) -> Option<PathBuf> {
None
}
#[cfg(target_os = "linux")]
fn is_probably_wsl() -> bool {
std::env::var_os("WSL_DISTRO_NAME").is_some()
|| std::env::var_os("WSL_INTEROP").is_some()
|| std::env::var_os("WSLENV").is_some()
}
#[cfg(target_os = "linux")]
fn convert_windows_path_to_wsl(input: &str) -> Option<PathBuf> {
if input.starts_with("\\\\") {
return None;
}
let drive_letter = input.chars().next()?.to_ascii_lowercase();
if !drive_letter.is_ascii_lowercase() {
return None;
}
if input.get(1..2) != Some(":") {
return None;
}
let mut result = PathBuf::from(format!("/mnt/{drive_letter}"));
for component in input
.get(2..)?
.trim_start_matches(['\\', '/'])
.split(['\\', '/'])
.filter(|component| !component.is_empty())
{
result.push(component);
}
Some(result)
}
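The drive-letter conversion above can be sketched as a standalone, platform-independent function. This is a simplified re-implementation for illustration only (the real helper is gated to Linux builds and sits behind the WSL detection check); the shape of the logic is the same: reject UNC paths, lowercase the drive letter, and rebuild the remainder under `/mnt/<drive>`.

```rust
use std::path::PathBuf;

// Illustrative sketch of Windows→WSL path mapping:
// "C:\Users\Alice" → "/mnt/c/Users/Alice".
fn windows_to_wsl(input: &str) -> Option<PathBuf> {
    // UNC shares ("\\server\share") have no /mnt equivalent; bail out.
    if input.starts_with("\\\\") {
        return None;
    }
    let drive = input.chars().next()?.to_ascii_lowercase();
    // Require a plain "X:" drive prefix.
    if !drive.is_ascii_lowercase() || input.get(1..2) != Some(":") {
        return None;
    }
    let mut out = PathBuf::from(format!("/mnt/{drive}"));
    // Accept both backslash and forward-slash separators, skipping empties.
    for part in input
        .get(2..)?
        .trim_start_matches(['\\', '/'])
        .split(['\\', '/'])
        .filter(|p| !p.is_empty())
    {
        out.push(part);
    }
    Some(out)
}

fn main() {
    let p = windows_to_wsl(r"C:\Users\Alice\Pictures\cat.png").unwrap();
    assert_eq!(p, PathBuf::from("/mnt/c/Users/Alice/Pictures/cat.png"));
    assert!(windows_to_wsl(r"\\server\share").is_none());
}
```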
/// Infer an image format for the provided path based on its extension.
pub fn pasted_image_format(path: &Path) -> EncodedImageFormat {
match path
@@ -210,6 +253,40 @@ pub fn pasted_image_format(path: &Path) -> EncodedImageFormat {
#[cfg(test)]
mod pasted_paths_tests {
use super::*;
#[cfg(target_os = "linux")]
use std::ffi::OsString;
#[cfg(target_os = "linux")]
struct EnvVarGuard {
key: &'static str,
original: Option<OsString>,
}
#[cfg(target_os = "linux")]
impl EnvVarGuard {
fn set(key: &'static str, value: &str) -> Self {
let original = std::env::var_os(key);
unsafe {
std::env::set_var(key, value);
}
Self { key, original }
}
}
#[cfg(target_os = "linux")]
impl Drop for EnvVarGuard {
fn drop(&mut self) {
if let Some(original) = &self.original {
unsafe {
std::env::set_var(self.key, original);
}
} else {
unsafe {
std::env::remove_var(self.key);
}
}
}
}
#[cfg(not(windows))]
#[test]
@@ -223,7 +300,17 @@ mod pasted_paths_tests {
fn normalize_file_url_windows() {
let input = r"C:\Temp\example.png";
let result = normalize_pasted_path(input).expect("should parse file URL");
assert_eq!(result, PathBuf::from(r"C:\Temp\example.png"));
#[cfg(target_os = "linux")]
let expected = if is_probably_wsl()
&& let Some(converted) = convert_windows_path_to_wsl(input)
{
converted
} else {
PathBuf::from(r"C:\Temp\example.png")
};
#[cfg(not(target_os = "linux"))]
let expected = PathBuf::from(r"C:\Temp\example.png");
assert_eq!(result, expected);
}
#[test]
@@ -291,10 +378,17 @@ mod pasted_paths_tests {
fn normalize_unquoted_windows_path_with_spaces() {
let input = r"C:\\Users\\Alice\\My Pictures\\example image.png";
let result = normalize_pasted_path(input).expect("should accept unquoted windows path");
assert_eq!(
result,
#[cfg(target_os = "linux")]
let expected = if is_probably_wsl()
&& let Some(converted) = convert_windows_path_to_wsl(input)
{
converted
} else {
PathBuf::from(r"C:\\Users\\Alice\\My Pictures\\example image.png")
);
};
#[cfg(not(target_os = "linux"))]
let expected = PathBuf::from(r"C:\\Users\\Alice\\My Pictures\\example image.png");
assert_eq!(result, expected);
}
#[test]
@@ -322,4 +416,16 @@ mod pasted_paths_tests {
EncodedImageFormat::Other
);
}
#[cfg(target_os = "linux")]
#[test]
fn normalize_windows_path_in_wsl() {
let _guard = EnvVarGuard::set("WSL_DISTRO_NAME", "Ubuntu-24.04");
let input = r"C:\\Users\\Alice\\Pictures\\example image.png";
let result = normalize_pasted_path(input).expect("should convert windows path on wsl");
assert_eq!(
result,
PathBuf::from("/mnt/c/Users/Alice/Pictures/example image.png")
);
}
}


@@ -67,6 +67,7 @@ mod resume_picker;
mod selection_list;
mod session_log;
mod shimmer;
mod skill_error_prompt;
mod slash_command;
mod status;
mod status_indicator_widget;


@@ -0,0 +1,164 @@
use crate::tui::FrameRequester;
use crate::tui::Tui;
use crate::tui::TuiEvent;
use codex_core::skills::SkillError;
use crossterm::event::KeyCode;
use crossterm::event::KeyEvent;
use crossterm::event::KeyEventKind;
use crossterm::event::KeyModifiers;
use ratatui::buffer::Buffer;
use ratatui::layout::Rect;
use ratatui::prelude::Stylize as _;
use ratatui::text::Line;
use ratatui::widgets::Block;
use ratatui::widgets::Borders;
use ratatui::widgets::Clear;
use ratatui::widgets::Paragraph;
use ratatui::widgets::Widget;
use ratatui::widgets::WidgetRef;
use ratatui::widgets::Wrap;
use tokio_stream::StreamExt;
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub(crate) enum SkillErrorPromptOutcome {
Continue,
Exit,
}
pub(crate) async fn run_skill_error_prompt(
tui: &mut Tui,
errors: &[SkillError],
) -> SkillErrorPromptOutcome {
struct AltScreenGuard<'a> {
tui: &'a mut Tui,
}
impl<'a> AltScreenGuard<'a> {
fn enter(tui: &'a mut Tui) -> Self {
let _ = tui.enter_alt_screen();
Self { tui }
}
}
impl Drop for AltScreenGuard<'_> {
fn drop(&mut self) {
let _ = self.tui.leave_alt_screen();
}
}
let alt = AltScreenGuard::enter(tui);
let mut screen = SkillErrorScreen::new(alt.tui.frame_requester(), errors);
let _ = alt.tui.draw(u16::MAX, |frame| {
frame.render_widget_ref(&screen, frame.area());
});
let events = alt.tui.event_stream();
tokio::pin!(events);
while !screen.is_done() {
if let Some(event) = events.next().await {
match event {
TuiEvent::Key(key_event) => screen.handle_key(key_event),
TuiEvent::Paste(_) => {}
TuiEvent::Draw => {
let _ = alt.tui.draw(u16::MAX, |frame| {
frame.render_widget_ref(&screen, frame.area());
});
}
}
} else {
screen.confirm_continue();
break;
}
}
screen.outcome()
}
struct SkillErrorScreen {
request_frame: FrameRequester,
lines: Vec<Line<'static>>,
done: bool,
exit: bool,
}
impl SkillErrorScreen {
fn new(request_frame: FrameRequester, errors: &[SkillError]) -> Self {
let mut lines: Vec<Line<'static>> = Vec::new();
lines.push(Line::from("Skill validation errors detected".bold()));
lines.push(Line::from(
"Fix these SKILL.md files and restart. Invalid skills are ignored until resolved. Press enter or esc to continue, Ctrl+C or Ctrl+D to exit.",
));
lines.push(Line::from(""));
for error in errors {
let message = format!("- {}: {}", error.path.display(), error.message);
lines.push(Line::from(message));
}
Self {
request_frame,
lines,
done: false,
exit: false,
}
}
fn is_done(&self) -> bool {
self.done
}
fn confirm_continue(&mut self) {
self.done = true;
self.exit = false;
self.request_frame.schedule_frame();
}
fn confirm_exit(&mut self) {
self.done = true;
self.exit = true;
self.request_frame.schedule_frame();
}
fn outcome(&self) -> SkillErrorPromptOutcome {
if self.exit {
SkillErrorPromptOutcome::Exit
} else {
SkillErrorPromptOutcome::Continue
}
}
fn handle_key(&mut self, key_event: KeyEvent) {
if key_event.kind == KeyEventKind::Release {
return;
}
if key_event
.modifiers
.intersects(KeyModifiers::CONTROL | KeyModifiers::META)
&& matches!(key_event.code, KeyCode::Char('c') | KeyCode::Char('d'))
{
self.confirm_exit();
return;
}
match key_event.code {
KeyCode::Enter | KeyCode::Esc | KeyCode::Char(' ') | KeyCode::Char('q') => {
self.confirm_continue();
}
_ => {}
}
}
}
impl WidgetRef for &SkillErrorScreen {
fn render_ref(&self, area: Rect, buf: &mut Buffer) {
Clear.render(area, buf);
let block = Block::default()
.title("Skill errors".bold())
.borders(Borders::ALL);
Paragraph::new(self.lines.clone())
.block(block)
.wrap(Wrap { trim: true })
.render(area, buf);
}
}


@@ -8,11 +8,35 @@ version.workspace = true
name = "codex_windows_sandbox"
path = "src/lib.rs"
[[bin]]
name = "codex-windows-sandbox-setup"
path = "src/bin/setup.rs"
[[bin]]
name = "codex-command-runner"
path = "src/bin/command_runner.rs"
[[bin]]
name = "runner-smoke"
path = "src/bin/runner_smoke.rs"
[[bin]]
name = "runner-stub"
path = "src/bin/runner_stub.rs"
[dependencies]
anyhow = "1.0"
base64 = { workspace = true }
dunce = "1.0"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
chrono = { version = "0.4", default-features = false, features = ["clock", "std"] }
windows = { version = "0.58", features = [
"Win32_Foundation",
"Win32_NetworkManagement_WindowsFirewall",
"Win32_System_Com",
"Win32_System_Variant",
] }
[dependencies.codex-protocol]
package = "codex-protocol"
path = "../protocol"
@@ -40,10 +64,14 @@ features = [
"Win32_System_Console",
"Win32_Storage_FileSystem",
"Win32_System_Diagnostics_ToolHelp",
"Win32_NetworkManagement_NetManagement",
"Win32_Networking_WinSock",
"Win32_System_LibraryLoader",
"Win32_System_Com",
"Win32_Security_Cryptography",
"Win32_Security_Authentication_Identity",
"Win32_UI_Shell",
"Win32_System_Registry",
]
version = "0.52"
[dev-dependencies]


@@ -160,6 +160,7 @@ pub unsafe fn dacl_effective_allows_write(p_dacl: *mut ACL, psid: *mut c_void) -
// Fallback: simple allow ACE scan (already ignores inherit-only)
dacl_has_write_allow_for_sid(p_dacl, psid)
}
#[allow(clippy::missing_safety_doc)]
pub unsafe fn add_allow_ace(path: &Path, psid: *mut c_void) -> Result<bool> {
let mut p_sd: *mut c_void = std::ptr::null_mut();
let mut p_dacl: *mut ACL = std::ptr::null_mut();
@@ -217,6 +218,10 @@ pub unsafe fn add_allow_ace(path: &Path, psid: *mut c_void) -> Result<bool> {
Ok(added)
}
/// Adds a deny ACE to prevent write/append/delete for the given SID on the target path.
///
/// # Safety
/// Caller must ensure `psid` points to a valid SID and `path` refers to an existing file or directory.
pub unsafe fn add_deny_write_ace(path: &Path, psid: *mut c_void) -> Result<bool> {
let mut p_sd: *mut c_void = std::ptr::null_mut();
let mut p_dacl: *mut ACL = std::ptr::null_mut();
@@ -330,6 +335,10 @@ pub unsafe fn revoke_ace(path: &Path, psid: *mut c_void) {
}
}
/// Grants RX to the null device for the given SID to support stdout/stderr redirection.
///
/// # Safety
/// Caller must ensure `psid` is a valid SID pointer.
pub unsafe fn allow_null_device(psid: *mut c_void) {
let desired = 0x00020000 | 0x00040000; // READ_CONTROL | WRITE_DAC
let h = CreateFileW(


@@ -0,0 +1,219 @@
use anyhow::{Context, Result};
use codex_windows_sandbox::{
allow_null_device, cap_sid_file, convert_string_sid_to_sid, create_process_as_user,
create_readonly_token_with_cap_from, create_workspace_write_token_with_cap_from,
get_current_token_for_restriction, load_or_create_cap_sids, log_note, parse_policy, to_wide,
SandboxPolicy,
};
use serde::Deserialize;
use std::collections::HashMap;
use std::ffi::c_void;
use std::fs;
use std::io::Read;
use std::path::PathBuf;
use windows_sys::Win32::Foundation::{CloseHandle, GetLastError, HANDLE};
use windows_sys::Win32::Storage::FileSystem::{
CreateFileW, FILE_GENERIC_READ, FILE_GENERIC_WRITE, OPEN_EXISTING,
};
use windows_sys::Win32::System::JobObjects::AssignProcessToJobObject;
use windows_sys::Win32::System::JobObjects::CreateJobObjectW;
use windows_sys::Win32::System::JobObjects::JobObjectExtendedLimitInformation;
use windows_sys::Win32::System::JobObjects::SetInformationJobObject;
use windows_sys::Win32::System::JobObjects::JOBOBJECT_EXTENDED_LIMIT_INFORMATION;
use windows_sys::Win32::System::JobObjects::JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
use windows_sys::Win32::System::Threading::WaitForSingleObject;
use windows_sys::Win32::System::Threading::INFINITE;
#[derive(Debug, Deserialize)]
struct RunnerRequest {
policy_json_or_preset: String,
#[allow(dead_code)]
sandbox_policy_cwd: PathBuf,
codex_home: PathBuf,
command: Vec<String>,
cwd: PathBuf,
env_map: HashMap<String, String>,
timeout_ms: Option<u64>,
stdin_pipe: String,
stdout_pipe: String,
stderr_pipe: String,
}
// Best-effort early marker to detect image load before main.
#[used]
#[allow(dead_code)]
static LOAD_MARKER: fn() = load_marker;
#[allow(dead_code)]
const fn load_marker() {
// const fn placeholder; actual work is in write_load_marker, invoked at start of main.
}
fn write_load_marker() {
if let Some(mut p) = dirs_next::home_dir() {
p.push(".codex");
let _ = std::fs::create_dir_all(&p);
p.push("runner_load_marker.txt");
let _ = std::fs::write(&p, "loaded");
}
}
unsafe fn create_job_kill_on_close() -> Result<HANDLE> {
let h = CreateJobObjectW(std::ptr::null_mut(), std::ptr::null());
if h == 0 {
return Err(anyhow::anyhow!("CreateJobObjectW failed"));
}
let mut limits: JOBOBJECT_EXTENDED_LIMIT_INFORMATION = std::mem::zeroed();
limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
let ok = SetInformationJobObject(
h,
JobObjectExtendedLimitInformation,
&mut limits as *mut _ as *mut _,
std::mem::size_of::<JOBOBJECT_EXTENDED_LIMIT_INFORMATION>() as u32,
);
if ok == 0 {
return Err(anyhow::anyhow!("SetInformationJobObject failed"));
}
Ok(h)
}
fn main() -> Result<()> {
write_load_marker();
let mut input = String::new();
std::io::stdin()
.read_to_string(&mut input)
.context("read request")?;
let req: RunnerRequest =
serde_json::from_str(&input).context("parse runner request json from stdin")?;
log_note(
&format!(
"runner start cwd={} cmd={:?}",
req.cwd.display(),
req.command
),
Some(&req.codex_home),
);
log_note(
&format!(
"stdin_pipe={} stdout_pipe={} stderr_pipe={}",
req.stdin_pipe, req.stdout_pipe, req.stderr_pipe
),
Some(&req.codex_home),
);
let policy = parse_policy(&req.policy_json_or_preset).context("parse policy_json_or_preset")?;
// Ensure cap SIDs exist.
let caps = load_or_create_cap_sids(&req.codex_home);
let cap_sid_path = cap_sid_file(&req.codex_home);
fs::write(&cap_sid_path, serde_json::to_string(&caps)?).context("write cap sid file")?;
let psid_cap: *mut c_void = match &policy {
SandboxPolicy::ReadOnly => unsafe { convert_string_sid_to_sid(&caps.readonly).unwrap() },
SandboxPolicy::WorkspaceWrite { .. } => unsafe {
convert_string_sid_to_sid(&caps.workspace).unwrap()
},
SandboxPolicy::DangerFullAccess => {
anyhow::bail!("DangerFullAccess is not supported for runner")
}
};
// Create restricted token from current process token.
let base = unsafe { get_current_token_for_restriction()? };
let token_res: Result<(HANDLE, *mut c_void)> = unsafe {
match &policy {
SandboxPolicy::ReadOnly => create_readonly_token_with_cap_from(base, psid_cap),
SandboxPolicy::WorkspaceWrite { .. } => {
create_workspace_write_token_with_cap_from(base, psid_cap)
}
SandboxPolicy::DangerFullAccess => unreachable!(),
}
};
let (h_token, psid_to_use) = token_res?;
unsafe {
CloseHandle(base);
}
unsafe {
allow_null_device(psid_to_use);
}
// Open named pipes for stdio.
let open_pipe = |name: &str, access: u32| -> Result<HANDLE> {
let path = to_wide(name);
let handle = unsafe {
CreateFileW(
path.as_ptr(),
access,
0,
std::ptr::null_mut(),
OPEN_EXISTING,
0,
0,
)
};
if handle == windows_sys::Win32::Foundation::INVALID_HANDLE_VALUE {
let err = unsafe { GetLastError() };
log_note(
&format!("CreateFileW failed for pipe {name}: {err}"),
Some(&req.codex_home),
);
return Err(anyhow::anyhow!("CreateFileW failed for pipe {name}: {err}"));
}
Ok(handle)
};
let h_stdin = open_pipe(&req.stdin_pipe, FILE_GENERIC_READ)?;
let h_stdout = open_pipe(&req.stdout_pipe, FILE_GENERIC_WRITE)?;
let h_stderr = open_pipe(&req.stderr_pipe, FILE_GENERIC_WRITE)?;
log_note("pipes opened", Some(&req.codex_home));
// Build command and env, spawn with CreateProcessWithTokenW.
let (proc_info, _si) = unsafe {
create_process_as_user(
h_token,
&req.command,
&req.cwd,
&req.env_map,
Some(&req.codex_home),
Some((h_stdin, h_stdout, h_stderr)),
)?
};
log_note("spawned child process", Some(&req.codex_home));
// Optional job kill on close.
let h_job = unsafe { create_job_kill_on_close().ok() };
if let Some(job) = h_job {
unsafe {
let _ = AssignProcessToJobObject(job, proc_info.hProcess);
}
}
// Wait for process.
let _ = unsafe {
WaitForSingleObject(
proc_info.hProcess,
req.timeout_ms.map(|ms| ms as u32).unwrap_or(INFINITE),
)
};
let mut exit_code: u32 = 1;
unsafe {
windows_sys::Win32::System::Threading::GetExitCodeProcess(
proc_info.hProcess,
&mut exit_code,
);
if proc_info.hThread != 0 {
CloseHandle(proc_info.hThread);
}
if proc_info.hProcess != 0 {
CloseHandle(proc_info.hProcess);
}
CloseHandle(h_token);
if let Some(job) = h_job {
CloseHandle(job);
}
}
log_note(
&format!("runner exiting with code {}", exit_code),
Some(&req.codex_home),
);
std::process::exit(exit_code as i32);
}


@@ -0,0 +1,70 @@
use anyhow::Result;
use codex_windows_sandbox::to_wide;
use codex_windows_sandbox::{require_logon_sandbox_creds, SandboxPolicy};
use std::collections::HashMap;
use windows_sys::Win32::Foundation::GetLastError;
use windows_sys::Win32::System::Threading::CreateProcessWithLogonW;
use windows_sys::Win32::System::Threading::LOGON_WITH_PROFILE;
use windows_sys::Win32::System::Threading::{PROCESS_INFORMATION, STARTUPINFOW};
fn main() -> Result<()> {
let cwd = std::env::current_dir()?;
let codex_home = dirs_next::home_dir().unwrap_or(cwd.clone()).join(".codex");
let policy = SandboxPolicy::ReadOnly;
let _policy_json = serde_json::to_string(&policy)?;
let env_map: HashMap<String, String> = HashMap::new();
// Fetch sandbox creds (will prompt setup if missing).
let creds = require_logon_sandbox_creds(&policy, &cwd, &cwd, &env_map, &codex_home)?;
// Optional target override:
// - "stub" to launch runner-stub.exe
// - any other argument list is treated as the full command line to run.
let args: Vec<String> = std::env::args().skip(1).collect();
let target = args.first().cloned().unwrap_or_else(|| "cmd".to_string());
let mut si: STARTUPINFOW = unsafe { std::mem::zeroed() };
si.cb = std::mem::size_of::<STARTUPINFOW>() as u32;
let mut pi: PROCESS_INFORMATION = unsafe { std::mem::zeroed() };
let user_w = to_wide(&creds.username);
let domain_w = to_wide(".");
let password_w = to_wide(&creds.password);
let cmdline = if target == "stub" {
std::env::current_exe()
.ok()
.and_then(|p| p.parent().map(|d| d.join("runner-stub.exe")))
.and_then(|p| p.to_str().map(|s| s.to_string()))
.unwrap_or_else(|| "runner-stub.exe".to_string())
} else if !args.is_empty() {
args.join(" ")
} else {
"cmd /c whoami".to_string()
};
let cmd_w = to_wide(&cmdline);
let cwd_w = to_wide(&cwd);
let ok = unsafe {
CreateProcessWithLogonW(
user_w.as_ptr(),
domain_w.as_ptr(),
password_w.as_ptr(),
LOGON_WITH_PROFILE,
std::ptr::null(),
cmd_w.as_ptr() as *mut _,
0,
std::ptr::null(),
cwd_w.as_ptr(),
&si,
&mut pi,
)
};
if ok == 0 {
let err = unsafe { GetLastError() };
println!("CreateProcessWithLogonW failed: {}", err);
return Ok(());
}
println!(
"CreateProcessWithLogonW succeeded pid={} (target={})",
pi.dwProcessId, target
);
Ok(())
}


@@ -0,0 +1,33 @@
use anyhow::Result;
use codex_windows_sandbox::{run_windows_sandbox_capture, SandboxPolicy};
use std::collections::HashMap;
fn main() -> Result<()> {
let cwd = std::env::current_dir()?;
let codex_home = dirs_next::home_dir().unwrap_or(cwd.clone()).join(".codex");
let policy = SandboxPolicy::ReadOnly;
let policy_json = serde_json::to_string(&policy)?;
let mut env_map = HashMap::new();
env_map.insert("SBX_DEBUG".to_string(), "1".to_string());
let res = run_windows_sandbox_capture(
&policy_json,
&cwd,
&codex_home,
vec![
"cmd".to_string(),
"/c".to_string(),
"echo smoke-runner".to_string(),
],
&cwd,
env_map,
Some(10_000),
)?;
println!("exit_code={}", res.exit_code);
println!("stdout={}", String::from_utf8_lossy(&res.stdout));
println!("stderr={}", String::from_utf8_lossy(&res.stderr));
println!("timed_out={}", res.timed_out);
Ok(())
}


@@ -0,0 +1,105 @@
use anyhow::Result;
use codex_windows_sandbox::{
convert_string_sid_to_sid, create_readonly_token_with_cap_from,
get_current_token_for_restriction, load_or_create_cap_sids, to_wide,
};
use std::collections::HashMap;
use windows_sys::Win32::Foundation::{CloseHandle, GetLastError};
use windows_sys::Win32::System::Threading::{
CreateProcessAsUserW, WaitForSingleObject, CREATE_UNICODE_ENVIRONMENT, INFINITE,
PROCESS_INFORMATION, STARTUPINFOW,
};
fn main() -> Result<()> {
// Log current environment for diagnostics to a file under the sandbox user's profile.
let env_dump = std::env::vars()
.map(|(k, v)| format!("{k}={v}"))
.collect::<Vec<_>>()
.join("\n");
// Attempt multiple destinations; log errors to stderr.
if let Some(mut p) = dirs_next::home_dir() {
p.push(".codex");
if let Err(e) = std::fs::create_dir_all(&p) {
eprintln!("failed to create {:?}: {e}", p);
}
p.push("runner_stub_env.txt");
if let Err(e) = std::fs::write(&p, &env_dump) {
eprintln!("failed to write {:?}: {e}", p);
}
} else {
eprintln!("home_dir not available");
}
let public_path = std::path::Path::new(r"C:\Users\Public\runner_stub_env.txt");
if let Err(e) = std::fs::write(public_path, &env_dump) {
eprintln!("failed to write {:?}: {e}", public_path);
}
let cwd_path = std::env::current_dir()
.unwrap_or_else(|_| std::path::PathBuf::from("."))
.join("runner_stub_env.txt");
if let Err(e) = std::fs::write(&cwd_path, &env_dump) {
eprintln!("failed to write {:?}: {e}", cwd_path);
}
// Create restricted token with readonly capability.
let codex_home = dirs_next::home_dir()
.unwrap_or_else(std::env::temp_dir)
.join(".codex");
let caps = load_or_create_cap_sids(&codex_home);
let psid_cap = unsafe { convert_string_sid_to_sid(&caps.readonly).unwrap() };
let base = unsafe { get_current_token_for_restriction()? };
let (restricted, _psid_used) = unsafe { create_readonly_token_with_cap_from(base, psid_cap)? };
unsafe {
CloseHandle(base);
}
// Launch a trivial command with the restricted token.
let cmd = "cmd";
let args = "/c echo restricted-stub";
let mut si: STARTUPINFOW = unsafe { std::mem::zeroed() };
si.cb = std::mem::size_of::<STARTUPINFOW>() as u32;
let mut pi: PROCESS_INFORMATION = unsafe { std::mem::zeroed() };
let mut cmdline = to_wide(format!("{cmd} {args}"));
let cwd = std::env::current_dir()?;
let mut env_block: Vec<u16> = Vec::new();
let env_map: HashMap<String, String> = std::env::vars().collect();
for (k, v) in env_map {
let mut w = to_wide(format!("{k}={v}"));
w.pop();
env_block.extend_from_slice(&w);
env_block.push(0);
}
env_block.push(0);
let ok = unsafe {
CreateProcessAsUserW(
restricted,
std::ptr::null(),
cmdline.as_mut_ptr(),
std::ptr::null_mut(),
std::ptr::null_mut(),
0,
CREATE_UNICODE_ENVIRONMENT,
env_block.as_ptr() as *const _,
to_wide(&cwd).as_ptr(),
&mut si,
&mut pi,
)
};
if ok == 0 {
eprintln!("CreateProcessAsUserW failed: {}", unsafe { GetLastError() });
} else {
unsafe {
WaitForSingleObject(pi.hProcess, INFINITE);
if pi.hThread != 0 {
CloseHandle(pi.hThread);
}
if pi.hProcess != 0 {
CloseHandle(pi.hProcess);
}
}
}
unsafe {
CloseHandle(restricted);
}
Ok(())
}
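The `env_block` construction in the stub above encodes the layout `CreateProcessAsUserW` expects with `CREATE_UNICODE_ENVIRONMENT`: each `KEY=VALUE` entry is NUL-terminated UTF-16, and the whole block ends with one extra NUL, so the block finishes with two NULs in a row. A portable sketch of that layout (using `str::encode_utf16` instead of the Windows-only `OsStr` wide encoding, purely for illustration):

```rust
// Sketch of the CREATE_UNICODE_ENVIRONMENT block layout:
// "KEY=VALUE\0KEY2=VALUE2\0\0" encoded as UTF-16 code units.
fn build_env_block(vars: &[(&str, &str)]) -> Vec<u16> {
    let mut block = Vec::new();
    for (k, v) in vars {
        block.extend(format!("{k}={v}").encode_utf16());
        block.push(0); // terminate this entry
    }
    block.push(0); // terminate the whole block (second consecutive NUL)
    block
}

fn main() {
    let block = build_env_block(&[("PATH", r"C:\Windows"), ("SBX_DEBUG", "1")]);
    // The block must end with two NULs.
    assert_eq!(block[block.len() - 2..], [0u16, 0]);
    // The first entry decodes back to "PATH=C:\Windows".
    let first: Vec<u16> = block.iter().copied().take_while(|&c| c != 0).collect();
    assert_eq!(String::from_utf16(&first).unwrap(), r"PATH=C:\Windows");
}
```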


@@ -0,0 +1,784 @@
use anyhow::Context;
use anyhow::Result;
use base64::engine::general_purpose::STANDARD as BASE64;
use base64::Engine;
use codex_windows_sandbox::add_allow_ace;
use codex_windows_sandbox::dpapi_protect;
use codex_windows_sandbox::sandbox_dir;
use codex_windows_sandbox::string_from_sid_bytes;
use codex_windows_sandbox::SETUP_VERSION;
use rand::rngs::SmallRng;
use rand::RngCore;
use rand::SeedableRng;
use serde::Deserialize;
use serde::Serialize;
use std::ffi::c_void;
use std::ffi::OsStr;
use std::fs::File;
use std::io::Write;
use std::os::windows::ffi::OsStrExt;
use std::path::Path;
use std::path::PathBuf;
use std::sync::mpsc;
use std::time::Duration;
use windows::core::Interface;
use windows::core::BSTR;
use windows::Win32::Foundation::VARIANT_TRUE;
use windows::Win32::NetworkManagement::WindowsFirewall::INetFwPolicy2;
use windows::Win32::NetworkManagement::WindowsFirewall::INetFwRule3;
use windows::Win32::NetworkManagement::WindowsFirewall::NetFwPolicy2;
use windows::Win32::NetworkManagement::WindowsFirewall::NetFwRule;
use windows::Win32::NetworkManagement::WindowsFirewall::NET_FW_ACTION_BLOCK;
use windows::Win32::NetworkManagement::WindowsFirewall::NET_FW_IP_PROTOCOL_ANY;
use windows::Win32::NetworkManagement::WindowsFirewall::NET_FW_PROFILE2_ALL;
use windows::Win32::NetworkManagement::WindowsFirewall::NET_FW_RULE_DIR_OUT;
use windows::Win32::System::Com::CoCreateInstance;
use windows::Win32::System::Com::CoInitializeEx;
use windows::Win32::System::Com::CoUninitialize;
use windows::Win32::System::Com::CLSCTX_INPROC_SERVER;
use windows::Win32::System::Com::COINIT_APARTMENTTHREADED;
use windows_sys::Win32::Foundation::GetLastError;
use windows_sys::Win32::Foundation::LocalFree;
use windows_sys::Win32::Foundation::ERROR_INSUFFICIENT_BUFFER;
use windows_sys::Win32::Foundation::HLOCAL;
use windows_sys::Win32::NetworkManagement::NetManagement::NERR_Success;
use windows_sys::Win32::NetworkManagement::NetManagement::NetLocalGroupAddMembers;
use windows_sys::Win32::NetworkManagement::NetManagement::NetUserAdd;
use windows_sys::Win32::NetworkManagement::NetManagement::NetUserSetInfo;
use windows_sys::Win32::NetworkManagement::NetManagement::LOCALGROUP_MEMBERS_INFO_3;
use windows_sys::Win32::NetworkManagement::NetManagement::UF_DONT_EXPIRE_PASSWD;
use windows_sys::Win32::NetworkManagement::NetManagement::UF_SCRIPT;
use windows_sys::Win32::NetworkManagement::NetManagement::USER_INFO_1;
use windows_sys::Win32::NetworkManagement::NetManagement::USER_INFO_1003;
use windows_sys::Win32::NetworkManagement::NetManagement::USER_PRIV_USER;
use windows_sys::Win32::Security::Authorization::ConvertStringSidToSidW;
use windows_sys::Win32::Security::Authorization::GetEffectiveRightsFromAclW;
use windows_sys::Win32::Security::Authorization::GetNamedSecurityInfoW;
use windows_sys::Win32::Security::Authorization::SetEntriesInAclW;
use windows_sys::Win32::Security::Authorization::SetNamedSecurityInfoW;
use windows_sys::Win32::Security::Authorization::EXPLICIT_ACCESS_W;
use windows_sys::Win32::Security::Authorization::GRANT_ACCESS;
use windows_sys::Win32::Security::Authorization::SE_FILE_OBJECT;
use windows_sys::Win32::Security::Authorization::TRUSTEE_IS_SID;
use windows_sys::Win32::Security::Authorization::TRUSTEE_W;
use windows_sys::Win32::Security::LookupAccountNameW;
use windows_sys::Win32::Security::ACL;
use windows_sys::Win32::Security::CONTAINER_INHERIT_ACE;
use windows_sys::Win32::Security::DACL_SECURITY_INFORMATION;
use windows_sys::Win32::Security::OBJECT_INHERIT_ACE;
use windows_sys::Win32::Security::SID_NAME_USE;
use windows_sys::Win32::Storage::FileSystem::DELETE;
use windows_sys::Win32::Storage::FileSystem::FILE_GENERIC_EXECUTE;
use windows_sys::Win32::Storage::FileSystem::FILE_GENERIC_READ;
use windows_sys::Win32::Storage::FileSystem::FILE_GENERIC_WRITE;
#[derive(Debug, Deserialize)]
struct Payload {
version: u32,
offline_username: String,
online_username: String,
codex_home: PathBuf,
read_roots: Vec<PathBuf>,
write_roots: Vec<PathBuf>,
real_user: String,
}
#[derive(Serialize)]
struct SandboxUserRecord {
username: String,
password: String,
}
#[derive(Serialize)]
struct SandboxUsersFile {
version: u32,
offline: SandboxUserRecord,
online: SandboxUserRecord,
}
#[derive(Serialize)]
struct SetupMarker {
version: u32,
offline_username: String,
online_username: String,
created_at: String,
}
fn log_line(log: &mut File, msg: &str) -> Result<()> {
let ts = chrono::Utc::now().to_rfc3339();
writeln!(log, "[{ts}] {msg}")?;
Ok(())
}
fn to_wide(s: &OsStr) -> Vec<u16> {
let mut v: Vec<u16> = s.encode_wide().collect();
v.push(0);
v
}
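`to_wide` produces the NUL-terminated UTF-16 buffer that W-suffixed Win32 APIs expect. A portable sketch of the same idea for plain `&str` input (the helper above takes `OsStr`, which on Windows can carry data that is not valid UTF-8; this simplified version assumes UTF-8):

```rust
// Portable sketch: encode a &str as a NUL-terminated UTF-16 "wide string",
// the argument layout expected by APIs such as CreateFileW.
fn to_wide_utf8(s: &str) -> Vec<u16> {
    let mut v: Vec<u16> = s.encode_utf16().collect();
    v.push(0); // trailing NUL terminator required by the C API
    v
}

fn main() {
    assert_eq!(to_wide_utf8("cmd"), vec![b'c' as u16, b'm' as u16, b'd' as u16, 0]);
}
```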
fn random_password() -> String {
const CHARS: &[u8] =
b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*()-_=+";
let mut rng = SmallRng::from_entropy();
let mut buf = [0u8; 24];
rng.fill_bytes(&mut buf);
buf.iter()
.map(|b| {
let idx = (*b as usize) % CHARS.len();
CHARS[idx] as char
})
.collect()
}
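`random_password` maps each random byte into the character set by modulo indexing. The mapping can be shown deterministically by feeding fixed bytes instead of an RNG (illustrative only; note that plain modulo indexing slightly favors earlier characters whenever the set size does not divide 256 evenly):

```rust
// Sketch of the byte→charset mapping used by random_password, with a
// fixed input instead of an RNG so the result is reproducible.
const CHARS: &[u8] =
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*()-_=+";

fn bytes_to_password(bytes: &[u8]) -> String {
    bytes
        .iter()
        .map(|b| CHARS[(*b as usize) % CHARS.len()] as char)
        .collect()
}

fn main() {
    // Indices 0, 1, 25, 26, 52 select 'A', 'B', 'Z', 'a', '0'.
    assert_eq!(bytes_to_password(&[0, 1, 25, 26, 52]), "ABZa0");
}
```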
fn sid_to_string(sid: &[u8]) -> Result<String> {
string_from_sid_bytes(sid).map_err(anyhow::Error::msg)
}
fn sid_bytes_to_psid(sid: &[u8]) -> Result<*mut c_void> {
let sid_str = sid_to_string(sid)?;
let sid_w = to_wide(OsStr::new(&sid_str));
let mut psid: *mut c_void = std::ptr::null_mut();
if unsafe { ConvertStringSidToSidW(sid_w.as_ptr(), &mut psid) } == 0 {
return Err(anyhow::anyhow!(
"ConvertStringSidToSidW failed: {}",
unsafe { GetLastError() }
));
}
Ok(psid)
}
fn ensure_local_user(name: &str, password: &str, log: &mut File) -> Result<()> {
let name_w = to_wide(OsStr::new(name));
let pwd_w = to_wide(OsStr::new(password));
unsafe {
let info = USER_INFO_1 {
usri1_name: name_w.as_ptr() as *mut u16,
usri1_password: pwd_w.as_ptr() as *mut u16,
usri1_password_age: 0,
usri1_priv: USER_PRIV_USER,
usri1_home_dir: std::ptr::null_mut(),
usri1_comment: std::ptr::null_mut(),
usri1_flags: UF_SCRIPT | UF_DONT_EXPIRE_PASSWD,
usri1_script_path: std::ptr::null_mut(),
};
let status = NetUserAdd(
std::ptr::null(),
1,
&info as *const _ as *mut u8,
std::ptr::null_mut(),
);
if status != NERR_Success {
// Try update password via level 1003.
let pw_info = USER_INFO_1003 {
usri1003_password: pwd_w.as_ptr() as *mut u16,
};
let upd = NetUserSetInfo(
std::ptr::null(),
name_w.as_ptr(),
1003,
&pw_info as *const _ as *mut u8,
std::ptr::null_mut(),
);
if upd != NERR_Success {
log_line(log, &format!("NetUserSetInfo failed for {name} code {upd}"))?;
return Err(anyhow::anyhow!(
"failed to create/update user {name}, code {status}/{upd}"
));
}
}
let group = to_wide(OsStr::new("Users"));
let member = LOCALGROUP_MEMBERS_INFO_3 {
lgrmi3_domainandname: name_w.as_ptr() as *mut u16,
};
let _ = NetLocalGroupAddMembers(
std::ptr::null(),
group.as_ptr(),
3,
&member as *const _ as *mut u8,
1,
);
}
Ok(())
}
fn resolve_sid(name: &str) -> Result<Vec<u8>> {
let name_w = to_wide(OsStr::new(name));
let mut sid_buffer = vec![0u8; 68];
let mut sid_len: u32 = sid_buffer.len() as u32;
let mut domain: Vec<u16> = Vec::new();
let mut domain_len: u32 = 0;
let mut use_type: SID_NAME_USE = 0;
loop {
let ok = unsafe {
LookupAccountNameW(
std::ptr::null(),
name_w.as_ptr(),
sid_buffer.as_mut_ptr() as *mut c_void,
&mut sid_len,
domain.as_mut_ptr(),
&mut domain_len,
&mut use_type,
)
};
if ok != 0 {
sid_buffer.truncate(sid_len as usize);
return Ok(sid_buffer);
}
let err = unsafe { GetLastError() };
if err == ERROR_INSUFFICIENT_BUFFER {
sid_buffer.resize(sid_len as usize, 0);
domain.resize(domain_len as usize, 0);
continue;
}
return Err(anyhow::anyhow!(
"LookupAccountNameW failed for {name}: {}",
err
));
}
}
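`resolve_sid` follows the standard Win32 "two-call" pattern: call once with a guessed buffer, and on `ERROR_INSUFFICIENT_BUFFER` grow the buffer to the size the API reported and retry. The shape of that loop, with `LookupAccountNameW` replaced by an illustrative stand-in so the sketch runs anywhere:

```rust
// Generic sketch of the Win32 two-call buffer pattern used by resolve_sid.
// `fake_lookup` stands in for LookupAccountNameW: it fails and reports the
// required size when the buffer is too small, and succeeds otherwise.
fn fake_lookup(buf: &mut Vec<u8>, needed: &mut usize) -> bool {
    const REQUIRED: usize = 28; // pretend the SID is 28 bytes long
    if buf.len() < REQUIRED {
        *needed = REQUIRED;
        false // corresponds to ERROR_INSUFFICIENT_BUFFER
    } else {
        buf[..REQUIRED].fill(1); // pretend to write the SID bytes
        *needed = REQUIRED;
        true
    }
}

fn lookup_with_retry() -> Vec<u8> {
    let mut buf = vec![0u8; 8]; // deliberately too small on the first call
    let mut needed = buf.len();
    loop {
        if fake_lookup(&mut buf, &mut needed) {
            buf.truncate(needed); // trim to the actual SID length
            return buf;
        }
        buf.resize(needed, 0); // grow to the reported size and retry
    }
}

fn main() {
    assert_eq!(lookup_with_retry().len(), 28);
}
```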
fn trustee_has_rx(path: &Path, trustee: &str) -> Result<bool> {
let sid = resolve_sid(trustee)?;
unsafe {
let sid_str = sid_to_string(&sid)?;
let sid_w = to_wide(OsStr::new(&sid_str));
let mut psid: *mut c_void = std::ptr::null_mut();
if ConvertStringSidToSidW(sid_w.as_ptr(), &mut psid) == 0 {
return Err(anyhow::anyhow!(
"ConvertStringSidToSidW failed: {}",
GetLastError()
));
}
let path_w = to_wide(path.as_os_str());
let mut existing_dacl: *mut ACL = std::ptr::null_mut();
let mut sd: *mut c_void = std::ptr::null_mut();
let get_res = GetNamedSecurityInfoW(
path_w.as_ptr() as *mut u16,
SE_FILE_OBJECT,
DACL_SECURITY_INFORMATION,
std::ptr::null_mut(),
std::ptr::null_mut(),
&mut existing_dacl,
std::ptr::null_mut(),
&mut sd,
);
if get_res != 0 {
return Err(anyhow::anyhow!(
"GetNamedSecurityInfoW failed for {}: {}",
path.display(),
get_res
));
}
let trustee = TRUSTEE_W {
pMultipleTrustee: std::ptr::null_mut(),
MultipleTrusteeOperation: 0,
TrusteeForm: TRUSTEE_IS_SID,
TrusteeType: TRUSTEE_IS_SID,
ptstrName: psid as *mut u16,
};
let mut mask: u32 = 0;
let eff = GetEffectiveRightsFromAclW(existing_dacl, &trustee, &mut mask);
if eff != 0 {
if !sd.is_null() {
LocalFree(sd as HLOCAL);
}
LocalFree(psid as HLOCAL);
return Err(anyhow::anyhow!(
"GetEffectiveRightsFromAclW failed for {}: {}",
path.display(),
eff
));
}
if !sd.is_null() {
LocalFree(sd as HLOCAL);
}
if !psid.is_null() {
LocalFree(psid as HLOCAL);
}
let needed = FILE_GENERIC_READ | FILE_GENERIC_EXECUTE;
Ok((mask & needed) == needed)
}
}
fn collect_system_roots() -> Vec<PathBuf> {
let mut roots = Vec::new();
if let Ok(sr) = std::env::var("SystemRoot") {
roots.push(PathBuf::from(sr));
} else {
roots.push(PathBuf::from(r"C:\Windows"));
}
if let Ok(pf) = std::env::var("ProgramFiles") {
roots.push(PathBuf::from(pf));
} else {
roots.push(PathBuf::from(r"C:\Program Files"));
}
if let Ok(pf86) = std::env::var("ProgramFiles(x86)") {
roots.push(PathBuf::from(pf86));
} else {
roots.push(PathBuf::from(r"C:\Program Files (x86)"));
}
if let Ok(pd) = std::env::var("ProgramData") {
roots.push(PathBuf::from(pd));
} else {
roots.push(PathBuf::from(r"C:\ProgramData"));
}
roots
}
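The four lookups above share one env-var-with-fallback shape; a tiny helper (hypothetical, not in the diff) makes that explicit:

```rust
use std::path::PathBuf;

// Hypothetical helper: read an environment variable, falling back to a
// well-known default path when it is unset or not valid Unicode.
fn env_or(var: &str, default: &str) -> PathBuf {
    std::env::var(var)
        .map(PathBuf::from)
        .unwrap_or_else(|_| PathBuf::from(default))
}
```

With it, `collect_system_roots` would reduce to four `env_or` calls pushed into the vector.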
fn add_inheritable_allow_no_log(path: &Path, sid: &[u8], mask: u32) -> Result<()> {
unsafe {
let mut psid: *mut c_void = std::ptr::null_mut();
let sid_str = sid_to_string(sid)?;
let sid_w = to_wide(OsStr::new(&sid_str));
if ConvertStringSidToSidW(sid_w.as_ptr(), &mut psid) == 0 {
return Err(anyhow::anyhow!(
"ConvertStringSidToSidW failed: {}",
GetLastError()
));
}
let path_w = to_wide(path.as_os_str());
let mut existing_dacl: *mut ACL = std::ptr::null_mut();
let mut sd: *mut c_void = std::ptr::null_mut();
let get_res = GetNamedSecurityInfoW(
path_w.as_ptr() as *mut u16,
SE_FILE_OBJECT,
DACL_SECURITY_INFORMATION,
std::ptr::null_mut(),
std::ptr::null_mut(),
&mut existing_dacl,
std::ptr::null_mut(),
&mut sd,
);
if get_res != 0 {
LocalFree(psid as HLOCAL);
return Err(anyhow::anyhow!(
"GetNamedSecurityInfoW failed for {}: {}",
path.display(),
get_res
));
}
let trustee = TRUSTEE_W {
pMultipleTrustee: std::ptr::null_mut(),
MultipleTrusteeOperation: 0,
TrusteeForm: TRUSTEE_IS_SID,
TrusteeType: TRUSTEE_IS_SID,
ptstrName: psid as *mut u16,
};
let ea = EXPLICIT_ACCESS_W {
grfAccessPermissions: mask,
grfAccessMode: GRANT_ACCESS,
grfInheritance: OBJECT_INHERIT_ACE | CONTAINER_INHERIT_ACE,
Trustee: trustee,
};
let mut new_dacl: *mut ACL = std::ptr::null_mut();
let set = SetEntriesInAclW(1, &ea, existing_dacl, &mut new_dacl);
if set != 0 {
if !sd.is_null() {
LocalFree(sd as HLOCAL);
}
LocalFree(psid as HLOCAL);
return Err(anyhow::anyhow!("SetEntriesInAclW failed: {}", set));
}
let res = SetNamedSecurityInfoW(
path_w.as_ptr() as *mut u16,
SE_FILE_OBJECT,
DACL_SECURITY_INFORMATION,
std::ptr::null_mut(),
std::ptr::null_mut(),
new_dacl,
std::ptr::null_mut(),
);
if res != 0 {
return Err(anyhow::anyhow!(
"SetNamedSecurityInfoW failed for {}: {}",
path.display(),
res
));
}
if !new_dacl.is_null() {
LocalFree(new_dacl as HLOCAL);
}
if !sd.is_null() {
LocalFree(sd as HLOCAL);
}
if !psid.is_null() {
LocalFree(psid as HLOCAL);
}
}
Ok(())
}
fn try_add_inheritable_allow_with_timeout(
path: &Path,
sid: &[u8],
mask: u32,
_log: &mut File,
timeout: Duration,
) -> Result<()> {
let (tx, rx) = mpsc::channel::<Result<()>>();
let path_buf = path.to_path_buf();
let sid_vec = sid.to_vec();
std::thread::spawn(move || {
let res = add_inheritable_allow_no_log(&path_buf, &sid_vec, mask);
let _ = tx.send(res);
});
match rx.recv_timeout(timeout) {
Ok(res) => res,
Err(mpsc::RecvTimeoutError::Timeout) => Err(anyhow::anyhow!(
"ACL grant timed out on {} after {:?}",
path.display(),
timeout
)),
Err(e) => Err(anyhow::anyhow!(
"ACL grant channel error on {}: {e}",
path.display()
)),
}
}
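The timeout wrapper above pairs a detached worker thread with `recv_timeout`; note that on timeout the ACL call keeps running in the background, only the wait is bounded. The pattern in isolation, with generic names for illustration:

```rust
use std::sync::mpsc;
use std::time::Duration;

// Run a possibly-hanging operation on a detached thread and bound only the
// wait. The thread itself is not cancelled on timeout.
fn with_timeout<T: Send + 'static>(
    timeout: Duration,
    op: impl FnOnce() -> T + Send + 'static,
) -> Option<T> {
    let (tx, rx) = mpsc::channel();
    std::thread::spawn(move || {
        // If the receiver already gave up, send fails harmlessly.
        let _ = tx.send(op());
    });
    rx.recv_timeout(timeout).ok()
}
```

Because the worker is never joined, callers must tolerate the side effects of an operation that completes after the deadline, which is why the setup binary treats a timed-out grant as best-effort and moves on.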
// Despite the legacy name, this configures the block-outbound rule through the
// Windows Firewall COM API (INetFwPolicy2) rather than shelling out to `netsh`.
fn run_netsh_firewall(sid: &str, log: &mut File) -> Result<()> {
let local_user_spec = format!("O:LSD:(A;;CC;;;{sid})");
let hr = unsafe { CoInitializeEx(None, COINIT_APARTMENTTHREADED) };
if hr.is_err() {
return Err(anyhow::anyhow!("CoInitializeEx failed: {hr:?}"));
}
let result = unsafe {
(|| -> Result<()> {
let policy: INetFwPolicy2 = CoCreateInstance(&NetFwPolicy2, None, CLSCTX_INPROC_SERVER)
.map_err(|e| anyhow::anyhow!("CoCreateInstance NetFwPolicy2: {e:?}"))?;
let rules = policy
.Rules()
.map_err(|e| anyhow::anyhow!("INetFwPolicy2::Rules: {e:?}"))?;
let name = BSTR::from("Codex Sandbox Offline - Block Outbound");
let rule: INetFwRule3 = match rules.Item(&name) {
Ok(existing) => existing.cast().map_err(|e| {
anyhow::anyhow!("cast existing firewall rule to INetFwRule3: {e:?}")
})?,
Err(_) => {
let new_rule: INetFwRule3 =
CoCreateInstance(&NetFwRule, None, CLSCTX_INPROC_SERVER)
.map_err(|e| anyhow::anyhow!("CoCreateInstance NetFwRule: {e:?}"))?;
new_rule
.SetName(&name)
.map_err(|e| anyhow::anyhow!("SetName: {e:?}"))?;
new_rule
.SetDirection(NET_FW_RULE_DIR_OUT)
.map_err(|e| anyhow::anyhow!("SetDirection: {e:?}"))?;
new_rule
.SetAction(NET_FW_ACTION_BLOCK)
.map_err(|e| anyhow::anyhow!("SetAction: {e:?}"))?;
new_rule
.SetEnabled(VARIANT_TRUE)
.map_err(|e| anyhow::anyhow!("SetEnabled: {e:?}"))?;
new_rule
.SetProfiles(NET_FW_PROFILE2_ALL.0)
.map_err(|e| anyhow::anyhow!("SetProfiles: {e:?}"))?;
new_rule
.SetProtocol(NET_FW_IP_PROTOCOL_ANY.0)
.map_err(|e| anyhow::anyhow!("SetProtocol: {e:?}"))?;
rules
.Add(&new_rule)
.map_err(|e| anyhow::anyhow!("Rules::Add: {e:?}"))?;
new_rule
}
};
rule.SetLocalUserAuthorizedList(&BSTR::from(local_user_spec.as_str()))
.map_err(|e| anyhow::anyhow!("SetLocalUserAuthorizedList: {e:?}"))?;
rule.SetEnabled(VARIANT_TRUE)
.map_err(|e| anyhow::anyhow!("SetEnabled: {e:?}"))?;
rule.SetProfiles(NET_FW_PROFILE2_ALL.0)
.map_err(|e| anyhow::anyhow!("SetProfiles: {e:?}"))?;
rule.SetAction(NET_FW_ACTION_BLOCK)
.map_err(|e| anyhow::anyhow!("SetAction: {e:?}"))?;
rule.SetDirection(NET_FW_RULE_DIR_OUT)
.map_err(|e| anyhow::anyhow!("SetDirection: {e:?}"))?;
rule.SetProtocol(NET_FW_IP_PROTOCOL_ANY.0)
.map_err(|e| anyhow::anyhow!("SetProtocol: {e:?}"))?;
log_line(
log,
&format!(
"firewall rule configured via COM with LocalUserAuthorizedList={local_user_spec}"
),
)?;
Ok(())
})()
};
unsafe {
CoUninitialize();
}
result
}
fn lock_sandbox_dir(dir: &Path, real_user: &str, log: &mut File) -> Result<()> {
std::fs::create_dir_all(dir)?;
let system_sid = resolve_sid("SYSTEM")?;
let admins_sid = resolve_sid("Administrators")?;
let real_sid = resolve_sid(real_user)?;
let entries = [
(
system_sid,
FILE_GENERIC_READ | FILE_GENERIC_WRITE | FILE_GENERIC_EXECUTE | DELETE,
),
(
admins_sid,
FILE_GENERIC_READ | FILE_GENERIC_WRITE | FILE_GENERIC_EXECUTE | DELETE,
),
(
real_sid,
FILE_GENERIC_READ | FILE_GENERIC_WRITE | FILE_GENERIC_EXECUTE,
),
];
unsafe {
let mut eas: Vec<EXPLICIT_ACCESS_W> = Vec::new();
let mut sids: Vec<*mut c_void> = Vec::new();
for (sid_bytes, mask) in entries {
let sid_str = sid_to_string(&sid_bytes)?;
let sid_w = to_wide(OsStr::new(&sid_str));
let mut psid: *mut c_void = std::ptr::null_mut();
if ConvertStringSidToSidW(sid_w.as_ptr(), &mut psid) == 0 {
return Err(anyhow::anyhow!(
"ConvertStringSidToSidW failed: {}",
GetLastError()
));
}
sids.push(psid);
eas.push(EXPLICIT_ACCESS_W {
grfAccessPermissions: mask,
grfAccessMode: GRANT_ACCESS,
grfInheritance: OBJECT_INHERIT_ACE | CONTAINER_INHERIT_ACE,
Trustee: TRUSTEE_W {
pMultipleTrustee: std::ptr::null_mut(),
MultipleTrusteeOperation: 0,
TrusteeForm: TRUSTEE_IS_SID,
TrusteeType: TRUSTEE_IS_SID,
ptstrName: psid as *mut u16,
},
});
}
let mut new_dacl: *mut ACL = std::ptr::null_mut();
let set = SetEntriesInAclW(
eas.len() as u32,
eas.as_ptr(),
std::ptr::null_mut(),
&mut new_dacl,
);
if set != 0 {
return Err(anyhow::anyhow!(
"SetEntriesInAclW sandbox dir failed: {}",
set
));
}
let path_w = to_wide(dir.as_os_str());
let res = SetNamedSecurityInfoW(
path_w.as_ptr() as *mut u16,
SE_FILE_OBJECT,
DACL_SECURITY_INFORMATION,
std::ptr::null_mut(),
std::ptr::null_mut(),
new_dacl,
std::ptr::null_mut(),
);
if res != 0 {
return Err(anyhow::anyhow!(
"SetNamedSecurityInfoW sandbox dir failed: {}",
res
));
}
if !new_dacl.is_null() {
LocalFree(new_dacl as HLOCAL);
}
for sid in sids {
if !sid.is_null() {
LocalFree(sid as HLOCAL);
}
}
}
log_line(
log,
&format!("sandbox dir ACL applied at {}", dir.display()),
)?;
Ok(())
}
fn write_secrets(
codex_home: &Path,
offline_user: &str,
offline_pwd: &str,
online_user: &str,
online_pwd: &str,
) -> Result<()> {
let sandbox_dir = sandbox_dir(codex_home);
std::fs::create_dir_all(&sandbox_dir)?;
let offline_blob = dpapi_protect(offline_pwd.as_bytes())?;
let online_blob = dpapi_protect(online_pwd.as_bytes())?;
let users = SandboxUsersFile {
version: SETUP_VERSION,
offline: SandboxUserRecord {
username: offline_user.to_string(),
password: BASE64.encode(offline_blob),
},
online: SandboxUserRecord {
username: online_user.to_string(),
password: BASE64.encode(online_blob),
},
};
let marker = SetupMarker {
version: SETUP_VERSION,
offline_username: offline_user.to_string(),
online_username: online_user.to_string(),
created_at: chrono::Utc::now().to_rfc3339(),
};
let users_path = sandbox_dir.join("sandbox_users.json");
let marker_path = sandbox_dir.join("setup_marker.json");
std::fs::write(users_path, serde_json::to_vec_pretty(&users)?)?;
std::fs::write(marker_path, serde_json::to_vec_pretty(&marker)?)?;
Ok(())
}
fn main() -> Result<()> {
let mut args = std::env::args().collect::<Vec<_>>();
if args.len() != 2 {
anyhow::bail!("expected payload argument");
}
let payload_b64 = args.remove(1);
let payload_json = BASE64
.decode(payload_b64)
.context("failed to decode payload b64")?;
let payload: Payload =
serde_json::from_slice(&payload_json).context("failed to parse payload json")?;
if payload.version != SETUP_VERSION {
anyhow::bail!("setup version mismatch");
}
let log_path = payload.codex_home.join("codex_sbx_setup.log");
std::fs::create_dir_all(&payload.codex_home)?;
let mut log = File::options()
.create(true)
.append(true)
.open(&log_path)
.context("open log")?;
log_line(&mut log, "setup binary started")?;
let offline_pwd = random_password();
let online_pwd = random_password();
log_line(
&mut log,
&format!(
"ensuring sandbox users offline={} online={}",
payload.offline_username, payload.online_username
),
)?;
ensure_local_user(&payload.offline_username, &offline_pwd, &mut log)?;
ensure_local_user(&payload.online_username, &online_pwd, &mut log)?;
let offline_sid = resolve_sid(&payload.offline_username)?;
let online_sid = resolve_sid(&payload.online_username)?;
let offline_psid = sid_bytes_to_psid(&offline_sid)?;
let online_psid = sid_bytes_to_psid(&online_sid)?;
let system_roots = collect_system_roots();
let offline_sid_str = sid_to_string(&offline_sid)?;
log_line(
&mut log,
&format!(
"resolved SIDs offline={} online={}",
offline_sid_str,
sid_to_string(&online_sid)?
),
)?;
run_netsh_firewall(&offline_sid_str, &mut log)?;
for root in &payload.read_roots {
if !root.exists() {
continue;
}
let mut skipped = false;
for trustee in ["Users", "Authenticated Users", "Everyone"] {
if trustee_has_rx(root, trustee).unwrap_or(false) {
log_line(
&mut log,
&format!("{trustee} already has RX on {}; skipping", root.display()),
)?;
skipped = true;
break;
}
}
if skipped {
continue;
}
if system_roots.contains(root) {
log_line(
&mut log,
&format!(
"system root {} missing RX for Users/AU/Everyone; skipping to avoid hang",
root.display()
),
)?;
continue;
}
log_line(
&mut log,
&format!("granting read ACE to {} for sandbox users", root.display()),
)?;
let read_mask = FILE_GENERIC_READ | FILE_GENERIC_EXECUTE;
for (label, sid_bytes) in [("offline", &offline_sid), ("online", &online_sid)] {
match try_add_inheritable_allow_with_timeout(
root,
sid_bytes,
read_mask,
&mut log,
Duration::from_millis(25),
) {
Ok(_) => {}
Err(e) => {
log_line(
&mut log,
&format!(
"grant read ACE timed out/failed on {} for {label}: {e}",
root.display()
),
)?;
// Best-effort: record the failure and try the remaining sandbox user.
continue;
}
}
}
log_line(&mut log, &format!("granted read ACE to {}", root.display()))?;
}
for root in &payload.write_roots {
if !root.exists() {
continue;
}
log_line(
&mut log,
&format!("granting write ACE to {} for sandbox users", root.display()),
)?;
unsafe {
add_allow_ace(root, offline_psid)
.with_context(|| format!("failed to grant write ACE on {}", root.display()))?;
add_allow_ace(root, online_psid)
.with_context(|| format!("failed to grant write ACE on {}", root.display()))?;
}
log_line(
&mut log,
&format!("granted write ACE to {}", root.display()),
)?;
}
lock_sandbox_dir(
&sandbox_dir(&payload.codex_home),
&payload.real_user,
&mut log,
)?;
log_line(&mut log, "sandbox dir ACL applied")?;
write_secrets(
&payload.codex_home,
&payload.offline_username,
&offline_pwd,
&payload.online_username,
&online_pwd,
)?;
log_line(
&mut log,
"sandbox users and marker written (sandbox_users.json, setup_marker.json)",
)?;
unsafe {
if !offline_psid.is_null() {
LocalFree(offline_psid as HLOCAL);
}
if !online_psid.is_null() {
LocalFree(online_psid as HLOCAL);
}
}
log_line(&mut log, "setup binary completed")?;
Ok(())
}


@@ -0,0 +1,81 @@
use anyhow::anyhow;
use anyhow::Result;
use windows_sys::Win32::Foundation::GetLastError;
use windows_sys::Win32::Foundation::HLOCAL;
use windows_sys::Win32::Foundation::LocalFree;
use windows_sys::Win32::Security::Cryptography::CryptProtectData;
use windows_sys::Win32::Security::Cryptography::CryptUnprotectData;
use windows_sys::Win32::Security::Cryptography::CRYPT_INTEGER_BLOB;
use windows_sys::Win32::Security::Cryptography::CRYPTPROTECT_UI_FORBIDDEN;
fn make_blob(data: &[u8]) -> CRYPT_INTEGER_BLOB {
CRYPT_INTEGER_BLOB {
cbData: data.len() as u32,
pbData: data.as_ptr() as *mut u8,
}
}
#[allow(clippy::unnecessary_mut_passed)]
pub fn protect(data: &[u8]) -> Result<Vec<u8>> {
let mut in_blob = make_blob(data);
let mut out_blob = CRYPT_INTEGER_BLOB {
cbData: 0,
pbData: std::ptr::null_mut(),
};
let ok = unsafe {
CryptProtectData(
&mut in_blob,
std::ptr::null(),
std::ptr::null(),
std::ptr::null_mut(),
std::ptr::null_mut(),
CRYPTPROTECT_UI_FORBIDDEN,
&mut out_blob,
)
};
if ok == 0 {
return Err(anyhow!("CryptProtectData failed: {}", unsafe { GetLastError() }));
}
let slice =
unsafe { std::slice::from_raw_parts(out_blob.pbData, out_blob.cbData as usize) }.to_vec();
unsafe {
if !out_blob.pbData.is_null() {
LocalFree(out_blob.pbData as HLOCAL);
}
}
Ok(slice)
}
#[allow(clippy::unnecessary_mut_passed)]
pub fn unprotect(blob: &[u8]) -> Result<Vec<u8>> {
let mut in_blob = make_blob(blob);
let mut out_blob = CRYPT_INTEGER_BLOB {
cbData: 0,
pbData: std::ptr::null_mut(),
};
let ok = unsafe {
CryptUnprotectData(
&mut in_blob,
std::ptr::null_mut(),
std::ptr::null(),
std::ptr::null_mut(),
std::ptr::null_mut(),
CRYPTPROTECT_UI_FORBIDDEN,
&mut out_blob,
)
};
if ok == 0 {
return Err(anyhow!(
"CryptUnprotectData failed: {}",
unsafe { GetLastError() }
));
}
let slice =
unsafe { std::slice::from_raw_parts(out_blob.pbData, out_blob.cbData as usize) }.to_vec();
unsafe {
if !out_blob.pbData.is_null() {
LocalFree(out_blob.pbData as HLOCAL);
}
}
Ok(slice)
}


@@ -0,0 +1,136 @@
use crate::dpapi;
use crate::logging::debug_log;
use crate::policy::SandboxPolicy;
use crate::setup::run_elevated_setup;
use crate::setup::sandbox_users_path;
use crate::setup::setup_marker_path;
use crate::setup::SandboxUserRecord;
use crate::setup::SandboxUsersFile;
use crate::setup::SetupMarker;
use anyhow::anyhow;
use anyhow::Context;
use anyhow::Result;
use base64::engine::general_purpose::STANDARD as BASE64_STANDARD;
use base64::Engine;
use std::collections::HashMap;
use std::fs;
use std::path::Path;
#[derive(Debug, Clone)]
struct SandboxIdentity {
username: String,
password: String,
#[allow(dead_code)]
offline: bool,
}
#[derive(Debug, Clone)]
pub struct SandboxCreds {
pub username: String,
pub password: String,
}
fn load_marker(codex_home: &Path) -> Result<Option<SetupMarker>> {
let path = setup_marker_path(codex_home);
let marker = match fs::read_to_string(&path) {
Ok(contents) => match serde_json::from_str::<SetupMarker>(&contents) {
Ok(m) => Some(m),
Err(err) => {
debug_log(
&format!("sandbox setup marker parse failed: {}", err),
Some(codex_home),
);
None
}
},
Err(err) if err.kind() == std::io::ErrorKind::NotFound => None,
Err(err) => {
debug_log(
&format!("sandbox setup marker read failed: {}", err),
Some(codex_home),
);
None
}
};
Ok(marker)
}
fn load_users(codex_home: &Path) -> Result<Option<SandboxUsersFile>> {
let path = sandbox_users_path(codex_home);
let file = match fs::read_to_string(&path) {
Ok(contents) => contents,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => return Ok(None),
Err(err) => {
debug_log(
&format!("sandbox users read failed: {}", err),
Some(codex_home),
);
return Ok(None);
}
};
match serde_json::from_str::<SandboxUsersFile>(&file) {
Ok(users) => Ok(Some(users)),
Err(err) => {
debug_log(
&format!("sandbox users parse failed: {}", err),
Some(codex_home),
);
Ok(None)
}
}
}
fn decode_password(record: &SandboxUserRecord) -> Result<String> {
let blob = BASE64_STANDARD
.decode(record.password.as_bytes())
.context("base64 decode password")?;
let decrypted = dpapi::unprotect(&blob)?;
let pwd = String::from_utf8(decrypted).context("sandbox password not utf-8")?;
Ok(pwd)
}
fn select_identity(policy: &SandboxPolicy, codex_home: &Path) -> Result<Option<SandboxIdentity>> {
let _marker = match load_marker(codex_home)? {
Some(m) if m.version_matches() => m,
_ => return Ok(None),
};
let users = match load_users(codex_home)? {
Some(u) if u.version_matches() => u,
_ => return Ok(None),
};
let offline = !policy.has_full_network_access();
let chosen = if offline {
users.offline
} else {
users.online
};
let password = decode_password(&chosen)?;
Ok(Some(SandboxIdentity {
username: chosen.username.clone(),
password,
offline,
}))
}
pub fn require_logon_sandbox_creds(
policy: &SandboxPolicy,
policy_cwd: &Path,
command_cwd: &Path,
env_map: &HashMap<String, String>,
codex_home: &Path,
) -> Result<SandboxCreds> {
let mut identity = select_identity(policy, codex_home)?;
if identity.is_none() {
run_elevated_setup(policy, policy_cwd, command_cwd, env_map, codex_home)?;
identity = select_identity(policy, codex_home)?;
}
let identity = identity.ok_or_else(|| {
anyhow!(
"Windows sandbox setup is missing or out of date; rerun the sandbox setup with elevation"
)
})?;
Ok(SandboxCreds {
username: identity.username,
password: identity.password,
})
}
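`require_logon_sandbox_creds` follows a load, provision-once, reload, fail pattern. Abstracted with illustrative names (none of these appear in the diff):

```rust
// Try to load a value; on a miss, run setup exactly once and retry, then
// surface an actionable error if the value is still absent.
fn load_or_provision<T>(
    load: impl Fn() -> Option<T>,
    provision: impl FnOnce() -> Result<(), String>,
) -> Result<T, String> {
    if let Some(v) = load() {
        return Ok(v);
    }
    provision()?;
    load().ok_or_else(|| "setup ran but the expected state is still missing".to_string())
}
```

Keeping the reload after provisioning (rather than having setup return the credentials directly) means the elevated helper and the caller agree on exactly one source of truth: the DPAPI-protected files on disk.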


@@ -4,18 +4,55 @@ macro_rules! windows_modules {
};
}
windows_modules!(acl, allow, audit, cap, env, logging, policy, token, winutil);
windows_modules!(
acl, allow, audit, cap, dpapi, env, identity, logging, policy, process, setup, token, winutil
);
#[cfg(target_os = "windows")]
pub use acl::{add_allow_ace, add_deny_write_ace, allow_null_device};
#[cfg(target_os = "windows")]
pub use allow::compute_allow_paths;
#[cfg(target_os = "windows")]
pub use audit::apply_world_writable_scan_and_denies;
#[cfg(target_os = "windows")]
pub use cap::{cap_sid_file, load_or_create_cap_sids};
#[cfg(target_os = "windows")]
pub use dpapi::protect as dpapi_protect;
#[cfg(target_os = "windows")]
pub use dpapi::unprotect as dpapi_unprotect;
#[cfg(target_os = "windows")]
pub use identity::require_logon_sandbox_creds;
#[cfg(target_os = "windows")]
pub use logging::log_note;
#[cfg(target_os = "windows")]
pub use policy::{parse_policy, SandboxPolicy};
#[cfg(target_os = "windows")]
pub use process::create_process_as_user;
#[cfg(target_os = "windows")]
pub use setup::run_elevated_setup;
#[cfg(target_os = "windows")]
pub use setup::sandbox_dir;
#[cfg(target_os = "windows")]
pub use setup::SETUP_VERSION;
#[cfg(target_os = "windows")]
pub use token::{
convert_string_sid_to_sid, create_readonly_token_with_cap_from,
create_workspace_write_token_with_cap_from, get_current_token_for_restriction,
};
#[cfg(target_os = "windows")]
pub use windows_impl::run_windows_sandbox_capture;
#[cfg(target_os = "windows")]
pub use windows_impl::CaptureResult;
#[cfg(target_os = "windows")]
pub use winutil::string_from_sid_bytes;
#[cfg(target_os = "windows")]
pub use winutil::to_wide;
#[cfg(not(target_os = "windows"))]
pub use stub::apply_world_writable_scan_and_denies;
#[cfg(not(target_os = "windows"))]
pub use stub::run_elevated_setup;
#[cfg(not(target_os = "windows"))]
pub use stub::run_windows_sandbox_capture;
#[cfg(not(target_os = "windows"))]
pub use stub::CaptureResult;
@@ -33,8 +70,10 @@ mod windows_impl {
use super::env::apply_no_network_to_env;
use super::env::ensure_non_interactive_pager;
use super::env::normalize_null_device_env;
use super::identity::require_logon_sandbox_creds;
use super::logging::debug_log;
use super::logging::log_failure;
use super::logging::log_note;
use super::logging::log_start;
use super::logging::log_success;
use super::policy::parse_policy;
@@ -43,29 +82,45 @@ mod windows_impl {
use super::winutil::format_last_error;
use super::winutil::to_wide;
use anyhow::Result;
use rand::rngs::SmallRng;
use rand::Rng;
use rand::SeedableRng;
use std::collections::HashMap;
use std::ffi::c_void;
use std::fs;
use std::io;
use std::os::windows::io::FromRawHandle;
use std::path::Path;
use std::path::PathBuf;
use std::ptr;
use windows_sys::Win32::Foundation::CloseHandle;
use windows_sys::Win32::Foundation::GetLastError;
use windows_sys::Win32::Foundation::SetHandleInformation;
use windows_sys::Win32::Foundation::HANDLE;
use windows_sys::Win32::Foundation::HANDLE_FLAG_INHERIT;
use windows_sys::Win32::System::Pipes::CreatePipe;
use windows_sys::Win32::System::Threading::CreateProcessAsUserW;
use windows_sys::Win32::System::Pipes::ConnectNamedPipe;
use windows_sys::Win32::System::Pipes::CreateNamedPipeW;
// PIPE_ACCESS_DUPLEX is 0x00000003; not exposed in windows-sys 0.52, so use the value directly.
const PIPE_ACCESS_DUPLEX: u32 = 0x0000_0003;
use windows_sys::Win32::Security::Authorization::ConvertStringSecurityDescriptorToSecurityDescriptorW;
use windows_sys::Win32::Security::LogonUserW;
use windows_sys::Win32::Security::LOGON32_LOGON_INTERACTIVE;
use windows_sys::Win32::Security::LOGON32_PROVIDER_DEFAULT;
use windows_sys::Win32::Security::{PSECURITY_DESCRIPTOR, SECURITY_ATTRIBUTES};
use windows_sys::Win32::System::Environment::CreateEnvironmentBlock;
use windows_sys::Win32::System::Environment::DestroyEnvironmentBlock;
use windows_sys::Win32::System::Pipes::PIPE_READMODE_BYTE;
use windows_sys::Win32::System::Pipes::PIPE_TYPE_BYTE;
use windows_sys::Win32::System::Pipes::PIPE_WAIT;
use windows_sys::Win32::System::Threading::CreateProcessWithLogonW;
use windows_sys::Win32::System::Threading::GetExitCodeProcess;
use windows_sys::Win32::System::Threading::WaitForSingleObject;
use windows_sys::Win32::System::Threading::CREATE_UNICODE_ENVIRONMENT;
use windows_sys::Win32::System::Threading::INFINITE;
use windows_sys::Win32::System::Threading::LOGON_WITH_PROFILE;
use windows_sys::Win32::System::Threading::PROCESS_INFORMATION;
use windows_sys::Win32::System::Threading::STARTF_USESTDHANDLES;
use windows_sys::Win32::System::Threading::STARTUPINFOW;
type PipeHandles = ((HANDLE, HANDLE), (HANDLE, HANDLE), (HANDLE, HANDLE));
use windows_sys::Win32::UI::Shell::LoadUserProfileA;
use windows_sys::Win32::UI::Shell::UnloadUserProfile;
use windows_sys::Win32::UI::Shell::PROFILEINFOA;
fn should_apply_network_block(policy: &SandboxPolicy) -> bool {
!policy.has_full_network_access()
@@ -83,6 +138,26 @@ mod windows_impl {
Ok(())
}
fn find_runner_exe() -> PathBuf {
if let Ok(exe) = std::env::current_exe() {
if let Some(dir) = exe.parent() {
let candidate = dir.join("codex-command-runner.exe");
if candidate.exists() {
return candidate;
}
let release_candidate = dir
.parent()
.map(|p| p.join("release").join("codex-command-runner.exe"));
if let Some(rel) = release_candidate {
if rel.exists() {
return rel;
}
}
}
}
PathBuf::from("codex-command-runner.exe")
}
fn make_env_block(env: &HashMap<String, String>) -> Vec<u16> {
let mut items: Vec<(String, String)> =
env.iter().map(|(k, v)| (k.clone(), v.clone())).collect();
@@ -143,32 +218,64 @@ mod windows_impl {
quoted
}
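`make_env_block` is truncated by the hunk above; the block it must produce follows the `CreateProcessW` convention: NUL-terminated `KEY=VALUE` UTF-16 entries, sorted by name, with a trailing empty string (double NUL). A portable sketch under those assumptions (`build_env_block` is an illustrative name, not the function in the diff):

```rust
use std::collections::HashMap;

// Sketch of a CreateProcessW-style environment block: "KEY=VALUE\0" entries in
// UTF-16, sorted case-insensitively by name, terminated by an extra NUL.
fn build_env_block(env: &HashMap<String, String>) -> Vec<u16> {
    let mut items: Vec<(&String, &String)> = env.iter().collect();
    // Windows keeps the block sorted case-insensitively by variable name.
    items.sort_by_key(|(k, _)| k.to_uppercase());
    let mut block = Vec::new();
    for (k, v) in items {
        block.extend(format!("{k}={v}").encode_utf16());
        block.push(0); // entry terminator
    }
    block.push(0); // block terminator (double NUL overall)
    block
}
```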
unsafe fn setup_stdio_pipes() -> io::Result<PipeHandles> {
let mut in_r: HANDLE = 0;
let mut in_w: HANDLE = 0;
let mut out_r: HANDLE = 0;
let mut out_w: HANDLE = 0;
let mut err_r: HANDLE = 0;
let mut err_w: HANDLE = 0;
if CreatePipe(&mut in_r, &mut in_w, ptr::null_mut(), 0) == 0 {
return Err(io::Error::from_raw_os_error(GetLastError() as i32));
fn pipe_name(suffix: &str) -> String {
let mut rng = SmallRng::from_entropy();
format!(r"\\.\pipe\codex-runner-{:x}-{}", rng.gen::<u128>(), suffix)
}
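`pipe_name` above draws 128 random bits per name via `SmallRng`. An assumed alternative that avoids the `rand` dependency, using the process id plus a monotonic counter (names here are illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

static PIPE_COUNTER: AtomicU64 = AtomicU64::new(0);

// Unique-per-invocation pipe name without an RNG: the pid disambiguates
// across processes, the counter disambiguates within one.
fn counter_pipe_name(suffix: &str) -> String {
    let n = PIPE_COUNTER.fetch_add(1, Ordering::Relaxed);
    format!(r"\\.\pipe\codex-runner-{}-{n:x}-{suffix}", std::process::id())
}
```

The random variant has one extra property the counter version trades away: names are unguessable, which matters here because the pipe's DACL grants Everyone access.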
fn create_named_pipe(name: &str, access: u32) -> io::Result<HANDLE> {
// Allow sandbox users to connect by granting Everyone full access on the pipe.
let sddl = to_wide("D:(A;;GA;;;WD)");
let mut sd: PSECURITY_DESCRIPTOR = ptr::null_mut();
let ok = unsafe {
ConvertStringSecurityDescriptorToSecurityDescriptorW(
sddl.as_ptr(),
1, // SDDL_REVISION_1
&mut sd,
ptr::null_mut(),
)
};
if ok == 0 {
return Err(io::Error::from_raw_os_error(unsafe {
GetLastError() as i32
}));
}
if CreatePipe(&mut out_r, &mut out_w, ptr::null_mut(), 0) == 0 {
return Err(io::Error::from_raw_os_error(GetLastError() as i32));
let mut sa = SECURITY_ATTRIBUTES {
nLength: std::mem::size_of::<SECURITY_ATTRIBUTES>() as u32,
lpSecurityDescriptor: sd,
bInheritHandle: 0,
};
let wide = to_wide(name);
let h = unsafe {
CreateNamedPipeW(
wide.as_ptr(),
access,
PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
1,
65536,
65536,
0,
&mut sa as *mut SECURITY_ATTRIBUTES,
)
};
if h == 0 || h == windows_sys::Win32::Foundation::INVALID_HANDLE_VALUE {
return Err(io::Error::from_raw_os_error(unsafe {
GetLastError() as i32
}));
}
if CreatePipe(&mut err_r, &mut err_w, ptr::null_mut(), 0) == 0 {
return Err(io::Error::from_raw_os_error(GetLastError() as i32));
Ok(h)
}
fn connect_pipe(h: HANDLE) -> io::Result<()> {
let ok = unsafe { ConnectNamedPipe(h, ptr::null_mut()) };
if ok == 0 {
let err = unsafe { GetLastError() };
const ERROR_PIPE_CONNECTED: u32 = 535;
if err != ERROR_PIPE_CONNECTED {
return Err(io::Error::from_raw_os_error(err as i32));
}
}
if SetHandleInformation(in_r, HANDLE_FLAG_INHERIT, HANDLE_FLAG_INHERIT) == 0 {
return Err(io::Error::from_raw_os_error(GetLastError() as i32));
}
if SetHandleInformation(out_w, HANDLE_FLAG_INHERIT, HANDLE_FLAG_INHERIT) == 0 {
return Err(io::Error::from_raw_os_error(GetLastError() as i32));
}
if SetHandleInformation(err_w, HANDLE_FLAG_INHERIT, HANDLE_FLAG_INHERIT) == 0 {
return Err(io::Error::from_raw_os_error(GetLastError() as i32));
}
Ok(((in_r, in_w), (out_r, out_w), (err_r, err_w)))
Ok(())
}
pub struct CaptureResult {
@@ -178,6 +285,20 @@ mod windows_impl {
pub timed_out: bool,
}
#[derive(serde::Serialize)]
struct RunnerPayload {
policy_json_or_preset: String,
sandbox_policy_cwd: PathBuf,
codex_home: PathBuf,
command: Vec<String>,
cwd: PathBuf,
env_map: HashMap<String, String>,
timeout_ms: Option<u64>,
stdin_pipe: String,
stdout_pipe: String,
stderr_pipe: String,
}
pub fn run_windows_sandbox_capture(
policy_json_or_preset: &str,
sandbox_policy_cwd: &Path,
@@ -201,99 +322,270 @@ mod windows_impl {
log_start(&command, logs_base_dir);
let cap_sid_path = cap_sid_file(codex_home);
let is_workspace_write = matches!(&policy, SandboxPolicy::WorkspaceWrite { .. });
let sandbox_creds =
require_logon_sandbox_creds(&policy, sandbox_policy_cwd, cwd, &env_map, codex_home)?;
let (h_token, psid_to_use): (HANDLE, *mut c_void) = unsafe {
match &policy {
SandboxPolicy::ReadOnly => {
let caps = load_or_create_cap_sids(codex_home);
ensure_dir(&cap_sid_path)?;
fs::write(&cap_sid_path, serde_json::to_string(&caps)?)?;
let psid = convert_string_sid_to_sid(&caps.readonly).unwrap();
super::token::create_readonly_token_with_cap(psid)?
}
SandboxPolicy::WorkspaceWrite { .. } => {
let caps = load_or_create_cap_sids(codex_home);
ensure_dir(&cap_sid_path)?;
fs::write(&cap_sid_path, serde_json::to_string(&caps)?)?;
let psid = convert_string_sid_to_sid(&caps.workspace).unwrap();
super::token::create_workspace_write_token_with_cap(psid)?
}
SandboxPolicy::DangerFullAccess => {
anyhow::bail!("DangerFullAccess is not supported for sandboxing")
}
// Build capability SID for ACL grants.
let psid_to_use = match &policy {
SandboxPolicy::ReadOnly => {
let caps = load_or_create_cap_sids(codex_home);
ensure_dir(&cap_sid_path)?;
fs::write(&cap_sid_path, serde_json::to_string(&caps)?)?;
unsafe { convert_string_sid_to_sid(&caps.readonly).unwrap() }
}
SandboxPolicy::WorkspaceWrite { .. } => {
let caps = load_or_create_cap_sids(codex_home);
ensure_dir(&cap_sid_path)?;
fs::write(&cap_sid_path, serde_json::to_string(&caps)?)?;
unsafe { convert_string_sid_to_sid(&caps.workspace).unwrap() }
}
SandboxPolicy::DangerFullAccess => {
anyhow::bail!("DangerFullAccess is not supported for sandboxing")
}
};
unsafe {
if is_workspace_write {
if let Ok(base) = super::token::get_current_token_for_restriction() {
if let Ok(bytes) = super::token::get_logon_sid_bytes(base) {
let mut tmp = bytes.clone();
let psid2 = tmp.as_mut_ptr() as *mut c_void;
allow_null_device(psid2);
}
windows_sys::Win32::Foundation::CloseHandle(base);
}
}
}
let persist_aces = is_workspace_write;
let AllowDenyPaths { allow, deny } =
compute_allow_paths(&policy, sandbox_policy_cwd, &current_dir, &env_map);
let mut guards: Vec<(PathBuf, *mut c_void)> = Vec::new();
unsafe {
for p in &allow {
if let Ok(added) = add_allow_ace(p, psid_to_use) {
if added {
if persist_aces {
if p.is_dir() {
// best-effort seeding omitted intentionally
}
} else {
guards.push((p.clone(), psid_to_use));
}
}
}
}
for p in &deny {
for p in &deny {
unsafe {
if let Ok(added) = add_deny_write_ace(p, psid_to_use) {
if added && !persist_aces {
guards.push((p.clone(), psid_to_use));
}
}
}
}
if is_workspace_write {
for p in &allow {
unsafe {
if let Ok(added) = add_allow_ace(p, psid_to_use) {
if added && !persist_aces {
guards.push((p.clone(), psid_to_use));
}
}
}
}
}
unsafe {
allow_null_device(psid_to_use);
}
let (stdin_pair, stdout_pair, stderr_pair) = unsafe { setup_stdio_pipes()? };
let ((in_r, in_w), (out_r, out_w), (err_r, err_w)) = (stdin_pair, stdout_pair, stderr_pair);
// Prepare named pipes for runner.
let stdin_name = pipe_name("stdin");
let stdout_name = pipe_name("stdout");
let stderr_name = pipe_name("stderr");
log_note(
&format!(
"preparing pipes stdin={} stdout={} stderr={}",
stdin_name, stdout_name, stderr_name
),
logs_base_dir,
);
// PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT are dwPipeMode flags (all
// zero) and are already set inside create_named_pipe; only the open mode
// belongs in this argument.
let h_stdin_pipe = create_named_pipe(&stdin_name, PIPE_ACCESS_DUPLEX)?;
let h_stdout_pipe = create_named_pipe(&stdout_name, PIPE_ACCESS_DUPLEX)?;
let h_stderr_pipe = create_named_pipe(&stderr_name, PIPE_ACCESS_DUPLEX)?;
// Build runner payload.
let payload = RunnerPayload {
policy_json_or_preset: policy_json_or_preset.to_string(),
sandbox_policy_cwd: sandbox_policy_cwd.to_path_buf(),
codex_home: codex_home.to_path_buf(),
command: command.clone(),
cwd: cwd.to_path_buf(),
env_map: env_map.clone(),
timeout_ms,
stdin_pipe: stdin_name.clone(),
stdout_pipe: stdout_name.clone(),
stderr_pipe: stderr_name.clone(),
};
let payload_json = serde_json::to_string(&payload)?;
// Launch runner as sandbox user via CreateProcessWithLogonW.
let runner_exe = find_runner_exe();
let runner_cmdline = runner_exe
.to_str()
.map(|s| s.to_string())
.unwrap_or_else(|| "codex-command-runner.exe".to_string());
log_note(
&format!(
"launching runner exe={} as user={} cwd={}",
runner_cmdline,
sandbox_creds.username,
cwd.display()
),
logs_base_dir,
);
let cmdline_str = quote_windows_arg(&runner_cmdline);
let mut cmdline: Vec<u16> = to_wide(&cmdline_str);
fn build_sandbox_env_block(
username: &str,
password: &str,
logs_base_dir: Option<&Path>,
) -> Option<Vec<u16>> {
unsafe {
let user_w = to_wide(username);
let domain_w = to_wide(".");
let password_w = to_wide(password);
let mut h_tok: HANDLE = 0;
let ok = LogonUserW(
user_w.as_ptr(),
domain_w.as_ptr(),
password_w.as_ptr(),
LOGON32_LOGON_INTERACTIVE,
LOGON32_PROVIDER_DEFAULT,
&mut h_tok,
);
if ok == 0 || h_tok == 0 {
log_note(
&format!(
"build_sandbox_env_block: LogonUserW failed for {} err={}",
username,
GetLastError()
),
logs_base_dir,
);
return None;
}
let mut profile: PROFILEINFOA = std::mem::zeroed();
profile.dwSize = std::mem::size_of::<PROFILEINFOA>() as u32;
profile.lpUserName = user_w.as_ptr() as *mut _;
let profile_loaded = LoadUserProfileA(h_tok, &mut profile as *mut _);
if profile_loaded == 0 {
log_note(
&format!(
"build_sandbox_env_block: LoadUserProfile failed err={}",
GetLastError()
),
logs_base_dir,
);
}
let mut env_block_ptr: *mut std::ffi::c_void = std::ptr::null_mut();
let env_ok = CreateEnvironmentBlock(&mut env_block_ptr, h_tok, 0);
if env_ok == 0 || env_block_ptr.is_null() {
log_note(
&format!(
"build_sandbox_env_block: CreateEnvironmentBlock failed err={}",
GetLastError()
),
logs_base_dir,
);
if profile_loaded != 0 {
let _ = UnloadUserProfile(h_tok, profile.hProfile);
}
CloseHandle(h_tok);
return None;
}
// Convert env block to map for patch/logging.
let mut map = HashMap::new();
let mut ptr_u16 = env_block_ptr as *const u16;
loop {
// find len to null
let mut len = 0;
while *ptr_u16.add(len) != 0 {
len += 1;
}
if len == 0 {
break;
}
let slice = std::slice::from_raw_parts(ptr_u16, len);
if let Ok(s) = String::from_utf16(slice) {
if let Some((k, v)) = s.split_once('=') {
map.insert(k.to_string(), v.to_string());
}
}
ptr_u16 = ptr_u16.add(len + 1);
}
// Patch critical vars to the sandbox profile.
let profile_dir = format!(r"C:\Users\{}", username);
map.insert("USERPROFILE".to_string(), profile_dir.clone());
map.insert("HOMEDRIVE".to_string(), "C:".to_string());
map.insert("HOMEPATH".to_string(), format!(r"\Users\{}", username));
map.entry("SystemRoot".to_string())
.or_insert_with(|| "C:\\Windows".to_string());
map.entry("WINDIR".to_string())
.or_insert_with(|| "C:\\Windows".to_string());
let local_app = format!(r"{}\AppData\Local", profile_dir);
let appdata = format!(r"{}\AppData\Roaming", profile_dir);
map.insert("LOCALAPPDATA".to_string(), local_app.clone());
map.insert("APPDATA".to_string(), appdata);
let temp = format!(r"{}\Temp", local_app);
map.insert("TEMP".to_string(), temp.clone());
map.insert("TMP".to_string(), temp);
// Log env
let mut vars: Vec<String> =
map.iter().map(|(k, v)| format!("{}={}", k, v)).collect();
vars.sort();
log_note(
&format!(
"build_sandbox_env_block for {}:\n{}",
username,
vars.join("\n")
),
logs_base_dir,
);
// Rebuild env block
let env_block = make_env_block(&map);
DestroyEnvironmentBlock(env_block_ptr);
if profile_loaded != 0 {
let _ = UnloadUserProfile(h_tok, profile.hProfile);
}
CloseHandle(h_tok);
Some(env_block)
}
}
let env_block = build_sandbox_env_block(
&sandbox_creds.username,
&sandbox_creds.password,
logs_base_dir,
);
let env_log = if env_block.is_some() {
"runner env_block: custom sandbox profile env"
} else {
"runner env_block: inherit (sandbox user profile defaults)"
};
log_note(env_log, logs_base_dir);
let desktop = to_wide("Winsta0\\Default");
let mut si: STARTUPINFOW = unsafe { std::mem::zeroed() };
si.cb = std::mem::size_of::<STARTUPINFOW>() as u32;
si.dwFlags |= STARTF_USESTDHANDLES;
si.hStdInput = in_r;
si.hStdOutput = out_w;
si.hStdError = err_w;
let mut pi: PROCESS_INFORMATION = unsafe { std::mem::zeroed() };
let cmdline_str = command
.iter()
.map(|a| quote_windows_arg(a))
.collect::<Vec<_>>()
.join(" ");
let mut cmdline: Vec<u16> = to_wide(&cmdline_str);
let env_block = make_env_block(&env_map);
let desktop = to_wide("Winsta0\\Default");
si.lpDesktop = desktop.as_ptr() as *mut u16;
let mut pi: PROCESS_INFORMATION = unsafe { std::mem::zeroed() };
let user_w = to_wide(&sandbox_creds.username);
let domain_w = to_wide(".");
let password_w = to_wide(&sandbox_creds.password);
let spawn_res = unsafe {
-CreateProcessAsUserW(
-h_token,
+CreateProcessWithLogonW(
+user_w.as_ptr(),
+domain_w.as_ptr(),
+password_w.as_ptr(),
+LOGON_WITH_PROFILE,
ptr::null(),
cmdline.as_mut_ptr(),
-ptr::null_mut(),
-ptr::null_mut(),
-1,
CREATE_UNICODE_ENVIRONMENT,
-env_block.as_ptr() as *mut c_void,
+env_block
+.as_ref()
+.map(|b| b.as_ptr() as *const c_void)
+.unwrap_or(ptr::null()),
to_wide(cwd).as_ptr(),
&si,
&mut pi,
@@ -302,35 +594,33 @@ mod windows_impl {
if spawn_res == 0 {
let err = unsafe { GetLastError() } as i32;
let dbg = format!(
-"CreateProcessAsUserW failed: {} ({}) | cwd={} | cmd={} | env_u16_len={} | si_flags={}",
+"CreateProcessWithLogonW failed: {} ({}) | cwd={} | cmd={} | env=inherit | si_flags={}",
err,
format_last_error(err),
cwd.display(),
cmdline_str,
-env_block.len(),
si.dwFlags,
);
-debug_log(&dbg, logs_base_dir);
-unsafe {
-CloseHandle(in_r);
-CloseHandle(in_w);
-CloseHandle(out_r);
-CloseHandle(out_w);
-CloseHandle(err_r);
-CloseHandle(err_w);
-CloseHandle(h_token);
-}
-return Err(anyhow::anyhow!("CreateProcessAsUserW failed: {}", err));
+log_note(&dbg, logs_base_dir);
+return Err(anyhow::anyhow!("CreateProcessWithLogonW failed: {}", err));
}
log_note("runner process launched", logs_base_dir);
// Connect pipes and send payload.
connect_pipe(h_stdin_pipe)?;
connect_pipe(h_stdout_pipe)?;
connect_pipe(h_stderr_pipe)?;
{
use std::io::Write;
let mut writer = unsafe { std::fs::File::from_raw_handle(h_stdin_pipe as _) };
writer.write_all(payload_json.as_bytes())?;
}
unsafe {
CloseHandle(in_r);
// Close the parent's stdin write end so the child sees EOF immediately.
CloseHandle(in_w);
CloseHandle(out_w);
CloseHandle(err_w);
CloseHandle(h_stdin_pipe);
}
// Read stdout/stderr.
let (tx_out, rx_out) = std::sync::mpsc::channel::<Vec<u8>>();
let (tx_err, rx_err) = std::sync::mpsc::channel::<Vec<u8>>();
let t_out = std::thread::spawn(move || {
@@ -340,7 +630,7 @@ mod windows_impl {
let mut read_bytes: u32 = 0;
let ok = unsafe {
windows_sys::Win32::Storage::FileSystem::ReadFile(
-out_r,
+h_stdout_pipe,
tmp.as_mut_ptr(),
tmp.len() as u32,
&mut read_bytes,
@@ -361,7 +651,7 @@ mod windows_impl {
let mut read_bytes: u32 = 0;
let ok = unsafe {
windows_sys::Win32::Storage::FileSystem::ReadFile(
-err_r,
+h_stderr_pipe,
tmp.as_mut_ptr(),
tmp.len() as u32,
&mut read_bytes,
@@ -389,6 +679,13 @@ mod windows_impl {
windows_sys::Win32::System::Threading::TerminateProcess(pi.hProcess, 1);
}
}
log_note(
&format!(
"runner exited timed_out={} code={}",
timed_out, exit_code_u32
),
logs_base_dir,
);
unsafe {
if pi.hThread != 0 {
@@ -397,7 +694,8 @@ mod windows_impl {
if pi.hProcess != 0 {
CloseHandle(pi.hProcess);
}
-CloseHandle(h_token);
+CloseHandle(h_stdout_pipe);
+CloseHandle(h_stderr_pipe);
}
let _ = t_out.join();
let _ = t_err.join();
@@ -499,4 +797,14 @@ mod stub {
) -> Result<()> {
bail!("Windows sandbox is only available on Windows")
}
pub fn run_elevated_setup(
_policy: &SandboxPolicy,
_policy_cwd: &Path,
_command_cwd: &Path,
_env_map: &HashMap<String, String>,
_codex_home: &Path,
) -> Result<()> {
bail!("Windows sandbox is only available on Windows")
}
}


@@ -21,11 +21,12 @@ use windows_sys::Win32::System::JobObjects::JobObjectExtendedLimitInformation;
use windows_sys::Win32::System::JobObjects::SetInformationJobObject;
use windows_sys::Win32::System::JobObjects::JOBOBJECT_EXTENDED_LIMIT_INFORMATION;
use windows_sys::Win32::System::JobObjects::JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
-use windows_sys::Win32::System::Threading::CreateProcessAsUserW;
use windows_sys::Win32::System::Threading::GetExitCodeProcess;
+use windows_sys::Win32::System::Threading::CreateProcessWithTokenW;
use windows_sys::Win32::System::Threading::WaitForSingleObject;
use windows_sys::Win32::System::Threading::CREATE_UNICODE_ENVIRONMENT;
use windows_sys::Win32::System::Threading::INFINITE;
use windows_sys::Win32::System::Threading::LOGON_WITH_PROFILE;
use windows_sys::Win32::System::Threading::PROCESS_INFORMATION;
use windows_sys::Win32::System::Threading::STARTF_USESTDHANDLES;
use windows_sys::Win32::System::Threading::STARTUPINFOW;
@@ -79,6 +80,7 @@ fn quote_arg(a: &str) -> String {
out.push('"');
out
}
#[allow(dead_code)]
unsafe fn ensure_inheritable_stdio(si: &mut STARTUPINFOW) -> Result<()> {
for kind in [STD_INPUT_HANDLE, STD_OUTPUT_HANDLE, STD_ERROR_HANDLE] {
let h = GetStdHandle(kind);
@@ -96,12 +98,16 @@ unsafe fn ensure_inheritable_stdio(si: &mut STARTUPINFOW) -> Result<()> {
Ok(())
}
/// # Safety
/// Caller must provide a valid primary token handle (`h_token`) with appropriate access,
/// and the `argv`, `cwd`, and `env_map` must remain valid for the duration of the call.
pub unsafe fn create_process_as_user(
h_token: HANDLE,
argv: &[String],
cwd: &Path,
env_map: &HashMap<String, String>,
logs_base_dir: Option<&Path>,
stdio: Option<(HANDLE, HANDLE, HANDLE)>,
) -> Result<(PROCESS_INFORMATION, STARTUPINFOW)> {
let cmdline_str = argv
.iter()
@@ -117,17 +123,22 @@ pub unsafe fn create_process_as_user(
// Point explicitly at the interactive desktop.
let desktop = to_wide("Winsta0\\Default");
si.lpDesktop = desktop.as_ptr() as *mut u16;
-ensure_inheritable_stdio(&mut si)?;
+if let Some((stdin_h, stdout_h, stderr_h)) = stdio {
+si.dwFlags |= STARTF_USESTDHANDLES;
+si.hStdInput = stdin_h;
+si.hStdOutput = stdout_h;
+si.hStdError = stderr_h;
+} else {
+ensure_inheritable_stdio(&mut si)?;
+}
let mut pi: PROCESS_INFORMATION = std::mem::zeroed();
-let ok = CreateProcessAsUserW(
+let ok = CreateProcessWithTokenW(
h_token,
LOGON_WITH_PROFILE,
std::ptr::null(),
cmdline.as_mut_ptr(),
std::ptr::null_mut(),
std::ptr::null_mut(),
1,
CREATE_UNICODE_ENVIRONMENT,
-env_block.as_ptr() as *mut c_void,
+env_block.as_ptr() as *const c_void,
to_wide(cwd).as_ptr(),
&si,
&mut pi,
@@ -135,7 +146,7 @@ pub unsafe fn create_process_as_user(
if ok == 0 {
let err = GetLastError() as i32;
let msg = format!(
-"CreateProcessAsUserW failed: {} ({}) | cwd={} | cmd={} | env_u16_len={} | si_flags={}",
+"CreateProcessWithTokenW failed: {} ({}) | cwd={} | cmd={} | env_u16_len={} | si_flags={}",
err,
format_last_error(err),
cwd.display(),
@@ -144,11 +155,14 @@ pub unsafe fn create_process_as_user(
si.dwFlags,
);
logging::debug_log(&msg, logs_base_dir);
-return Err(anyhow!("CreateProcessAsUserW failed: {}", err));
+return Err(anyhow!("CreateProcessWithTokenW failed: {}", err));
}
Ok((pi, si))
}
/// # Safety
/// Caller must provide valid process information handles.
#[allow(dead_code)]
pub unsafe fn wait_process_and_exitcode(pi: &PROCESS_INFORMATION) -> Result<i32> {
let res = WaitForSingleObject(pi.hProcess, INFINITE);
if res != 0 {
@@ -161,6 +175,9 @@ pub unsafe fn wait_process_and_exitcode(pi: &PROCESS_INFORMATION) -> Result<i32>
Ok(code as i32)
}
/// # Safety
/// Caller must close the returned job handle.
#[allow(dead_code)]
pub unsafe fn create_job_kill_on_close() -> Result<HANDLE> {
let h = CreateJobObjectW(std::ptr::null_mut(), std::ptr::null());
if h == 0 {
@@ -183,6 +200,9 @@ pub unsafe fn create_job_kill_on_close() -> Result<HANDLE> {
Ok(h)
}
/// # Safety
/// Caller must pass valid handles for a job object and a process.
#[allow(dead_code)]
pub unsafe fn assign_to_job(h_job: HANDLE, h_process: HANDLE) -> Result<()> {
if AssignProcessToJobObject(h_job, h_process) == 0 {
return Err(anyhow!(


@@ -0,0 +1,298 @@
use serde::Deserialize;
use serde::Serialize;
use std::collections::HashMap;
use std::ffi::c_void;
use std::path::Path;
use std::path::PathBuf;
use std::process::Command;
use anyhow::anyhow;
use anyhow::Context;
use anyhow::Result;
use base64::engine::general_purpose::STANDARD as BASE64_STANDARD;
use base64::Engine;
use crate::allow::compute_allow_paths;
use crate::allow::AllowDenyPaths;
use crate::policy::SandboxPolicy;
use windows_sys::Win32::Foundation::CloseHandle;
use windows_sys::Win32::Foundation::GetLastError;
use windows_sys::Win32::Security::AllocateAndInitializeSid;
use windows_sys::Win32::Security::CheckTokenMembership;
use windows_sys::Win32::Security::FreeSid;
use windows_sys::Win32::Security::SECURITY_NT_AUTHORITY;
pub const SETUP_VERSION: u32 = 1;
pub const OFFLINE_USERNAME: &str = "CodexSandboxOffline";
pub const ONLINE_USERNAME: &str = "CodexSandboxOnline";
const SECURITY_BUILTIN_DOMAIN_RID: u32 = 0x0000_0020;
const DOMAIN_ALIAS_RID_ADMINS: u32 = 0x0000_0220;
pub fn sandbox_dir(codex_home: &Path) -> PathBuf {
codex_home.join("sandbox")
}
pub fn setup_marker_path(codex_home: &Path) -> PathBuf {
sandbox_dir(codex_home).join("setup_marker.json")
}
pub fn sandbox_users_path(codex_home: &Path) -> PathBuf {
sandbox_dir(codex_home).join("sandbox_users.json")
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct SetupMarker {
pub version: u32,
pub offline_username: String,
pub online_username: String,
#[serde(default)]
pub created_at: Option<String>,
}
impl SetupMarker {
pub fn version_matches(&self) -> bool {
self.version == SETUP_VERSION
}
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct SandboxUserRecord {
pub username: String,
/// DPAPI-encrypted password blob, base64 encoded.
pub password: String,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct SandboxUsersFile {
pub version: u32,
pub offline: SandboxUserRecord,
pub online: SandboxUserRecord,
}
impl SandboxUsersFile {
pub fn version_matches(&self) -> bool {
self.version == SETUP_VERSION
}
}
fn is_elevated() -> Result<bool> {
unsafe {
let mut administrators_group: *mut c_void = std::ptr::null_mut();
let ok = AllocateAndInitializeSid(
&SECURITY_NT_AUTHORITY,
2,
SECURITY_BUILTIN_DOMAIN_RID,
DOMAIN_ALIAS_RID_ADMINS,
0,
0,
0,
0,
0,
0,
&mut administrators_group,
);
if ok == 0 {
return Err(anyhow!(
"AllocateAndInitializeSid failed: {}",
GetLastError()
));
}
let mut is_member = 0i32;
let check = CheckTokenMembership(0, administrators_group, &mut is_member as *mut _);
FreeSid(administrators_group as *mut _);
if check == 0 {
return Err(anyhow!("CheckTokenMembership failed: {}", GetLastError()));
}
Ok(is_member != 0)
}
}
fn canonical_existing(paths: &[PathBuf]) -> Vec<PathBuf> {
paths
.iter()
.filter_map(|p| {
if !p.exists() {
return None;
}
Some(dunce::canonicalize(p).unwrap_or_else(|_| p.clone()))
})
.collect()
}
fn gather_read_roots(
command_cwd: &Path,
policy: &SandboxPolicy,
policy_cwd: &Path,
) -> Vec<PathBuf> {
let mut roots: Vec<PathBuf> = Vec::new();
for p in [
PathBuf::from(r"C:\Windows"),
PathBuf::from(r"C:\Program Files"),
PathBuf::from(r"C:\Program Files (x86)"),
PathBuf::from(r"C:\ProgramData"),
] {
roots.push(p);
}
if let Ok(up) = std::env::var("USERPROFILE") {
roots.push(PathBuf::from(up));
}
roots.push(command_cwd.to_path_buf());
if let SandboxPolicy::WorkspaceWrite { writable_roots, .. } = policy {
for root in writable_roots {
let candidate = if root.is_absolute() {
root.clone()
} else {
policy_cwd.join(root)
};
roots.push(candidate);
}
}
canonical_existing(&roots)
}
fn gather_write_roots(
policy: &SandboxPolicy,
policy_cwd: &Path,
command_cwd: &Path,
env_map: &HashMap<String, String>,
) -> Vec<PathBuf> {
let AllowDenyPaths { allow, .. } =
compute_allow_paths(policy, policy_cwd, command_cwd, env_map);
canonical_existing(&allow.into_iter().collect::<Vec<_>>())
}
#[derive(Serialize)]
struct ElevationPayload {
version: u32,
offline_username: String,
online_username: String,
codex_home: PathBuf,
read_roots: Vec<PathBuf>,
write_roots: Vec<PathBuf>,
real_user: String,
}
fn quote_arg(arg: &str) -> String {
let needs = arg.is_empty()
|| arg
.chars()
.any(|c| matches!(c, ' ' | '\t' | '\n' | '\r' | '"'));
if !needs {
return arg.to_string();
}
let mut out = String::from("\"");
let mut bs = 0;
for ch in arg.chars() {
match ch {
'\\' => {
bs += 1;
}
'"' => {
out.push_str(&"\\".repeat(bs * 2 + 1));
out.push('"');
bs = 0;
}
_ => {
if bs > 0 {
out.push_str(&"\\".repeat(bs));
bs = 0;
}
out.push(ch);
}
}
}
if bs > 0 {
out.push_str(&"\\".repeat(bs * 2));
}
out.push('"');
out
}
fn find_setup_exe() -> PathBuf {
if let Ok(exe) = std::env::current_exe() {
if let Some(dir) = exe.parent() {
let candidate = dir.join("codex-windows-sandbox-setup.exe");
if candidate.exists() {
return candidate;
}
}
}
PathBuf::from("codex-windows-sandbox-setup.exe")
}
fn run_setup_exe(payload: &ElevationPayload, needs_elevation: bool) -> Result<()> {
use windows_sys::Win32::System::Threading::GetExitCodeProcess;
use windows_sys::Win32::System::Threading::WaitForSingleObject;
use windows_sys::Win32::System::Threading::INFINITE;
use windows_sys::Win32::UI::Shell::ShellExecuteExW;
use windows_sys::Win32::UI::Shell::SEE_MASK_NOCLOSEPROCESS;
use windows_sys::Win32::UI::Shell::SHELLEXECUTEINFOW;
let exe = find_setup_exe();
let payload_json = serde_json::to_string(payload)?;
let payload_b64 = BASE64_STANDARD.encode(payload_json.as_bytes());
if !needs_elevation {
let status = Command::new(&exe)
.arg(&payload_b64)
.status()
.context("failed to launch setup helper")?;
if !status.success() {
return Err(anyhow!(
"setup helper exited with status {:?}",
status.code()
));
}
return Ok(());
}
let exe_w = crate::winutil::to_wide(&exe);
let params = quote_arg(&payload_b64);
let params_w = crate::winutil::to_wide(params);
let verb_w = crate::winutil::to_wide("runas");
let mut sei: SHELLEXECUTEINFOW = unsafe { std::mem::zeroed() };
sei.cbSize = std::mem::size_of::<SHELLEXECUTEINFOW>() as u32;
sei.fMask = SEE_MASK_NOCLOSEPROCESS;
sei.lpVerb = verb_w.as_ptr();
sei.lpFile = exe_w.as_ptr();
sei.lpParameters = params_w.as_ptr();
// Default show window.
sei.nShow = 1;
let ok = unsafe { ShellExecuteExW(&mut sei) };
if ok == 0 || sei.hProcess == 0 {
return Err(anyhow!(
"ShellExecuteExW failed to launch setup helper: {}",
unsafe { GetLastError() }
));
}
unsafe {
WaitForSingleObject(sei.hProcess, INFINITE);
let mut code: u32 = 1;
GetExitCodeProcess(sei.hProcess, &mut code);
CloseHandle(sei.hProcess);
if code != 0 {
return Err(anyhow!("setup helper exited with status {}", code));
}
}
Ok(())
}
pub fn run_elevated_setup(
policy: &SandboxPolicy,
policy_cwd: &Path,
command_cwd: &Path,
env_map: &HashMap<String, String>,
codex_home: &Path,
) -> Result<()> {
let payload = ElevationPayload {
version: SETUP_VERSION,
offline_username: OFFLINE_USERNAME.to_string(),
online_username: ONLINE_USERNAME.to_string(),
codex_home: codex_home.to_path_buf(),
read_roots: gather_read_roots(command_cwd, policy, policy_cwd),
write_roots: gather_write_roots(policy, policy_cwd, command_cwd, env_map),
real_user: std::env::var("USERNAME").unwrap_or_else(|_| "Administrators".to_string()),
};
let needs_elevation = !is_elevated()?;
run_setup_exe(&payload, needs_elevation)
}


@@ -24,6 +24,7 @@ use windows_sys::Win32::Security::TOKEN_DUPLICATE;
use windows_sys::Win32::Security::TOKEN_PRIVILEGES;
use windows_sys::Win32::Security::TOKEN_QUERY;
use windows_sys::Win32::System::Threading::GetCurrentProcess;
use windows_sys::Win32::System::Threading::OpenProcessToken;
const DISABLE_MAX_PRIVILEGE: u32 = 0x01;
const LUA_TOKEN: u32 = 0x04;
@@ -52,6 +53,8 @@ pub unsafe fn world_sid() -> Result<Vec<u8>> {
Ok(buf)
}
/// # Safety
/// Caller is responsible for freeing the returned SID with `LocalFree`.
pub unsafe fn convert_string_sid_to_sid(s: &str) -> Option<*mut c_void> {
#[link(name = "advapi32")]
extern "system" {
@@ -66,6 +69,9 @@ pub unsafe fn convert_string_sid_to_sid(s: &str) -> Option<*mut c_void> {
}
}
/// # Safety
/// Caller must close the returned token handle.
#[allow(dead_code)]
pub unsafe fn get_current_token_for_restriction() -> Result<HANDLE> {
let desired = TOKEN_DUPLICATE
| TOKEN_QUERY
@@ -197,13 +203,55 @@ unsafe fn enable_single_privilege(h_token: HANDLE, name: &str) -> Result<()> {
Ok(())
}
// removed unused create_write_restricted_token_strict
/// # Safety
/// Opens the current process token and adjusts privileges; caller should ensure this is needed in the current context.
#[allow(dead_code)]
pub unsafe fn enable_privilege_on_current(name: &str) -> Result<()> {
let mut h: HANDLE = 0;
let ok = OpenProcessToken(
GetCurrentProcess(),
TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY,
&mut h,
);
if ok == 0 {
return Err(anyhow!("OpenProcessToken failed: {}", GetLastError()));
}
let res = enable_single_privilege(h, name);
CloseHandle(h);
res
}
/// # Safety
/// Caller must close the returned token handle.
#[allow(dead_code)]
pub unsafe fn create_workspace_write_token_with_cap(
psid_capability: *mut c_void,
) -> Result<(HANDLE, *mut c_void)> {
let base = get_current_token_for_restriction()?;
let mut logon_sid_bytes = get_logon_sid_bytes(base)?;
let res = create_workspace_write_token_with_cap_from(base, psid_capability);
CloseHandle(base);
res
}
/// # Safety
/// Caller must close the returned token handle.
#[allow(dead_code)]
pub unsafe fn create_readonly_token_with_cap(
psid_capability: *mut c_void,
) -> Result<(HANDLE, *mut c_void)> {
let base = get_current_token_for_restriction()?;
let res = create_readonly_token_with_cap_from(base, psid_capability);
CloseHandle(base);
res
}
/// # Safety
/// Caller must close the returned token handle; base_token must be a valid primary token.
pub unsafe fn create_workspace_write_token_with_cap_from(
base_token: HANDLE,
psid_capability: *mut c_void,
) -> Result<(HANDLE, *mut c_void)> {
let mut logon_sid_bytes = get_logon_sid_bytes(base_token)?;
let psid_logon = logon_sid_bytes.as_mut_ptr() as *mut c_void;
let mut everyone = world_sid()?;
let psid_everyone = everyone.as_mut_ptr() as *mut c_void;
@@ -218,7 +266,7 @@ pub unsafe fn create_workspace_write_token_with_cap(
let mut new_token: HANDLE = 0;
let flags = DISABLE_MAX_PRIVILEGE | LUA_TOKEN | WRITE_RESTRICTED;
let ok = CreateRestrictedToken(
-base,
+base_token,
flags,
0,
std::ptr::null(),
@@ -235,11 +283,13 @@ pub unsafe fn create_workspace_write_token_with_cap(
Ok((new_token, psid_capability))
}
-pub unsafe fn create_readonly_token_with_cap(
+/// # Safety
+/// Caller must close the returned token handle; base_token must be a valid primary token.
+pub unsafe fn create_readonly_token_with_cap_from(
+base_token: HANDLE,
psid_capability: *mut c_void,
) -> Result<(HANDLE, *mut c_void)> {
-let base = get_current_token_for_restriction()?;
-let mut logon_sid_bytes = get_logon_sid_bytes(base)?;
+let mut logon_sid_bytes = get_logon_sid_bytes(base_token)?;
let psid_logon = logon_sid_bytes.as_mut_ptr() as *mut c_void;
let mut everyone = world_sid()?;
let psid_everyone = everyone.as_mut_ptr() as *mut c_void;
@@ -254,7 +304,7 @@ pub unsafe fn create_readonly_token_with_cap(
let mut new_token: HANDLE = 0;
let flags = DISABLE_MAX_PRIVILEGE | LUA_TOKEN | WRITE_RESTRICTED;
let ok = CreateRestrictedToken(
-base,
+base_token,
flags,
0,
std::ptr::null(),


@@ -6,6 +6,7 @@ use windows_sys::Win32::System::Diagnostics::Debug::FormatMessageW;
use windows_sys::Win32::System::Diagnostics::Debug::FORMAT_MESSAGE_ALLOCATE_BUFFER;
use windows_sys::Win32::System::Diagnostics::Debug::FORMAT_MESSAGE_FROM_SYSTEM;
use windows_sys::Win32::System::Diagnostics::Debug::FORMAT_MESSAGE_IGNORE_INSERTS;
use windows_sys::Win32::Security::Authorization::ConvertSidToStringSidW;
pub fn to_wide<S: AsRef<OsStr>>(s: S) -> Vec<u16> {
let mut v: Vec<u16> = s.as_ref().encode_wide().collect();
@@ -41,3 +42,21 @@ pub fn format_last_error(err: i32) -> String {
s
}
}
pub fn string_from_sid_bytes(sid: &[u8]) -> Result<String, String> {
unsafe {
let mut str_ptr: *mut u16 = std::ptr::null_mut();
let ok = ConvertSidToStringSidW(sid.as_ptr() as *mut std::ffi::c_void, &mut str_ptr);
if ok == 0 || str_ptr.is_null() {
return Err(format!("ConvertSidToStringSidW failed: {}", std::io::Error::last_os_error()));
}
let mut len = 0;
while *str_ptr.add(len) != 0 {
len += 1;
}
let slice = std::slice::from_raw_parts(str_ptr, len);
let out = String::from_utf16_lossy(slice);
let _ = LocalFree(str_ptr as HLOCAL);
Ok(out)
}
}


@@ -195,6 +195,7 @@ If the selected model is known to support reasoning (for example: `o3`, `o4-mini
- `"low"`
- `"medium"` (default)
- `"high"`
- `"xhigh"` (available only on `gpt-5.1-codex-max`)
Note: to minimize reasoning, choose `"minimal"`.
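For example, to opt into the highest effort in `config.toml` (assuming the selected model is `gpt-5.1-codex-max`):

```
model = "gpt-5.1-codex-max"
model_reasoning_effort = "xhigh"
```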


@@ -37,7 +37,7 @@ model_provider = "openai"
# Reasoning & Verbosity (Responses API capable models)
################################################################################
-# Reasoning effort: minimal | low | medium | high (default: medium)
+# Reasoning effort: minimal | low | medium | high | xhigh (default: medium; xhigh only on gpt-5.1-codex-max)
model_reasoning_effort = "medium"
# Reasoning summary: auto | concise | detailed | none (default: auto)


@@ -8,7 +8,7 @@ In 2021, OpenAI released Codex, an AI system designed to generate code from natu
### Which models are supported?
-We recommend using Codex with GPT-5.1 Codex, our best coding model. The default reasoning level is medium, and you can upgrade to high for complex tasks with the `/model` command.
+We recommend using Codex with GPT-5.1 Codex Max, our best coding model. The default reasoning level is medium, and you can upgrade to high or xhigh (Codex Max only) for complex tasks with the `/model` command.
You can also use older models with API-based auth by launching `codex` with the `--model` flag.

docs/skills.md Normal file

@@ -0,0 +1,62 @@
# Skills (experimental)
> **Warning:** This is an experimental, unstable feature. If you depend on it, expect breaking changes over the coming weeks; there is currently no guarantee that it works well. Use at your own risk!
Codex can automatically discover reusable "skills" you keep on disk. A skill is a small bundle with a name, a short description (what it does and when to use it), and an optional body of instructions you can open when needed. Codex injects only the name, description, and file path into the runtime context; the body stays on disk.
## Where skills live
- Location (v1): `~/.codex/skills/**/SKILL.md` (recursive). Hidden entries and symlinks are skipped. Only files named exactly `SKILL.md` count.
- Sorting: rendered by name, then path for stability.
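For instance, a skills directory containing two skills (names illustrative) would be laid out as:

```
~/.codex/skills/
├── pdf-processing/
│   └── SKILL.md
└── release-notes/
    └── SKILL.md
```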
## File format
- YAML frontmatter + body.
- Required:
- `name` (non-empty, ≤100 chars, sanitized to one line)
- `description` (non-empty, ≤500 chars, sanitized to one line)
- Extra keys are ignored. The body can contain any Markdown; it is not injected into context.
## Loading and rendering
- Loaded once at startup.
- If valid skills exist, Codex appends a runtime-only `## Skills` section after `AGENTS.md`, one bullet per skill: `- <name>: <description> (file: /absolute/path/to/SKILL.md)`.
- If no valid skills exist, the section is omitted. On-disk files are never modified.
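As a sketch, a single valid skill would be rendered into the runtime context roughly as follows (path illustrative):

```
## Skills
- pdf-processing: Extract text and tables from PDFs; use when PDFs, forms, or document extraction are mentioned. (file: /home/you/.codex/skills/pdf-processing/SKILL.md)
```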
## Validation and errors
- Invalid skills (missing or invalid YAML frontmatter, empty or over-length fields) trigger a blocking, dismissible startup modal in the TUI that lists each path and its error; the errors are also logged.
- You can dismiss the modal to continue (invalid skills are ignored) or exit. Fix the `SKILL.md` files and restart to clear it.
## Create a skill
1. Create `~/.codex/skills/<skill-name>/`.
2. Add `SKILL.md`:
```
---
name: your-skill-name
description: what it does and when to use it (<=500 chars)
---
# Optional body
Add instructions, references, examples, or scripts (kept on disk).
```
3. Keep `name`/`description` within the limits; avoid newlines in those fields.
4. Restart Codex to load the new skill.
## Example
```
mkdir -p ~/.codex/skills/pdf-processing
cat <<'SKILL_EXAMPLE' > ~/.codex/skills/pdf-processing/SKILL.md
---
name: pdf-processing
description: Extract text and tables from PDFs; use when PDFs, forms, or document extraction are mentioned.
---
# PDF Processing
- Use pdfplumber to extract text.
- For form filling, see FORMS.md.
SKILL_EXAMPLE
```