Compare commits


1 Commit

Author SHA1 Message Date
alexsong-oai
81c4928825

## New Features
- Added a built-in `request_permissions` tool so running turns can request additional permissions at runtime, with new TUI rendering for those approval calls. (#13092, #14004)
- Expanded plugin workflows with curated marketplace discovery, richer `plugin/list` metadata, install-time auth checks, and a `plugin/uninstall` endpoint. (#13712, #13540, #13685, #14111)
- Upgraded app-server command execution with streaming stdin/stdout/stderr plus TTY/PTY support, and wired `exec` to the new in-process app server path (a generic PTY sketch follows this list). (#13640, #14005)
- Web search settings now support full tool configuration (for example filters and location), not just on/off. (#13675)
- Added the new permission-profile config language and split filesystem/network sandbox policy plumbing for more precise policy control. (#13434, #13439, #13440, #13448, #13449, #13453)
- Image generation now saves output files into the current working directory. (#13607)
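
As referenced above, here is a generic, POSIX-only illustration of PTY-backed streaming execution. This is a minimal sketch in Python, not the actual app-server `command/exec` API (whose transport and types live in the Rust crates):

```python
import os
import pty
import subprocess

def run_streaming(cmd: list[str]) -> int:
    """Run a command on a pseudo-terminal and stream its output as it arrives."""
    parent_fd, child_fd = pty.openpty()
    proc = subprocess.Popen(cmd, stdin=child_fd, stdout=child_fd, stderr=child_fd)
    os.close(child_fd)  # the child process owns its end now
    try:
        while True:
            try:
                chunk = os.read(parent_fd, 4096)
            except OSError:  # EIO once the child side closes (Linux)
                break
            if not chunk:
                break
            print(chunk.decode(errors="replace"), end="", flush=True)
    finally:
        os.close(parent_fd)
    return proc.wait()

run_streaming(["echo", "hello from a pty"])
```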

## Bug Fixes
- Fixed auth error handling for cloud requirements fetch so 401s trigger the normal auth-recovery messaging instead of a generic workspace-config failure. (#14049)
- Fixed trust bootstrap to avoid running `git` commands before project trust is established. (#13804)
- Fixed Windows execution edge cases, including incorrect PTY `TerminateProcess` success handling and stricter sandbox startup cwd validation. (#13989, #13833, #13742)
- Fixed plugin startup behavior so curated plugins are loaded in TUI sessions as expected. (#14050)
- Hardened network proxy policy parsing by rejecting global wildcard (`*`) domains while preserving scoped wildcard support, as sketched after this list. (#13789)
- Fixed approval payload compatibility for macOS automation permissions by accepting both supported input shapes. (#13683)
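
A minimal sketch of the wildcard rule from #13789 (hypothetical function name and Python rather than the actual Rust implementation):

```python
def validate_proxy_domain(pattern: str) -> str:
    """Reject a global '*' while allowing scoped wildcards like '*.example.com'."""
    pattern = pattern.strip().lower()
    if not pattern:
        raise ValueError("empty proxy domain pattern")
    if pattern in ("*", "*."):
        raise ValueError("global wildcard '*' is not allowed")
    if pattern.startswith("*."):
        host = pattern[2:]
        if not host or "*" in host:
            raise ValueError(f"invalid scoped wildcard: {pattern!r}")
        return pattern
    if "*" in pattern:
        raise ValueError(f"wildcards may only appear as a leading '*.': {pattern!r}")
    return pattern
```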

## Documentation
- Clarified `js_repl` guidance for persistent bindings and redeclaration recovery to reduce avoidable REPL errors. (#13803)

## Chores
- Reduced log/storage overhead by moving logs to a dedicated SQLite DB, adding timestamps to feedback logs, pruning old data, and tightening retention/row limits (see the sketch after this list). (#13645, #13688, #13734, #13763, #13772, #13781)
- Improved Windows distribution automation by publishing CLI releases to winget. (#12943)
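
A minimal sketch of retention pruning of the kind described above, assuming a hypothetical `logs(id, logger, ts)` table (the actual schema and per-log row cap live in the codex crates):

```python
import sqlite3
import time

RETENTION_DAYS = 10        # matches the 10-day retention from #13781
MAX_ROWS_PER_LOG = 10_000  # illustrative cap; the real per-log limit may differ

def prune_logs(conn: sqlite3.Connection) -> None:
    """Delete rows past the retention window, then enforce a per-logger row cap."""
    cutoff = int(time.time()) - RETENTION_DAYS * 86_400
    conn.execute("DELETE FROM logs WHERE ts < ?", (cutoff,))
    # Keep only the newest MAX_ROWS_PER_LOG rows for each logger.
    conn.execute(
        """
        DELETE FROM logs WHERE id IN (
            SELECT id FROM (
                SELECT id, ROW_NUMBER() OVER (
                    PARTITION BY logger ORDER BY ts DESC
                ) AS rn FROM logs
            ) WHERE rn > ?
        )
        """,
        (MAX_ROWS_PER_LOG,),
    )
    conn.commit()
```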

## Changelog

Full Changelog: https://github.com/openai/codex/compare/rust-v0.112.0...rust-v0.113.0

- #13626 feat(otel): safe tracing @owenlin0
- #13560 Refine realtime startup context formatting @aibrahim-oai
- #13615 Replay thread rollback from rollout history @aibrahim-oai
- #13642 fix(tui): clean up pending steer preview wrapping @charley-oai
- #13645 Add timestamped SQLite /feedback logs without schema changes @charley-oai
- #13654 tui: sort resume picker by last updated time @charley-oai
- #13540 support plugin/list. @xl-openai
- #13677 chore: remove unused legacy macOS permission types @celia-oai
- #13683 fix: accept two macOS automation input shapes for approval payload compatibility @celia-oai
- #13687 refactor: remove proxy admin endpoint @viyatb-oai
- #13669 copy current exe to CODEX_HOME/.sandbox-bin for apply_patch @iceweasel-oai
- #11874 fix(tui) remove config check for trusted setting @dylan-hurd-oai
- #13685 check app auth in plugin/install @sayan-oai
- #13697 change sound @aibrahim-oai
- #13607 Enabling CWD Saving for Image-Gen @won-openai
- #13621 [elicitations] Switch to use MCP style elicitation payload for mcp tool approvals. @mzeng-openai
- #13619 feat: status line with real data @jif-oai
- #13734 feat: prune old memories in DB @jif-oai
- #13688 Add timestamps to feedback log lines @etraut-openai
- #13742 fix: windows normalization @jif-oai
- #13514 [rmcp-client] Recover from streamable HTTP 404 sessions @caseychow-oai
- #13750 feat: drop sqlite db feature flag @jif-oai
- #13753 feat: drop discrepancy metrics @jif-oai
- #13763 feat: limit number of rows per log @jif-oai
- #13703 Clarify sandbox permission override helper semantics @charley-oai
- #13770 fix(app-server): fix turn_start_shell_zsh_fork_executes_command_v2 flake @owenlin0
- #13630 feat(otel, core): record turn TTFT and TTFM metrics in codex-core @owenlin0
- #13674 app-server: Emit `thread/name/updated` event globally @euroelessar
- #13772 Move sqlite logs to a dedicated database @charley-oai
- #13620 chore: improve DB flushing @jif-oai
- #13711 feat: structured plugin parsing @sayan-oai
- #13781 Reduce SQLite log retention to 10 days @charley-oai
- #13780 fix: move unit tests in codex-rs/core/src/config/mod.rs into their own file @bolinfest
- #13783 fix: move unit tests in codex-rs/core/src/codex.rs into their own file @bolinfest
- #13787 fix bazel build @bolinfest
- #13789 fix: reject global wildcard network proxy domains @viyatb-oai
- #12943 Codex/winget auto update @iceweasel-oai
- #13800 chore(otel): reorganize codex-otel crate @owenlin0
- #13797 feat: add auth login diagnostics @joshka-oai
- #13695 utils/pty: add streaming spawn and terminal sizing primitives @euroelessar
- #13803 Clarify js_repl binding reuse guidance @fjord-oai
- #13810 docs: remove auth login logging plan @joshka-oai
- #13434 config: add initial support for the new permission profile config language in config.toml @bolinfest
- #13796 Add realtime startup context override @aibrahim-oai
- #13814 fix: include libcap-dev dependency when creating a devcontainer for building Codex @bolinfest
- #13808 chore(otel): rename OtelManager to SessionTelemetry @owenlin0
- #13712 feat: Add curated plugin marketplace + Metadata Cleanup. @xl-openai
- #13791 fix(core): skip exec approval for permissionless skill scripts @celia-oai
- #13675 Allow full web search tool config @rm-openai
- #13640 app-server: Add streaming and tty/pty capabilities to `command/exec` @euroelessar
- #13819 feat(app-server-protocol): address naming conflicts in json schema exporter @owenlin0
- #13804 fix: avoid invoking git before project trust is established @viyatb-oai
- #12752 fix: support managed network allowlist controls @viyatb-oai
- #13439 sandboxing: plumb split sandbox policies through runtime @bolinfest
- #13440 protocol: derive effective file access from filesystem policies @bolinfest
- #13816 fix(core): respect reject policy by approval source for skill scripts @celia-oai
- #13833 app-server: require absolute cwd for windowsSandbox/setupStart @iceweasel-oai
- #13670 Add Fast mode status-line indicator @etraut-openai
- #13445 safety: honor filesystem policy carveouts in apply_patch @bolinfest
- #13771 feat: simplify DB further @jif-oai
- #13692 Add guardian approval MVP @charley-oai
- #13851 tmp: drop artifact skills @jif-oai
- #13910 fix(core) rm guardian snapshot test @dylan-hurd-oai
- #13911 fix(ci) fix guardian ci @dylan-hurd-oai
- #13896 Fix TUI context window display before first TokenCount @etraut-openai
- #13448 seatbelt: honor split filesystem sandbox policies @bolinfest
- #13921 chore: use @plugin instead of $plugin for plaintext mentions @sayan-oai
- #13807 [elicitations] Support always allow option for mcp tool calls. @mzeng-openai
- #13449 linux-sandbox: plumb split sandbox policies through helper @bolinfest
- #13451 sandboxing: preserve denied paths when widening permissions @bolinfest
- #13452 protocol: keep root carveouts sandboxed @bolinfest
- #13874 Stabilize abort task follow-up handling @aibrahim-oai
- #13453 linux-sandbox: honor split filesystem policies in bwrap @bolinfest
- #13989 Fix inverted Windows PTY `TerminateProcess` handling @etraut-openai
- #13912 fix(ci): restore guardian coverage and bazel unit tests @charley-oai
- #13877 Stabilize shell serialization tests @aibrahim-oai
- #13839 [app-server] Support hot-reload user config when batch writing config. @mzeng-openai
- #14005 Add in-process app server and wire up exec to use it @etraut-openai
- #13929 app-server: include experimental skill metadata in exec approval requests @celia-oai
- #14014 fix(core) patch otel test @dylan-hurd-oai
- #13841 tui: clarify pending steer follow-ups @charley-oai
- #13092 Add request permissions tool @mousseau-oai
- #14027 fix(bazel) add missing app-server-client BUILD.bazel @dylan-hurd-oai
- #14004 feat(tui) render request_permissions calls @dylan-hurd-oai
- #14052 Stabilize app list update ordering test @aibrahim-oai
- #13897 guardian initial feedback / tweaks @charley-oai
- #13884 Reduce app-server test timeout pressure @aibrahim-oai
- #13872 Stabilize zsh fork app-server tests @aibrahim-oai
- #13881 Stabilize RMCP pid file cleanup test @aibrahim-oai
- #13883 Stabilize PTY Python REPL test @aibrahim-oai
- #14058 Stabilize plan item app-server tests @aibrahim-oai
- #13943 Order websocket initialize after handshake @aibrahim-oai
- #13885 Stabilize thread resume replay tests @aibrahim-oai
- #13878 Serialize shell snapshot stdin test @aibrahim-oai
- #13876 Stabilize realtime startup context tests @aibrahim-oai
- #14050 fix(plugin): Also load curated plugins for TUI. @xl-openai
- #14049 fix: properly handle 401 error in cloud requirement fetch. @xl-openai
- #14101 Stabilize shell approval MCP test @aibrahim-oai
- #14102 Stabilize interrupted task approval cleanup @aibrahim-oai
- #14103 Stabilize guardian approval coverage @aibrahim-oai
- #14060 Stabilize resumed rollout messages @aibrahim-oai
- #14114 fix(ci) Faster shell_command::unicode_output test @dylan-hurd-oai
- #14111 chore: plugin/uninstall endpoint @sayan-oai
- #14117 feat(otel): Centralize OTEL metric names and shared tag builders @owenlin0
- #13880 Stabilize RMCP streamable HTTP readiness tests @aibrahim-oai
- #14123 pass on save info to model + ui tweaks @won-openai
- #13886 Stabilize protocol schema fixture generation @aibrahim-oai
2026-03-09 21:17:46 -07:00
1914 changed files with 117278 additions and 254361 deletions

.bazelrc

@@ -20,6 +20,9 @@ common:windows --host_platform=//:local_windows
common --@rules_cc//cc/toolchains/args/archiver_flags:use_libtool_on_macos=False
common --@llvm//config:experimental_stub_libgcc_s
# We need to use the sh toolchain on windows so we don't send host bash paths to the linux executor.
common:windows --@rules_rust//rust/settings:experimental_use_sh_toolchain_for_bootstrap_process_wrapper
# TODO(zbarsky): rules_rust doesn't implement this flag properly with remote exec...
# common --@rules_rust//rust/settings:pipelined_compilation
@@ -53,103 +56,3 @@ common --jobs=30
common:remote --extra_execution_platforms=//:rbe
common:remote --remote_executor=grpcs://remote.buildbuddy.io
common:remote --jobs=800
# TODO(team): Evaluate if this actually helps, zbarsky is not sure, everything seems bottlenecked on `core` either way.
# Enable pipelined compilation since we are not bound by local CPU count.
#common:remote --@rules_rust//rust/settings:pipelined_compilation
# GitHub Actions CI configs.
common:ci --remote_download_minimal
common:ci --keep_going
common:ci --verbose_failures
common:ci --build_metadata=REPO_URL=https://github.com/openai/codex.git
common:ci --build_metadata=ROLE=CI
common:ci --build_metadata=VISIBILITY=PUBLIC
# Disable disk cache in CI since we have a remote one and aren't using persistent workers.
common:ci --disk_cache=
# Shared config for the main Bazel CI workflow.
common:ci-bazel --config=ci
common:ci-bazel --build_metadata=TAG_workflow=bazel
# Shared config for Bazel-backed Rust linting.
build:clippy --aspects=@rules_rust//rust:defs.bzl%rust_clippy_aspect
build:clippy --output_groups=+clippy_checks
build:clippy --@rules_rust//rust/settings:clippy.toml=//codex-rs:clippy.toml
# Keep this deny-list in sync with `codex-rs/Cargo.toml` `[workspace.lints.clippy]`.
# Cargo applies those lint levels to member crates that opt into `[lints] workspace = true`
# in their own `Cargo.toml`, but `rules_rust` Bazel clippy does not read Cargo lint levels.
# `clippy.toml` can configure lint behavior, but it cannot set allow/warn/deny/forbid levels.
build:clippy --@rules_rust//rust/settings:clippy_flag=-Dwarnings
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::expect_used
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::identity_op
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_clamp
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_filter
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_find
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_flatten
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_map
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_memcpy
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_non_exhaustive
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_ok_or
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_range_contains
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_retain
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_strip
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_try_fold
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::manual_unwrap_or
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_borrow
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_borrowed_reference
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_collect
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_late_init
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_option_as_deref
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_question_mark
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::needless_update
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::redundant_clone
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::redundant_closure
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::redundant_closure_for_method_calls
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::redundant_static_lifetimes
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::trivially_copy_pass_by_ref
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::uninlined_format_args
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::unnecessary_filter_map
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::unnecessary_lazy_evaluations
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::unnecessary_sort_by
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::unnecessary_to_owned
build:clippy --@rules_rust//rust/settings:clippy_flag=--deny=clippy::unwrap_used
# Shared config for Bazel-backed argument-comment-lint.
build:argument-comment-lint --aspects=//tools/argument-comment-lint:lint_aspect.bzl%rust_argument_comment_lint_aspect
build:argument-comment-lint --output_groups=argument_comment_lint_checks
build:argument-comment-lint --@rules_rust//rust/toolchain/channel=nightly
# Rearrange caches on Windows so they're on the same volume as the checkout.
common:ci-windows --config=ci-bazel
common:ci-windows --build_metadata=TAG_os=windows
common:ci-windows --repo_contents_cache=D:/a/.cache/bazel-repo-contents-cache
common:ci-windows --repository_cache=D:/a/.cache/bazel-repo-cache
# We prefer to run the build actions entirely remotely so we can dial up the concurrency.
# We have platform-specific tests, so we want to execute the tests on all platforms using the strongest sandboxing available on each platform.
# On linux, we can do a full remote build/test, by targeting the right (x86/arm) runners, so we have coverage of both.
# Linux crossbuilds don't work until we untangle the libc constraint mess.
common:ci-linux --config=ci-bazel
common:ci-linux --build_metadata=TAG_os=linux
common:ci-linux --config=remote
common:ci-linux --strategy=remote
common:ci-linux --platforms=//:rbe
# On mac, we can run all the build actions remotely but test actions locally.
common:ci-macos --config=ci-bazel
common:ci-macos --build_metadata=TAG_os=macos
common:ci-macos --config=remote
common:ci-macos --strategy=remote
common:ci-macos --strategy=TestRunner=darwin-sandbox,local
# Linux-only V8 CI config.
common:ci-v8 --config=ci
common:ci-v8 --build_metadata=TAG_workflow=v8
common:ci-v8 --build_metadata=TAG_os=linux
common:ci-v8 --config=remote
common:ci-v8 --strategy=remote
# Optional per-user local overrides.
try-import %workspace%/user.bazelrc


@@ -3,4 +3,4 @@
skip = .git*,vendor,*-lock.yaml,*.lock,.codespellrc,*test.ts,*.jsonl,frame*.txt,*.snap,*.snap.new,*meriyah.umd.min.js
check-hidden = true
ignore-regex = ^\s*"image/\S+": ".*|\b(afterAll)\b
ignore-words-list = ratatui,ser,iTerm,iterm2,iterm,te,TE,PASE,SEH
ignore-words-list = ratatui,ser,iTerm,iterm2,iterm,te,TE


@@ -1,6 +1,6 @@
---
name: babysit-pr
description: Babysit a GitHub pull request after creation by continuously polling review comments, CI checks/workflow runs, and mergeability state until the PR is merged/closed or user help is required. Diagnose failures, retry likely flaky failures up to 3 times, auto-fix/push branch-related issues when appropriate, and keep watching open PRs so fresh review feedback is surfaced promptly. Use when the user asks Codex to monitor a PR, watch CI, handle review comments, or keep an eye on failures and feedback on an open PR.
description: Babysit a GitHub pull request after creation by continuously polling CI checks/workflow runs, new review comments, and mergeability state until the PR is ready to merge (or merged/closed). Diagnose failures, retry likely flaky failures up to 3 times, auto-fix/push branch-related issues when appropriate, and stop only when user help is required (for example CI infrastructure issues, exhausted flaky retries, or ambiguous/blocking situations). Use when the user asks Codex to monitor a PR, watch CI, handle review comments, or keep an eye on failures and feedback on an open PR.
---
# PR Babysitter
@@ -9,8 +9,8 @@ description: Babysit a GitHub pull request after creation by continuously pollin
Babysit a PR persistently until one of these terminal outcomes occurs:
- The PR is merged or closed.
- CI is successful, there are no unaddressed review comments surfaced by the watcher, required review approval is not blocking merge, and there are no potential merge conflicts (PR is mergeable / not reporting conflict risk).
- A situation requires user help (for example CI infrastructure issues, repeated flaky failures after retry budget is exhausted, permission problems, or ambiguity that cannot be resolved safely).
- Optional handoff milestone: the PR is currently green + mergeable + review-clean. Treat this as a progress state, not a watcher stop, so late-arriving review comments are still surfaced promptly while the PR remains open.
Do not stop merely because a single snapshot returns `idle` while checks are still pending.
@@ -24,20 +24,19 @@ Accept any of the following:
## Core Workflow
1. When the user asks to "monitor"/"watch"/"babysit" a PR, start with the watcher's continuous mode (`--watch`) unless you are intentionally doing a one-shot diagnostic snapshot.
2. Run the watcher script to snapshot PR/review/CI state (or consume each streamed snapshot from `--watch`).
2. Run the watcher script to snapshot PR/CI/review state (or consume each streamed snapshot from `--watch`).
3. Inspect the `actions` list in the JSON response.
4. If `diagnose_ci_failure` is present, inspect failed run logs and classify the failure.
5. If the failure is likely caused by the current branch, patch code locally, commit, and push.
6. If `process_review_comment` is present, inspect surfaced review items and decide whether to address them.
7. If a review item is actionable and correct, patch code locally, commit, push, and then mark the associated review thread/comment as resolved once the fix is on GitHub.
8. If a review item from another author is non-actionable, already addressed, or not valid, post one reply on the comment/thread explaining that decision (for example answering the question or explaining why no change is needed). If the watcher later surfaces your own reply, treat that self-authored item as already handled and do not reply again.
9. If the failure is likely flaky/unrelated and `retry_failed_checks` is present, rerun failed jobs with `--retry-failed-now`.
10. If both actionable review feedback and `retry_failed_checks` are present, prioritize review feedback first; a new commit will retrigger CI, so avoid rerunning flaky checks on the old SHA unless you intentionally defer the review change.
11. On every loop, look for newly surfaced review feedback before acting on CI failures or mergeability state, then verify mergeability / merge-conflict status (for example via `gh pr view`) alongside CI.
12. After any push or rerun action, immediately return to step 1 and continue polling on the updated SHA/state.
13. If you had been using `--watch` before pausing to patch/commit/push, relaunch `--watch` yourself in the same turn immediately after the push (do not wait for the user to re-invoke the skill).
14. Repeat polling until `stop_pr_closed` appears or a user-help-required blocker is reached. A green + review-clean + mergeable PR is a progress milestone, not a reason to stop the watcher while the PR is still open.
15. Maintain terminal/session ownership: while babysitting is active, keep consuming watcher output in the same turn; do not leave a detached `--watch` process running and then end the turn as if monitoring were complete.
7. If a review item is actionable and correct, patch code locally, commit, and push.
8. If the failure is likely flaky/unrelated and `retry_failed_checks` is present, rerun failed jobs with `--retry-failed-now`.
9. If both actionable review feedback and `retry_failed_checks` are present, prioritize review feedback first; a new commit will retrigger CI, so avoid rerunning flaky checks on the old SHA unless you intentionally defer the review change.
10. On every loop, verify mergeability / merge-conflict status (for example via `gh pr view`) in addition to CI and review state.
11. After any push or rerun action, immediately return to step 1 and continue polling on the updated SHA/state.
12. If you had been using `--watch` before pausing to patch/commit/push, relaunch `--watch` yourself in the same turn immediately after the push (do not wait for the user to re-invoke the skill).
13. Repeat polling until the PR is green + review-clean + mergeable, `stop_pr_closed` appears, or a user-help-required blocker is reached.
14. Maintain terminal/session ownership: while babysitting is active, keep consuming watcher output in the same turn; do not leave a detached `--watch` process running and then end the turn as if monitoring were complete.
## Commands
@@ -95,11 +94,10 @@ When you agree with a comment and it is actionable:
1. Patch code locally.
2. Commit with `codex: address PR review feedback (#<n>)`.
3. Push to the PR head branch.
4. After the push succeeds, mark the associated GitHub review thread/comment as resolved.
5. Resume watching on the new SHA immediately (do not stop after reporting the push).
6. If monitoring was running in `--watch` mode, restart `--watch` immediately after the push in the same turn; do not wait for the user to ask again.
4. Resume watching on the new SHA immediately (do not stop after reporting the push).
5. If monitoring was running in `--watch` mode, restart `--watch` immediately after the push in the same turn; do not wait for the user to ask again.
If you disagree or the comment is non-actionable/already addressed, reply once directly on the GitHub comment/thread so the reviewer gets an explicit answer, then continue the watcher loop. If the watcher later surfaces your own reply because the authenticated operator is treated as a trusted review author, treat that self-authored item as already handled and do not reply again.
If you disagree or the comment is non-actionable/already addressed, record it as handled by continuing the watcher loop (the script de-duplicates surfaced items via state after surfacing them).
If a code review comment/thread is already marked as resolved in GitHub, treat it as non-actionable and safely ignore it unless new unresolved follow-up feedback appears.
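
The state-based de-duplication mentioned above is stateful across snapshots. A minimal sketch, assuming a hypothetical JSON state layout (the real `gh_pr_watch.py` state file and de-duplication keys may differ):

```python
import json
from pathlib import Path

def surface_new_items(state_path: Path, items: list[dict]) -> list[dict]:
    """Return only review items not seen before, then persist their IDs."""
    state = json.loads(state_path.read_text()) if state_path.exists() else {}
    seen = set(state.get("seen_review_ids", []))
    fresh = [item for item in items if item["id"] not in seen]
    seen.update(item["id"] for item in fresh)
    state["seen_review_ids"] = sorted(seen)
    state_path.write_text(json.dumps(state))
    return fresh
```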
## Git Safety Rules
@@ -126,14 +124,13 @@ Use this loop in a live Codex session:
3. First check whether the PR is now merged or otherwise closed; if so, report that terminal state and stop polling immediately.
4. Check CI summary, new review items, and mergeability/conflict status.
5. Diagnose CI failures and classify branch-related vs flaky/unrelated.
6. For each surfaced review item from another author, either reply once with an explanation if it is non-actionable or patch/commit/push and then resolve it if it is actionable. If a later snapshot surfaces your own reply, treat it as informational and continue without responding again.
7. Process actionable review comments before flaky reruns when both are present; if a review fix requires a commit, push it and skip rerunning failed checks on the old SHA.
8. Retry failed checks only when `retry_failed_checks` is present and you are not about to replace the current SHA with a review/CI fix commit.
9. If you pushed a commit, resolved a review thread, replied to a review comment, or triggered a rerun, report the action briefly and continue polling (do not stop).
10. After a review-fix push, proactively restart continuous monitoring (`--watch`) in the same turn unless a strict stop condition has already been reached.
11. If everything is passing, mergeable, not blocked on required review approval, and there are no unaddressed review items, report that the PR is currently ready to merge but keep the watcher running so new review comments are surfaced quickly while the PR remains open.
12. If blocked on a user-help-required issue (infra outage, exhausted flaky retries, unclear reviewer request, permissions), report the blocker and stop.
13. Otherwise sleep according to the polling cadence below and repeat.
6. Process actionable review comments before flaky reruns when both are present; if a review fix requires a commit, push it and skip rerunning failed checks on the old SHA.
7. Retry failed checks only when `retry_failed_checks` is present and you are not about to replace the current SHA with a review/CI fix commit.
8. If you pushed a commit or triggered a rerun, report the action briefly and continue polling (do not stop).
9. After a review-fix push, proactively restart continuous monitoring (`--watch`) in the same turn unless a strict stop condition has already been reached.
10. If everything is passing, mergeable, not blocked on required review approval, and there are no unaddressed review items, report success and stop.
11. If blocked on a user-help-required issue (infra outage, exhausted flaky retries, unclear reviewer request, permissions), report the blocker and stop.
12. Otherwise sleep according to the polling cadence below and repeat.
When the user explicitly asks to monitor/watch/babysit a PR, prefer `--watch` so polling continues autonomously in one command. Use repeated `--once` snapshots only for debugging, local testing, or when the user explicitly asks for a one-shot check.
Do not stop to ask the user whether to continue polling; continue autonomously until a strict stop condition is met or the user explicitly interrupts.
@@ -141,18 +138,19 @@ Do not hand control back to the user after a review-fix push just because a new
If a `--watch` process is still running and no strict stop condition has been reached, the babysitting task is still in progress; keep streaming/consuming watcher output instead of ending the turn.
## Polling Cadence
Keep review polling aggressive and continue monitoring even after CI turns green:
Use adaptive polling and continue monitoring even after CI turns green:
- While CI is not green (pending/running/queued or failing): poll every 1 minute.
- After CI turns green: keep polling at the base cadence while the PR remains open so newly posted review comments are surfaced promptly instead of waiting on a long green-state backoff.
- Reset the cadence immediately whenever anything changes (new commit/SHA, check status changes, new review comments, mergeability changes, review decision changes).
- If CI stops being green again (new commit, rerun, or regression): stay on the base polling cadence.
- After CI turns green: start at every 1 minute, then back off exponentially when there is no change (for example 1m, 2m, 4m, 8m, 16m, 32m), capping at every 1 hour (see the sketch after this list).
- Reset the green-state polling interval back to 1 minute whenever anything changes (new commit/SHA, check status changes, new review comments, mergeability changes, review decision changes).
- If CI stops being green again (new commit, rerun, or regression): return to 1-minute polling.
- If any poll shows the PR is merged or otherwise closed: stop polling immediately and report the terminal state.
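
A minimal sketch of the green-state cadence above, consistent with the `GREEN_STATE_MAX_POLL_SECONDS` cap and the doubling loop shown later in this diff (the function name is illustrative, not part of the script):

```python
BASE_POLL_SECONDS = 60      # 1-minute base cadence
MAX_POLL_SECONDS = 60 * 60  # 1-hour cap, as in GREEN_STATE_MAX_POLL_SECONDS

def next_poll_seconds(current: int, ci_green: bool, changed: bool) -> int:
    # Anything not green, or any observed change, resets to the base cadence;
    # an unchanged green state doubles the interval up to the cap.
    if not ci_green or changed:
        return BASE_POLL_SECONDS
    return min(current * 2, MAX_POLL_SECONDS)
```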
## Stop Conditions (Strict)
Stop only when one of the following is true:
- PR merged or closed (stop as soon as a poll/snapshot confirms this).
- PR is ready to merge: CI succeeded, no surfaced unaddressed review comments, not blocked on required review approval, and no merge conflict risk.
- User intervention is required and Codex cannot safely proceed alone.
Keep polling when:
@@ -161,14 +159,14 @@ Keep polling when:
- CI is still running/queued.
- Review state is quiet but CI is not terminal.
- CI is green but mergeability is unknown/pending.
- CI is green and mergeable, but the PR is still open and you are waiting for possible new review comments or merge-conflict changes.
- The PR is green but blocked on review approval (`REVIEW_REQUIRED` / similar); continue polling at the base cadence and surface any new review comments without asking for confirmation to keep watching.
- CI is green and mergeable, but the PR is still open and you are waiting for possible new review comments or merge-conflict changes per the green-state cadence.
- The PR is green but blocked on review approval (`REVIEW_REQUIRED` / similar); continue polling on the green-state cadence and surface any new review comments without asking for confirmation to keep watching.
## Output Expectations
Provide concise progress updates while monitoring and a final summary that includes:
- During long unchanged monitoring periods, avoid emitting a full update on every poll; summarize only status changes plus occasional heartbeat updates.
- Treat push confirmations, intermediate CI snapshots, ready-to-merge snapshots, and review-action updates as progress updates only; do not emit the final summary or end the babysitting session unless a strict stop condition is met.
- Treat push confirmations, intermediate CI snapshots, and review-action updates as progress updates only; do not emit the final summary or end the babysitting session unless a strict stop condition is met.
- A user request to "monitor" is not satisfied by a couple of sample polls; remain in the loop until a strict stop condition or an explicit user interruption.
- A review-fix commit + push is not a completion event; immediately resume live monitoring (`--watch`) in the same turn and continue reporting progress updates.
- When CI first transitions to all green for the current SHA, emit a one-time celebratory progress update (do not repeat it on every green poll). Preferred style: `🚀 CI is all green! 33/33 passed. Still on watch for review approval.`


@@ -1,4 +1,4 @@
interface:
display_name: "PR Babysitter"
short_description: "Watch PR review comments, CI, and merge conflicts"
default_prompt: "Babysit the current PR: monitor reviewer comments, CI, and merge-conflict status (prefer the watchers --watch mode for live monitoring); surface new review feedback before acting on CI or mergeability work, fix valid issues, push updates, and rerun flaky failures up to 3 times. Keep exactly one watcher session active for the PR (do not leave duplicate --watch terminals running). If you pause monitoring to patch review/CI feedback, restart --watch yourself immediately after the push in the same turn. If a watcher is still running and no strict stop condition has been reached, the task is still in progress: keep consuming watcher output and sending progress updates instead of ending the turn. Do not treat a green + mergeable PR as a terminal stop while it is still open; continue polling autonomously after any push/rerun so newly posted review comments are surfaced until a strict terminal stop condition is reached or the user interrupts."
short_description: "Watch PR CI, reviews, and merge conflicts"
default_prompt: "Babysit the current PR: monitor CI, reviewer comments, and merge-conflict status (prefer the watchers --watch mode for live monitoring); fix valid issues, push updates, and rerun flaky failures up to 3 times. Keep exactly one watcher session active for the PR (do not leave duplicate --watch terminals running). If you pause monitoring to patch review/CI feedback, restart --watch yourself immediately after the push in the same turn. If a watcher is still running and no strict stop condition has been reached, the task is still in progress: keep consuming watcher output and sending progress updates instead of ending the turn. Continue polling autonomously after any push/rerun until a strict terminal stop condition is reached or the user interrupts."


@@ -45,6 +45,7 @@ MERGE_CONFLICT_OR_BLOCKING_STATES = {
"DRAFT",
"UNKNOWN",
}
GREEN_STATE_MAX_POLL_SECONDS = 60 * 60
class GhCommandError(RuntimeError):
@@ -577,7 +578,7 @@ def recommend_actions(pr, checks_summary, failed_runs, new_review_items, retries
return unique_actions(actions)
if is_pr_ready_to_merge(pr, checks_summary, new_review_items):
actions.append("ready_to_merge")
actions.append("stop_ready_to_merge")
return unique_actions(actions)
if new_review_items:
@@ -605,6 +606,12 @@ def collect_snapshot(args):
if not state.get("started_at"):
state["started_at"] = int(time.time())
# `gh pr checks -R <repo>` requires an explicit PR/branch/url argument.
# After resolving `--pr auto`, reuse the concrete PR number.
checks = get_pr_checks(str(pr["number"]), repo=pr["repo"])
checks_summary = summarize_checks(checks)
workflow_runs = get_workflow_runs_for_sha(pr["repo"], pr["head_sha"])
failed_runs = failed_runs_from_workflow_runs(workflow_runs, pr["head_sha"])
authenticated_login = get_authenticated_login()
new_review_items = fetch_new_review_items(
pr,
@@ -612,15 +619,6 @@ def collect_snapshot(args):
fresh_state=fresh_state,
authenticated_login=authenticated_login,
)
# Surface review feedback before drilling into CI and mergeability details.
# That keeps the babysitter responsive to new comments even when other
# actions are also available.
# `gh pr checks -R <repo>` requires an explicit PR/branch/url argument.
# After resolving `--pr auto`, reuse the concrete PR number.
checks = get_pr_checks(str(pr["number"]), repo=pr["repo"])
checks_summary = summarize_checks(checks)
workflow_runs = get_workflow_runs_for_sha(pr["repo"], pr["head_sha"])
failed_runs = failed_runs_from_workflow_runs(workflow_runs, pr["head_sha"])
retries_used = current_retry_count(state, pr["head_sha"])
actions = recommend_actions(
@@ -763,6 +761,7 @@ def run_watch(args):
if (
"stop_pr_closed" in actions
or "stop_exhausted_retries" in actions
or "stop_ready_to_merge" in actions
):
print_event("stop", {"actions": snapshot.get("actions"), "pr": snapshot.get("pr")})
return 0
@@ -770,13 +769,13 @@ def run_watch(args):
current_change_key = snapshot_change_key(snapshot)
changed = current_change_key != last_change_key
green = is_ci_green(snapshot)
pr = snapshot.get("pr") or {}
pr_open = not bool(pr.get("closed")) and not bool(pr.get("merged"))
if not green or pr_open:
if not green:
poll_seconds = args.poll_seconds
elif changed or last_change_key is None:
poll_seconds = args.poll_seconds
else:
poll_seconds = min(poll_seconds * 2, GREEN_STATE_MAX_POLL_SECONDS)
last_change_key = current_change_key
time.sleep(poll_seconds)


@@ -1,155 +0,0 @@
import argparse
import importlib.util
from pathlib import Path
import pytest
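# Load gh_pr_watch.py as a module directly from the file next to this test,
# so the script can be exercised without being installed as a package.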
MODULE_PATH = Path(__file__).with_name("gh_pr_watch.py")
MODULE_SPEC = importlib.util.spec_from_file_location("gh_pr_watch", MODULE_PATH)
gh_pr_watch = importlib.util.module_from_spec(MODULE_SPEC)
assert MODULE_SPEC.loader is not None
MODULE_SPEC.loader.exec_module(gh_pr_watch)
def sample_pr():
return {
"number": 123,
"url": "https://github.com/openai/codex/pull/123",
"repo": "openai/codex",
"head_sha": "abc123",
"head_branch": "feature",
"state": "OPEN",
"merged": False,
"closed": False,
"mergeable": "MERGEABLE",
"merge_state_status": "CLEAN",
"review_decision": "",
}
def sample_checks(**overrides):
checks = {
"pending_count": 0,
"failed_count": 0,
"passed_count": 12,
"all_terminal": True,
}
checks.update(overrides)
return checks
def test_collect_snapshot_fetches_review_items_before_ci(monkeypatch, tmp_path):
call_order = []
pr = sample_pr()
monkeypatch.setattr(gh_pr_watch, "resolve_pr", lambda *args, **kwargs: pr)
monkeypatch.setattr(gh_pr_watch, "load_state", lambda path: ({}, True))
monkeypatch.setattr(
gh_pr_watch,
"get_authenticated_login",
lambda: call_order.append("auth") or "octocat",
)
monkeypatch.setattr(
gh_pr_watch,
"fetch_new_review_items",
lambda *args, **kwargs: call_order.append("review") or [],
)
monkeypatch.setattr(
gh_pr_watch,
"get_pr_checks",
lambda *args, **kwargs: call_order.append("checks") or [],
)
monkeypatch.setattr(
gh_pr_watch,
"summarize_checks",
lambda checks: call_order.append("summarize") or sample_checks(),
)
monkeypatch.setattr(
gh_pr_watch,
"get_workflow_runs_for_sha",
lambda *args, **kwargs: call_order.append("workflow") or [],
)
monkeypatch.setattr(
gh_pr_watch,
"failed_runs_from_workflow_runs",
lambda *args, **kwargs: call_order.append("failed_runs") or [],
)
monkeypatch.setattr(
gh_pr_watch,
"recommend_actions",
lambda *args, **kwargs: call_order.append("recommend") or ["idle"],
)
monkeypatch.setattr(gh_pr_watch, "save_state", lambda *args, **kwargs: None)
args = argparse.Namespace(
pr="123",
repo=None,
state_file=str(tmp_path / "watcher-state.json"),
max_flaky_retries=3,
)
gh_pr_watch.collect_snapshot(args)
assert call_order.index("review") < call_order.index("checks")
assert call_order.index("review") < call_order.index("workflow")
def test_recommend_actions_prioritizes_review_comments():
actions = gh_pr_watch.recommend_actions(
sample_pr(),
sample_checks(failed_count=1),
[{"run_id": 99}],
[{"kind": "review_comment", "id": "1"}],
0,
3,
)
assert actions == [
"process_review_comment",
"diagnose_ci_failure",
"retry_failed_checks",
]
def test_run_watch_keeps_polling_open_ready_to_merge_pr(monkeypatch):
sleeps = []
events = []
snapshot = {
"pr": sample_pr(),
"checks": sample_checks(),
"failed_runs": [],
"new_review_items": [],
"actions": ["ready_to_merge"],
"retry_state": {
"current_sha_retries_used": 0,
"max_flaky_retries": 3,
},
}
monkeypatch.setattr(
gh_pr_watch,
"collect_snapshot",
lambda args: (snapshot, Path("/tmp/codex-babysit-pr-state.json")),
)
monkeypatch.setattr(
gh_pr_watch,
"print_event",
lambda event, payload: events.append((event, payload)),
)
class StopWatch(Exception):
pass
def fake_sleep(seconds):
sleeps.append(seconds)
if len(sleeps) >= 2:
raise StopWatch
monkeypatch.setattr(gh_pr_watch.time, "sleep", fake_sleep)
with pytest.raises(StopWatch):
gh_pr_watch.run_watch(argparse.Namespace(poll_seconds=30))
assert sleeps == [30, 30]
assert [event for event, _ in events] == ["snapshot", "snapshot"]


@@ -1,16 +0,0 @@
---
name: remote-tests
description: How to run tests using remote executor.
---
Some codex integration tests support running against a remote executor.
This means that when the CODEX_TEST_REMOTE_ENV environment variable is set, they will attempt to start an executor process in the docker container CODEX_TEST_REMOTE_ENV points to and use it in tests.
The docker container is built and initialized via ./scripts/test-remote-env.sh
Currently, running remote tests is only supported on Linux, so you need to use a devbox to run them.
You can list devboxes via `applied_devbox ls`, pick the one with `codex` in the name.
Connect to devbox via `ssh <devbox_name>`.
Reuse the same checkout of codex in `~/code/codex`. Reset files if needed. Multiple checkouts take longer to build and take up more space.
Check whether the SHA and modified files are in sync between remote and local.


@@ -17,7 +17,6 @@ runs:
- name: Cosign Linux artifacts
shell: bash
env:
ARTIFACTS_DIR: ${{ inputs.artifacts-dir }}
COSIGN_EXPERIMENTAL: "1"
COSIGN_YES: "true"
COSIGN_OIDC_CLIENT_ID: "sigstore"
@@ -25,7 +24,7 @@ runs:
run: |
set -euo pipefail
dest="$ARTIFACTS_DIR"
dest="${{ inputs.artifacts-dir }}"
if [[ ! -d "$dest" ]]; then
echo "Destination $dest does not exist"
exit 1


@@ -117,8 +117,6 @@ runs:
- name: Sign macOS binaries
if: ${{ inputs.sign-binaries == 'true' }}
shell: bash
env:
TARGET: ${{ inputs.target }}
run: |
set -euo pipefail
@@ -132,18 +130,15 @@ runs:
keychain_args+=(--keychain "${APPLE_CODESIGN_KEYCHAIN}")
fi
entitlements_path="$GITHUB_ACTION_PATH/codex.entitlements.plist"
for binary in codex codex-responses-api-proxy; do
path="codex-rs/target/${TARGET}/release/${binary}"
codesign --force --options runtime --timestamp --entitlements "$entitlements_path" --sign "$APPLE_CODESIGN_IDENTITY" "${keychain_args[@]}" "$path"
path="codex-rs/target/${{ inputs.target }}/release/${binary}"
codesign --force --options runtime --timestamp --sign "$APPLE_CODESIGN_IDENTITY" "${keychain_args[@]}" "$path"
done
- name: Notarize macOS binaries
if: ${{ inputs.sign-binaries == 'true' }}
shell: bash
env:
TARGET: ${{ inputs.target }}
APPLE_NOTARIZATION_KEY_P8: ${{ inputs.apple-notarization-key-p8 }}
APPLE_NOTARIZATION_KEY_ID: ${{ inputs.apple-notarization-key-id }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ inputs.apple-notarization-issuer-id }}
@@ -168,7 +163,7 @@ runs:
notarize_binary() {
local binary="$1"
local source_path="codex-rs/target/${TARGET}/release/${binary}"
local source_path="codex-rs/target/${{ inputs.target }}/release/${binary}"
local archive_path="${RUNNER_TEMP}/${binary}.zip"
if [[ ! -f "$source_path" ]]; then
@@ -189,7 +184,6 @@ runs:
if: ${{ inputs.sign-dmg == 'true' }}
shell: bash
env:
TARGET: ${{ inputs.target }}
APPLE_NOTARIZATION_KEY_P8: ${{ inputs.apple-notarization-key-p8 }}
APPLE_NOTARIZATION_KEY_ID: ${{ inputs.apple-notarization-key-id }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ inputs.apple-notarization-issuer-id }}
@@ -212,8 +206,7 @@ runs:
source "$GITHUB_ACTION_PATH/notary_helpers.sh"
dmg_name="codex-${TARGET}.dmg"
dmg_path="codex-rs/target/${TARGET}/release/${dmg_name}"
dmg_path="codex-rs/target/${{ inputs.target }}/release/codex-${{ inputs.target }}.dmg"
if [[ ! -f "$dmg_path" ]]; then
echo "dmg $dmg_path not found"
@@ -226,7 +219,7 @@ runs:
fi
codesign --force --timestamp --sign "$APPLE_CODESIGN_IDENTITY" "${keychain_args[@]}" "$dmg_path"
notarize_submission "$dmg_name" "$dmg_path" "$notary_key_path"
notarize_submission "codex-${{ inputs.target }}.dmg" "$dmg_path" "$notary_key_path"
xcrun stapler staple "$dmg_path"
- name: Remove signing keychain


@@ -1,8 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>com.apple.security.cs.allow-jit</key>
<true/>
</dict>
</plist>


@@ -1,133 +0,0 @@
name: setup-bazel-ci
description: Prepare a Bazel CI runner with shared caches and optional test prerequisites.
inputs:
target:
description: Target triple used for cache namespacing.
required: true
install-test-prereqs:
description: Install Node.js and DotSlash for Bazel-backed test jobs.
required: false
default: "false"
outputs:
cache-hit:
description: Whether the Bazel repository cache key was restored exactly.
value: ${{ steps.cache_bazel_repository_restore.outputs.cache-hit }}
runs:
using: composite
steps:
- name: Set up Node.js for js_repl tests
if: inputs.install-test-prereqs == 'true'
uses: actions/setup-node@v6
with:
node-version-file: codex-rs/node-version.txt
# Some integration tests rely on DotSlash being installed.
# See https://github.com/openai/codex/pull/7617.
- name: Install DotSlash
if: inputs.install-test-prereqs == 'true'
uses: facebook/install-dotslash@v2
- name: Make DotSlash available in PATH (Unix)
if: inputs.install-test-prereqs == 'true' && runner.os != 'Windows'
shell: bash
run: cp "$(which dotslash)" /usr/local/bin
- name: Make DotSlash available in PATH (Windows)
if: inputs.install-test-prereqs == 'true' && runner.os == 'Windows'
shell: pwsh
run: Copy-Item (Get-Command dotslash).Source -Destination "$env:LOCALAPPDATA\Microsoft\WindowsApps\dotslash.exe"
- name: Set up Bazel
uses: bazelbuild/setup-bazelisk@v3
# Restore bazel repository cache so we don't have to redownload all the external dependencies
# on every CI run.
- name: Restore bazel repository cache
id: cache_bazel_repository_restore
uses: actions/cache/restore@v5
with:
path: |
~/.cache/bazel-repo-cache
key: bazel-cache-${{ inputs.target }}-${{ hashFiles('MODULE.bazel', 'codex-rs/Cargo.lock', 'codex-rs/Cargo.toml') }}
restore-keys: |
bazel-cache-${{ inputs.target }}
- name: Configure Bazel output root (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
# Use the shortest available drive to reduce argv/path length issues,
# but avoid the drive root because some Windows test launchers mis-handle
# MANIFEST paths there.
$hasDDrive = Test-Path 'D:\'
$bazelOutputUserRoot = if ($hasDDrive) { 'D:\b' } else { 'C:\b' }
$repoContentsCache = Join-Path $env:RUNNER_TEMP "bazel-repo-contents-cache-$env:GITHUB_RUN_ID-$env:GITHUB_JOB"
"BAZEL_OUTPUT_USER_ROOT=$bazelOutputUserRoot" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
"BAZEL_REPO_CONTENTS_CACHE=$repoContentsCache" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
if (-not $hasDDrive) {
$repositoryCache = Join-Path $env:USERPROFILE '.cache\bazel-repo-cache'
"BAZEL_REPOSITORY_CACHE=$repositoryCache" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
}
- name: Expose MSVC SDK environment (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
# Bazel exec-side Rust build scripts do not reliably inherit the MSVC developer
# shell on GitHub-hosted Windows runners, so discover the latest VS install and
# ask `VsDevCmd.bat` to materialize the x64/x64 compiler + SDK environment.
$vswhere = "${env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\vswhere.exe"
if (-not (Test-Path $vswhere)) {
throw "vswhere.exe not found"
}
$installPath = & $vswhere -latest -products * -requires Microsoft.VisualStudio.Component.VC.Tools.x86.x64 -property installationPath 2>$null
if (-not $installPath) {
throw "Could not locate a Visual Studio installation with VC tools"
}
$vsDevCmd = Join-Path $installPath 'Common7\Tools\VsDevCmd.bat'
if (-not (Test-Path $vsDevCmd)) {
throw "VsDevCmd.bat not found at $vsDevCmd"
}
# Keep the export surface explicit: these are the paths and SDK roots that the
# MSVC toolchain probes need later when Bazel runs Windows exec-platform build
# scripts such as `aws-lc-sys`.
$varsToExport = @(
'INCLUDE',
'LIB',
'LIBPATH',
'PATH',
'UCRTVersion',
'UniversalCRTSdkDir',
'VCINSTALLDIR',
'VCToolsInstallDir',
'WindowsLibPath',
'WindowsSdkBinPath',
'WindowsSdkDir',
'WindowsSDKLibVersion',
'WindowsSDKVersion'
)
# `VsDevCmd.bat` is a batch file, so invoke it under `cmd.exe`, suppress its
# banner, then dump the resulting environment with `set`. Re-export only the
# approved keys into `GITHUB_ENV` so later steps inherit the same MSVC context.
$envLines = & cmd.exe /c ('"{0}" -no_logo -arch=x64 -host_arch=x64 >nul && set' -f $vsDevCmd)
foreach ($line in $envLines) {
if ($line -notmatch '^(.*?)=(.*)$') {
continue
}
$name = $matches[1]
$value = $matches[2]
if ($varsToExport -contains $name) {
"$name=$value" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
}
}
- name: Enable Git long paths (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: git config --global core.longpaths true


@@ -1,9 +0,0 @@
# Paths are matched exactly, relative to the repository root.
# Keep this list short and limited to intentional large checked-in assets.
.github/codex-cli-splash.png
MODULE.bazel.lock
codex-rs/app-server-protocol/schema/json/codex_app_server_protocol.schemas.json
codex-rs/app-server-protocol/schema/json/codex_app_server_protocol.v2.schemas.json
codex-rs/tui/tests/fixtures/oss-story.jsonl
codex-rs/tui_app_server/tests/fixtures/oss-story.jsonl


@@ -1,24 +0,0 @@
{
"outputs": {
"argument-comment-lint": {
"platforms": {
"macos-aarch64": {
"regex": "^argument-comment-lint-aarch64-apple-darwin\\.tar\\.gz$",
"path": "argument-comment-lint/bin/argument-comment-lint"
},
"linux-x86_64": {
"regex": "^argument-comment-lint-x86_64-unknown-linux-gnu\\.tar\\.gz$",
"path": "argument-comment-lint/bin/argument-comment-lint"
},
"linux-aarch64": {
"regex": "^argument-comment-lint-aarch64-unknown-linux-gnu\\.tar\\.gz$",
"path": "argument-comment-lint/bin/argument-comment-lint"
},
"windows-x86_64": {
"regex": "^argument-comment-lint-x86_64-pc-windows-msvc\\.zip$",
"path": "argument-comment-lint/bin/argument-comment-lint.exe"
}
}
}
}
}


@@ -1,23 +0,0 @@
{
"outputs": {
"codex-zsh": {
"platforms": {
"macos-aarch64": {
"name": "codex-zsh-aarch64-apple-darwin.tar.gz",
"format": "tar.gz",
"path": "codex-zsh/bin/zsh"
},
"linux-x86_64": {
"name": "codex-zsh-x86_64-unknown-linux-musl.tar.gz",
"format": "tar.gz",
"path": "codex-zsh/bin/zsh"
},
"linux-aarch64": {
"name": "codex-zsh-aarch64-unknown-linux-musl.tar.gz",
"format": "tar.gz",
"path": "codex-zsh/bin/zsh"
}
}
}
}
}


@@ -1,61 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
if [[ "$#" -ne 1 ]]; then
echo "usage: $0 <archive-path>" >&2
exit 1
fi
archive_path="$1"
workspace="${GITHUB_WORKSPACE:?missing GITHUB_WORKSPACE}"
zsh_commit="${ZSH_COMMIT:?missing ZSH_COMMIT}"
zsh_patch="${ZSH_PATCH:?missing ZSH_PATCH}"
temp_root="${RUNNER_TEMP:-/tmp}"
work_root="$(mktemp -d "${temp_root%/}/codex-zsh-release.XXXXXX")"
trap 'rm -rf "$work_root"' EXIT
source_root="${work_root}/zsh"
package_root="${work_root}/codex-zsh"
wrapper_path="${work_root}/exec-wrapper"
stdout_path="${work_root}/stdout.txt"
wrapper_log_path="${work_root}/wrapper.log"
git clone https://git.code.sf.net/p/zsh/code "$source_root"
cd "$source_root"
git checkout "$zsh_commit"
git apply "${workspace}/${zsh_patch}"
./Util/preconfig
./configure
cores="$(command -v nproc >/dev/null 2>&1 && nproc || getconf _NPROCESSORS_ONLN)"
make -j"${cores}"
cat > "$wrapper_path" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
: "${CODEX_WRAPPER_LOG:?missing CODEX_WRAPPER_LOG}"
printf '%s\n' "$@" > "$CODEX_WRAPPER_LOG"
file="$1"
shift
if [[ "$#" -eq 0 ]]; then
exec "$file"
fi
arg0="$1"
shift
exec -a "$arg0" "$file" "$@"
EOF
chmod +x "$wrapper_path"
CODEX_WRAPPER_LOG="$wrapper_log_path" \
EXEC_WRAPPER="$wrapper_path" \
"${source_root}/Src/zsh" -fc '/bin/echo smoke-zsh' > "$stdout_path"
grep -Fx "smoke-zsh" "$stdout_path"
grep -Fx "/bin/echo" "$wrapper_log_path"
mkdir -p "$package_root/bin" "$(dirname "${workspace}/${archive_path}")"
cp "${source_root}/Src/zsh" "$package_root/bin/zsh"
chmod +x "$package_root/bin/zsh"
(cd "$work_root" && tar -czf "${workspace}/${archive_path}" codex-zsh)


@@ -1,115 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
ci_config=ci-linux
case "${RUNNER_OS:-}" in
macOS)
ci_config=ci-macos
;;
Windows)
ci_config=ci-windows
;;
esac
bazel_lint_args=("$@")
if [[ "${RUNNER_OS:-}" == "Windows" ]]; then
has_host_platform_override=0
for arg in "${bazel_lint_args[@]}"; do
if [[ "$arg" == --host_platform=* ]]; then
has_host_platform_override=1
break
fi
done
if [[ $has_host_platform_override -eq 0 ]]; then
# The nightly Windows lint toolchain is registered with an MSVC exec
# platform even though the lint target platform stays on `windows-gnullvm`.
# Override the host platform here so the exec-side helper binaries actually
# match the registered toolchain set.
bazel_lint_args+=("--host_platform=//:local_windows_msvc")
fi
# Native Windows lint runs need exec-side Rust helper binaries and proc-macros
# to use rust-lld instead of the C++ linker path. The default `none`
# preference resolves to `cc` when a cc_toolchain is present, which currently
# routes these exec actions through clang++ with an argument shape it cannot
# consume.
bazel_lint_args+=("--@rules_rust//rust/settings:toolchain_linker_preference=rust")
# Some Rust top-level targets are still intentionally incompatible with the
# local Windows MSVC exec platform. Skip those explicit targets so the native
# lint aspect can run across the compatible crate graph instead of failing the
# whole build after analysis.
bazel_lint_args+=("--skip_incompatible_explicit_targets")
fi
bazel_startup_args=()
if [[ -n "${BAZEL_OUTPUT_USER_ROOT:-}" ]]; then
bazel_startup_args+=("--output_user_root=${BAZEL_OUTPUT_USER_ROOT}")
fi
run_bazel() {
if [[ "${RUNNER_OS:-}" == "Windows" ]]; then
MSYS2_ARG_CONV_EXCL='*' bazel "$@"
return
fi
bazel "$@"
}
run_bazel_with_startup_args() {
if [[ ${#bazel_startup_args[@]} -gt 0 ]]; then
run_bazel "${bazel_startup_args[@]}" "$@"
return
fi
run_bazel "$@"
}
read_query_labels() {
local query="$1"
local query_stdout
local query_stderr
query_stdout="$(mktemp)"
query_stderr="$(mktemp)"
if ! run_bazel_with_startup_args \
--noexperimental_remote_repo_contents_cache \
query \
--keep_going \
--output=label \
"$query" >"$query_stdout" 2>"$query_stderr"; then
cat "$query_stderr" >&2
rm -f "$query_stdout" "$query_stderr"
exit 1
fi
cat "$query_stdout"
rm -f "$query_stdout" "$query_stderr"
}
final_build_targets=(//codex-rs/...)
if [[ "${RUNNER_OS:-}" == "Windows" ]]; then
# Bazel's local Windows platform currently lacks a default test toolchain for
# `rust_test`, so target the concrete Rust crate rules directly. The lint
# aspect still walks their crate graph, which preserves incremental reuse for
# non-test code while avoiding non-Rust wrapper targets such as platform_data.
final_build_targets=()
while IFS= read -r label; do
[[ -n "$label" ]] || continue
final_build_targets+=("$label")
done < <(read_query_labels 'kind("rust_(library|binary|proc_macro) rule", //codex-rs/...)')
if [[ ${#final_build_targets[@]} -eq 0 ]]; then
echo "Failed to discover Windows Bazel lint targets." >&2
exit 1
fi
fi
./.github/scripts/run-bazel-ci.sh \
-- \
build \
"${bazel_lint_args[@]}" \
-- \
"${final_build_targets[@]}"


@@ -1,246 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
print_failed_bazel_test_logs=0
use_node_test_env=0
remote_download_toplevel=0
while [[ $# -gt 0 ]]; do
case "$1" in
--print-failed-test-logs)
print_failed_bazel_test_logs=1
shift
;;
--use-node-test-env)
use_node_test_env=1
shift
;;
--remote-download-toplevel)
remote_download_toplevel=1
shift
;;
--)
shift
break
;;
*)
echo "Unknown option: $1" >&2
exit 1
;;
esac
done
if [[ $# -eq 0 ]]; then
echo "Usage: $0 [--print-failed-test-logs] [--use-node-test-env] [--remote-download-toplevel] -- <bazel args> -- <targets>" >&2
exit 1
fi
bazel_startup_args=()
if [[ -n "${BAZEL_OUTPUT_USER_ROOT:-}" ]]; then
bazel_startup_args+=("--output_user_root=${BAZEL_OUTPUT_USER_ROOT}")
fi
run_bazel() {
if [[ "${RUNNER_OS:-}" == "Windows" ]]; then
MSYS2_ARG_CONV_EXCL='*' bazel "$@"
return
fi
bazel "$@"
}
ci_config=ci-linux
case "${RUNNER_OS:-}" in
macOS)
ci_config=ci-macos
;;
Windows)
ci_config=ci-windows
;;
esac
print_bazel_test_log_tails() {
local console_log="$1"
local testlogs_dir
local -a bazel_info_cmd=(bazel)
if (( ${#bazel_startup_args[@]} > 0 )); then
bazel_info_cmd+=("${bazel_startup_args[@]}")
fi
testlogs_dir="$(run_bazel "${bazel_info_cmd[@]:1}" info bazel-testlogs 2>/dev/null || echo bazel-testlogs)"
local failed_targets=()
while IFS= read -r target; do
failed_targets+=("$target")
done < <(
grep -E '^FAIL: //' "$console_log" \
| sed -E 's#^FAIL: (//[^ ]+).*#\1#' \
| sort -u
)
if [[ ${#failed_targets[@]} -eq 0 ]]; then
echo "No failed Bazel test targets were found in console output."
return
fi
for target in "${failed_targets[@]}"; do
local rel_path="${target#//}"
rel_path="${rel_path/:/\/}"
local test_log="${testlogs_dir}/${rel_path}/test.log"
echo "::group::Bazel test log tail for ${target}"
if [[ -f "$test_log" ]]; then
tail -n 200 "$test_log"
else
echo "Missing test log: $test_log"
fi
echo "::endgroup::"
done
}
bazel_args=()
bazel_targets=()
found_target_separator=0
for arg in "$@"; do
if [[ "$arg" == "--" && $found_target_separator -eq 0 ]]; then
found_target_separator=1
continue
fi
if [[ $found_target_separator -eq 0 ]]; then
bazel_args+=("$arg")
else
bazel_targets+=("$arg")
fi
done
if [[ ${#bazel_args[@]} -eq 0 || ${#bazel_targets[@]} -eq 0 ]]; then
echo "Expected Bazel args and targets separated by --" >&2
exit 1
fi
if [[ $use_node_test_env -eq 1 && "${RUNNER_OS:-}" != "Windows" ]]; then
# Bazel test sandboxes on macOS may resolve an older Homebrew `node`
# before the `actions/setup-node` runtime on PATH.
node_bin="$(which node)"
bazel_args+=("--test_env=CODEX_JS_REPL_NODE_PATH=${node_bin}")
fi
post_config_bazel_args=()
if [[ $remote_download_toplevel -eq 1 ]]; then
# Override the CI config's remote_download_minimal setting when callers need
# the built artifact to exist on disk after the command completes.
post_config_bazel_args+=(--remote_download_toplevel)
fi
if [[ -n "${BAZEL_REPO_CONTENTS_CACHE:-}" ]]; then
# Windows self-hosted runners can run multiple Bazel jobs concurrently. Give
# each job its own repo contents cache so they do not fight over the shared
# path configured in `ci-windows`.
post_config_bazel_args+=("--repo_contents_cache=${BAZEL_REPO_CONTENTS_CACHE}")
fi
if [[ -n "${BAZEL_REPOSITORY_CACHE:-}" ]]; then
post_config_bazel_args+=("--repository_cache=${BAZEL_REPOSITORY_CACHE}")
fi
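# Forward the MSVC toolchain environment into Bazel build and host actions on
# Windows so compile/link steps see the same INCLUDE/LIB/PATH as the runner shell.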
if [[ "${RUNNER_OS:-}" == "Windows" ]]; then
windows_action_env_vars=(
INCLUDE
LIB
LIBPATH
PATH
UCRTVersion
UniversalCRTSdkDir
VCINSTALLDIR
VCToolsInstallDir
WindowsLibPath
WindowsSdkBinPath
WindowsSdkDir
WindowsSDKLibVersion
WindowsSDKVersion
)
for env_var in "${windows_action_env_vars[@]}"; do
if [[ -n "${!env_var:-}" ]]; then
post_config_bazel_args+=("--action_env=${env_var}" "--host_action_env=${env_var}")
fi
done
fi
bazel_console_log="$(mktemp)"
trap 'rm -f "$bazel_console_log"' EXIT
bazel_cmd=(bazel)
if (( ${#bazel_startup_args[@]} > 0 )); then
bazel_cmd+=("${bazel_startup_args[@]}")
fi
if [[ -n "${BUILDBUDDY_API_KEY:-}" ]]; then
echo "BuildBuddy API key is available; using remote Bazel configuration."
# Work around Bazel 9 remote repo contents cache / overlay materialization failures
# seen in CI (for example "is not a symlink" or permission errors while
# materializing external repos such as rules_perl). We still use BuildBuddy for
# remote execution/cache; this only disables the startup-level repo contents cache.
bazel_run_args=(
"${bazel_args[@]}"
"--config=${ci_config}"
"--remote_header=x-buildbuddy-api-key=${BUILDBUDDY_API_KEY}"
)
if (( ${#post_config_bazel_args[@]} > 0 )); then
bazel_run_args+=("${post_config_bazel_args[@]}")
fi
set +e
run_bazel "${bazel_cmd[@]:1}" \
--noexperimental_remote_repo_contents_cache \
"${bazel_run_args[@]}" \
-- \
"${bazel_targets[@]}" \
2>&1 | tee "$bazel_console_log"
bazel_status=${PIPESTATUS[0]}
set -e
else
echo "BuildBuddy API key is not available; using local Bazel configuration."
# Keep fork/community PRs on Bazel but disable remote services that are
# configured in .bazelrc and require auth.
#
# Flag docs:
# - Command-line reference: https://bazel.build/reference/command-line-reference
# - Remote caching overview: https://bazel.build/remote/caching
# - Remote execution overview: https://bazel.build/remote/rbe
# - Build Event Protocol overview: https://bazel.build/remote/bep
#
# --noexperimental_remote_repo_contents_cache:
# disable remote repo contents cache enabled in .bazelrc startup options.
# https://bazel.build/reference/command-line-reference#startup_options-flag--experimental_remote_repo_contents_cache
# --remote_cache= and --remote_executor=:
# clear remote cache/execution endpoints configured in .bazelrc.
# https://bazel.build/reference/command-line-reference#common_options-flag--remote_cache
# https://bazel.build/reference/command-line-reference#common_options-flag--remote_executor
bazel_run_args=(
"${bazel_args[@]}"
--remote_cache=
--remote_executor=
)
if (( ${#post_config_bazel_args[@]} > 0 )); then
bazel_run_args+=("${post_config_bazel_args[@]}")
fi
set +e
run_bazel "${bazel_cmd[@]:1}" \
--noexperimental_remote_repo_contents_cache \
"${bazel_run_args[@]}" \
-- \
"${bazel_targets[@]}" \
2>&1 | tee "$bazel_console_log"
bazel_status=${PIPESTATUS[0]}
set -e
fi
if [[ ${bazel_status:-0} -ne 0 ]]; then
if [[ $print_failed_bazel_test_logs -eq 1 ]]; then
print_bazel_test_log_tails "$bazel_console_log"
fi
exit "$bazel_status"
fi


@@ -1,287 +0,0 @@
#!/usr/bin/env python3
from __future__ import annotations
import argparse
import gzip
import re
import shutil
import subprocess
import sys
import tempfile
import tomllib
from pathlib import Path
ROOT = Path(__file__).resolve().parents[2]
MUSL_RUNTIME_ARCHIVE_LABELS = [
"@llvm//runtimes/libcxx:libcxx.static",
"@llvm//runtimes/libcxx:libcxxabi.static",
]
LLVM_AR_LABEL = "@llvm//tools:llvm-ar"
LLVM_RANLIB_LABEL = "@llvm//tools:llvm-ranlib"
def bazel_execroot() -> Path:
result = subprocess.run(
["bazel", "info", "execution_root"],
cwd=ROOT,
check=True,
capture_output=True,
text=True,
)
return Path(result.stdout.strip())
def bazel_output_base() -> Path:
result = subprocess.run(
["bazel", "info", "output_base"],
cwd=ROOT,
check=True,
capture_output=True,
text=True,
)
return Path(result.stdout.strip())
def bazel_output_path(path: str) -> Path:
if path.startswith("external/"):
return bazel_output_base() / path
return bazel_execroot() / path
def bazel_output_files(
platform: str,
labels: list[str],
compilation_mode: str = "fastbuild",
) -> list[Path]:
expression = "set(" + " ".join(labels) + ")"
result = subprocess.run(
[
"bazel",
"cquery",
"-c",
compilation_mode,
f"--platforms=@llvm//platforms:{platform}",
"--output=files",
expression,
],
cwd=ROOT,
check=True,
capture_output=True,
text=True,
)
return [bazel_output_path(line.strip()) for line in result.stdout.splitlines() if line.strip()]
def bazel_build(
platform: str,
labels: list[str],
compilation_mode: str = "fastbuild",
) -> None:
subprocess.run(
[
"bazel",
"build",
"-c",
compilation_mode,
f"--platforms=@llvm//platforms:{platform}",
*labels,
],
cwd=ROOT,
check=True,
)
def ensure_bazel_output_files(
platform: str,
labels: list[str],
compilation_mode: str = "fastbuild",
) -> list[Path]:
outputs = bazel_output_files(platform, labels, compilation_mode)
if all(path.exists() for path in outputs):
return outputs
bazel_build(platform, labels, compilation_mode)
outputs = bazel_output_files(platform, labels, compilation_mode)
missing = [str(path) for path in outputs if not path.exists()]
if missing:
raise SystemExit(f"missing built outputs for {labels}: {missing}")
return outputs
def release_pair_label(target: str) -> str:
target_suffix = target.replace("-", "_")
return f"//third_party/v8:rusty_v8_release_pair_{target_suffix}"
def resolved_v8_crate_version() -> str:
cargo_lock = tomllib.loads((ROOT / "codex-rs" / "Cargo.lock").read_text())
versions = sorted(
{
package["version"]
for package in cargo_lock["package"]
if package["name"] == "v8"
}
)
if len(versions) == 1:
return versions[0]
if len(versions) > 1:
raise SystemExit(f"expected exactly one resolved v8 version, found: {versions}")
module_bazel = (ROOT / "MODULE.bazel").read_text()
matches = sorted(
set(
re.findall(
r'https://static\.crates\.io/crates/v8/v8-([0-9]+\.[0-9]+\.[0-9]+)\.crate',
module_bazel,
)
)
)
if len(matches) != 1:
raise SystemExit(
"expected exactly one pinned v8 crate version in MODULE.bazel, "
f"found: {matches}"
)
return matches[0]
def staged_archive_name(target: str, source_path: Path) -> str:
if source_path.suffix == ".lib":
return f"rusty_v8_release_{target}.lib.gz"
return f"librusty_v8_release_{target}.a.gz"
def is_musl_archive_target(target: str, source_path: Path) -> bool:
return target.endswith("-unknown-linux-musl") and source_path.suffix == ".a"
def single_bazel_output_file(
platform: str,
label: str,
compilation_mode: str = "fastbuild",
) -> Path:
outputs = ensure_bazel_output_files(platform, [label], compilation_mode)
if len(outputs) != 1:
raise SystemExit(f"expected exactly one output for {label}, found {outputs}")
return outputs[0]
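# Merge the rusty_v8 static library with the prebuilt libc++/libc++abi archives
# via an llvm-ar MRI script so musl consumers link a self-contained archive.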
def merged_musl_archive(
platform: str,
lib_path: Path,
compilation_mode: str = "fastbuild",
) -> Path:
llvm_ar = single_bazel_output_file(platform, LLVM_AR_LABEL, compilation_mode)
llvm_ranlib = single_bazel_output_file(platform, LLVM_RANLIB_LABEL, compilation_mode)
runtime_archives = [
single_bazel_output_file(platform, label, compilation_mode)
for label in MUSL_RUNTIME_ARCHIVE_LABELS
]
temp_dir = Path(tempfile.mkdtemp(prefix="rusty-v8-musl-stage-"))
merged_archive = temp_dir / lib_path.name
merge_commands = "\n".join(
[
f"create {merged_archive}",
f"addlib {lib_path}",
*[f"addlib {archive}" for archive in runtime_archives],
"save",
"end",
]
)
subprocess.run(
[str(llvm_ar), "-M"],
cwd=ROOT,
check=True,
input=merge_commands,
text=True,
)
subprocess.run([str(llvm_ranlib), str(merged_archive)], cwd=ROOT, check=True)
return merged_archive
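# Build (if needed) and stage the release pair for a target: the static library
# is gzipped with mtime=0 and an empty filename so the staged bytes are
# reproducible, and the generated Rust binding file is copied alongside it.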
def stage_release_pair(
platform: str,
target: str,
output_dir: Path,
compilation_mode: str = "fastbuild",
) -> None:
outputs = ensure_bazel_output_files(
platform,
[release_pair_label(target)],
compilation_mode,
)
try:
lib_path = next(path for path in outputs if path.suffix in {".a", ".lib"})
except StopIteration as exc:
raise SystemExit(f"missing static library output for {target}") from exc
try:
binding_path = next(path for path in outputs if path.suffix == ".rs")
except StopIteration as exc:
raise SystemExit(f"missing Rust binding output for {target}") from exc
output_dir.mkdir(parents=True, exist_ok=True)
staged_library = output_dir / staged_archive_name(target, lib_path)
staged_binding = output_dir / f"src_binding_release_{target}.rs"
source_archive = (
merged_musl_archive(platform, lib_path, compilation_mode)
if is_musl_archive_target(target, lib_path)
else lib_path
)
with source_archive.open("rb") as src, staged_library.open("wb") as dst:
with gzip.GzipFile(
filename="",
mode="wb",
fileobj=dst,
compresslevel=6,
mtime=0,
) as gz:
shutil.copyfileobj(src, gz)
shutil.copyfile(binding_path, staged_binding)
print(staged_library)
print(staged_binding)
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest="command", required=True)
stage_release_pair_parser = subparsers.add_parser("stage-release-pair")
stage_release_pair_parser.add_argument("--platform", required=True)
stage_release_pair_parser.add_argument("--target", required=True)
stage_release_pair_parser.add_argument("--output-dir", required=True)
stage_release_pair_parser.add_argument(
"--compilation-mode",
default="fastbuild",
choices=["fastbuild", "opt", "dbg"],
)
subparsers.add_parser("resolved-v8-crate-version")
return parser.parse_args()
def main() -> int:
args = parse_args()
if args.command == "stage-release-pair":
stage_release_pair(
platform=args.platform,
target=args.target,
output_dir=Path(args.output_dir),
compilation_mode=args.compilation_mode,
)
return 0
if args.command == "resolved-v8-crate-version":
print(resolved_v8_crate_version())
return 0
raise SystemExit(f"unsupported command: {args.command}")
if __name__ == "__main__":
sys.exit(main())


@@ -1,234 +0,0 @@
#!/usr/bin/env python3
from __future__ import annotations
import argparse
import re
import sys
import tomllib
from pathlib import Path
ROOT = Path(__file__).resolve().parents[2]
DEFAULT_CARGO_TOML = ROOT / "codex-rs" / "Cargo.toml"
DEFAULT_BAZELRC = ROOT / ".bazelrc"
BAZEL_CLIPPY_FLAG_PREFIX = "build:clippy --@rules_rust//rust/settings:clippy_flag="
BAZEL_SPECIAL_FLAGS = {"-Dwarnings"}
VALID_LEVELS = {"allow", "warn", "deny", "forbid"}
LONG_FLAG_RE = re.compile(
r"^--(?P<level>allow|warn|deny|forbid)=clippy::(?P<lint>[a-z0-9_]+)$"
)
SHORT_FLAG_RE = re.compile(r"^-(?P<level>[AWDF])clippy::(?P<lint>[a-z0-9_]+)$")
SHORT_LEVEL_NAMES = {
"A": "allow",
"W": "warn",
"D": "deny",
"F": "forbid",
}
def main() -> int:
parser = argparse.ArgumentParser(
description=(
"Verify that Bazel clippy flags in .bazelrc stay in sync with "
"codex-rs/Cargo.toml [workspace.lints.clippy]."
)
)
parser.add_argument(
"--cargo-toml",
type=Path,
default=DEFAULT_CARGO_TOML,
help="Path to the workspace Cargo.toml to inspect.",
)
parser.add_argument(
"--bazelrc",
type=Path,
default=DEFAULT_BAZELRC,
help="Path to the .bazelrc file to inspect.",
)
args = parser.parse_args()
cargo_toml = args.cargo_toml.resolve()
bazelrc = args.bazelrc.resolve()
cargo_lints = load_workspace_clippy_lints(cargo_toml)
bazel_lints = load_bazel_clippy_lints(bazelrc)
missing = sorted(cargo_lints.keys() - bazel_lints.keys())
extra = sorted(bazel_lints.keys() - cargo_lints.keys())
mismatched = sorted(
lint
for lint in cargo_lints.keys() & bazel_lints.keys()
if cargo_lints[lint] != bazel_lints[lint]
)
if missing or extra or mismatched:
print_sync_error(
cargo_toml=cargo_toml,
bazelrc=bazelrc,
cargo_lints=cargo_lints,
bazel_lints=bazel_lints,
missing=missing,
extra=extra,
mismatched=mismatched,
)
return 1
print(
"Bazel clippy flags in "
f"{display_path(bazelrc)} match "
f"{display_path(cargo_toml)} [workspace.lints.clippy]."
)
return 0
def load_workspace_clippy_lints(cargo_toml: Path) -> dict[str, str]:
workspace = tomllib.loads(cargo_toml.read_text())["workspace"]
clippy_lints = workspace["lints"]["clippy"]
parsed: dict[str, str] = {}
for lint, level in clippy_lints.items():
if not isinstance(level, str):
raise SystemExit(
f"expected string lint level for clippy::{lint} in {cargo_toml}, got {level!r}"
)
normalized = level.strip().lower()
if normalized not in VALID_LEVELS:
raise SystemExit(
f"unsupported lint level {level!r} for clippy::{lint} in {cargo_toml}"
)
parsed[lint] = normalized
return parsed
def load_bazel_clippy_lints(bazelrc: Path) -> dict[str, str]:
parsed: dict[str, str] = {}
line_numbers: dict[str, int] = {}
for lineno, line in enumerate(bazelrc.read_text().splitlines(), start=1):
if not line.startswith(BAZEL_CLIPPY_FLAG_PREFIX):
continue
flag = line.removeprefix(BAZEL_CLIPPY_FLAG_PREFIX).strip()
if flag in BAZEL_SPECIAL_FLAGS:
continue
parsed_flag = parse_bazel_lint_flag(flag)
if parsed_flag is None:
continue
lint, level = parsed_flag
if lint in parsed:
raise SystemExit(
f"duplicate Bazel clippy entry for clippy::{lint} at "
f"{bazelrc}:{line_numbers[lint]} and {bazelrc}:{lineno}"
)
parsed[lint] = level
line_numbers[lint] = lineno
return parsed
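# Accept both flag spellings: `--deny=clippy::lint` and the short `-Dclippy::lint`.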
def parse_bazel_lint_flag(flag: str) -> tuple[str, str] | None:
long_match = LONG_FLAG_RE.match(flag)
if long_match:
return long_match["lint"], long_match["level"]
short_match = SHORT_FLAG_RE.match(flag)
if short_match:
return short_match["lint"], SHORT_LEVEL_NAMES[short_match["level"]]
return None
def print_sync_error(
*,
cargo_toml: Path,
bazelrc: Path,
cargo_lints: dict[str, str],
bazel_lints: dict[str, str],
missing: list[str],
extra: list[str],
mismatched: list[str],
) -> None:
cargo_toml_display = display_path(cargo_toml)
bazelrc_display = display_path(bazelrc)
example_manifest = find_workspace_lints_example_manifest()
print(
"ERROR: Bazel clippy flags are out of sync with Cargo workspace clippy lints.",
file=sys.stderr,
)
print(file=sys.stderr)
print(
f"Cargo defines the source of truth in {cargo_toml_display} "
"[workspace.lints.clippy].",
file=sys.stderr,
)
if example_manifest is not None:
print(
"Cargo applies those lint levels to member crates that opt into "
f"`[lints] workspace = true`, for example {example_manifest}.",
file=sys.stderr,
)
print(
"Bazel clippy does not ingest Cargo lint levels automatically, and "
"`clippy.toml` can configure lint behavior but cannot set allow/warn/deny/forbid.",
file=sys.stderr,
)
print(
f"Update {bazelrc_display} so its `build:clippy` "
"`clippy_flag` entries match Cargo.",
file=sys.stderr,
)
if missing:
print(file=sys.stderr)
print("Missing Bazel entries:", file=sys.stderr)
for lint in missing:
print(f" {render_bazelrc_line(lint, cargo_lints[lint])}", file=sys.stderr)
if mismatched:
print(file=sys.stderr)
print("Mismatched lint levels:", file=sys.stderr)
for lint in mismatched:
cargo_level = cargo_lints[lint]
bazel_level = bazel_lints[lint]
print(
f" clippy::{lint}: Cargo has {cargo_level}, Bazel has {bazel_level}",
file=sys.stderr,
)
print(
f" expected: {render_bazelrc_line(lint, cargo_level)}",
file=sys.stderr,
)
if extra:
print(file=sys.stderr)
print("Extra Bazel entries with no Cargo counterpart:", file=sys.stderr)
for lint in extra:
print(f" {render_bazelrc_line(lint, bazel_lints[lint])}", file=sys.stderr)
def render_bazelrc_line(lint: str, level: str) -> str:
return f"{BAZEL_CLIPPY_FLAG_PREFIX}--{level}=clippy::{lint}"
def display_path(path: Path) -> str:
try:
return str(path.relative_to(ROOT))
except ValueError:
return str(path)
def find_workspace_lints_example_manifest() -> str | None:
for cargo_toml in sorted((ROOT / "codex-rs").glob("**/Cargo.toml")):
if cargo_toml == DEFAULT_CARGO_TOML:
continue
data = tomllib.loads(cargo_toml.read_text())
if data.get("lints", {}).get("workspace") is True:
return str(cargo_toml.relative_to(ROOT))
return None
if __name__ == "__main__":
sys.exit(main())


@@ -1,125 +0,0 @@
#!/usr/bin/env python3
"""Verify that codex-rs crates inherit workspace metadata, lints, and names.
This keeps `cargo clippy` aligned with the workspace lint policy by ensuring
each crate opts into `[lints] workspace = true`, and it also checks the crate
name conventions for top-level `codex-rs/*` crates and `codex-rs/utils/*`
crates.
"""
from __future__ import annotations
import sys
import tomllib
from pathlib import Path
ROOT = Path(__file__).resolve().parents[2]
CARGO_RS_ROOT = ROOT / "codex-rs"
WORKSPACE_PACKAGE_FIELDS = ("version", "edition", "license")
TOP_LEVEL_NAME_EXCEPTIONS = {
"windows-sandbox-rs": "codex-windows-sandbox",
}
UTILITY_NAME_EXCEPTIONS = {
"path-utils": "codex-utils-path",
}
def main() -> int:
failures = [
(path.relative_to(ROOT), errors)
for path in cargo_manifests()
if (errors := manifest_errors(path))
]
if not failures:
return 0
print(
"Cargo manifests under codex-rs must inherit workspace package metadata and "
"opt into workspace lints."
)
print(
"Cargo only applies `codex-rs/Cargo.toml` `[workspace.lints.clippy]` "
"entries to a crate when that crate declares:"
)
print()
print("[lints]")
print("workspace = true")
print()
print(
"Without that opt-in, `cargo clippy` can miss violations that Bazel clippy "
"catches."
)
print()
print(
"Package-name checks apply to `codex-rs/<crate>/Cargo.toml` and "
"`codex-rs/utils/<crate>/Cargo.toml`."
)
print()
for path, errors in failures:
print(f"{path}:")
for error in errors:
print(f" - {error}")
return 1
def manifest_errors(path: Path) -> list[str]:
manifest = load_manifest(path)
package = manifest.get("package")
if not isinstance(package, dict):
return []
errors = []
for field in WORKSPACE_PACKAGE_FIELDS:
if not is_workspace_reference(package.get(field)):
errors.append(f"set `{field}.workspace = true` in `[package]`")
lints = manifest.get("lints")
if not (isinstance(lints, dict) and lints.get("workspace") is True):
errors.append("add `[lints]` with `workspace = true`")
expected_name = expected_package_name(path)
if expected_name is not None:
actual_name = package.get("name")
if actual_name != expected_name:
errors.append(
f"set `[package].name` to `{expected_name}` (found `{actual_name}`)"
)
return errors
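# Top-level `codex-rs/<dir>` crates must be named `codex-<dir>` (directories
# already prefixed with `codex-` keep their name), and `codex-rs/utils/<dir>`
# crates must be named `codex-utils-<dir>`, modulo the exception tables above.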
def expected_package_name(path: Path) -> str | None:
parts = path.relative_to(CARGO_RS_ROOT).parts
if len(parts) == 2 and parts[1] == "Cargo.toml":
directory = parts[0]
return TOP_LEVEL_NAME_EXCEPTIONS.get(
directory,
directory if directory.startswith("codex-") else f"codex-{directory}",
)
if len(parts) == 3 and parts[0] == "utils" and parts[2] == "Cargo.toml":
directory = parts[1]
return UTILITY_NAME_EXCEPTIONS.get(directory, f"codex-utils-{directory}")
return None
def is_workspace_reference(value: object) -> bool:
return isinstance(value, dict) and value.get("workspace") is True
def load_manifest(path: Path) -> dict:
return tomllib.loads(path.read_text())
def cargo_manifests() -> list[Path]:
return sorted(
path
for path in CARGO_RS_ROOT.rglob("Cargo.toml")
if path != CARGO_RS_ROOT / "Cargo.toml"
)
if __name__ == "__main__":
sys.exit(main())


@@ -1,33 +0,0 @@
# Workflow Strategy
The workflows in this directory are split so that pull requests get fast, review-friendly signal while `main` still gets the full cross-platform verification pass.
## Pull Requests
- `bazel.yml` is the main pre-merge verification path for Rust code.
It runs Bazel `test` and Bazel `clippy` on the supported Bazel targets.
- `rust-ci.yml` keeps the Cargo-native PR checks intentionally small:
- `cargo fmt --check`
- `cargo shear`
- `argument-comment-lint` on Linux, macOS, and Windows
- `tools/argument-comment-lint` package tests when the lint or its workflow wiring changes
For now, the PR workflow keeps the Linux lint lane on the default-targets-only invocation, but the released linter still runs on Linux, macOS, and Windows before merge.
## Post-Merge On `main`
- `bazel.yml` also runs on pushes to `main`.
This re-verifies the merged Bazel path and helps keep the BuildBuddy caches warm.
- `rust-ci-full.yml` is the full Cargo-native verification workflow.
It keeps the heavier checks off the PR path while still validating them after merge:
- the full Cargo `clippy` matrix
- the full Cargo `nextest` matrix
- release-profile Cargo builds
- cross-platform `argument-comment-lint`
- Linux remote-env tests
## Rule Of Thumb
- If a build/test/clippy check can be expressed in Bazel, prefer putting the PR-time version in `bazel.yml`.
- Keep `rust-ci.yml` fast enough that it usually does not dominate PR latency.
- Reserve `rust-ci-full.yml` for heavyweight Cargo-native coverage that Bazel does not replace yet.
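Both lanes funnel Bazel invocations through `./.github/scripts/run-bazel-ci.sh`, which expects script options, Bazel args, and target patterns separated by `--`. A minimal sketch of the PR-time test lane, using flags taken from `bazel.yml` (the exact flag set varies by job):

```bash
# Script options first, then Bazel args, then target patterns, split by `--`.
./.github/scripts/run-bazel-ci.sh \
  --print-failed-test-logs \
  --use-node-test-env \
  -- \
  test \
  --test_verbose_timeout_warnings \
  -- \
  //... -//third_party/v8:all
```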


@@ -1,4 +1,4 @@
name: Bazel
name: Bazel (experimental)
# Note this workflow was originally derived from:
# https://github.com/cerisier/toolchains_llvm_bootstrapped/blob/main/.github/workflows/ci.yaml
@@ -17,7 +17,6 @@ concurrency:
cancel-in-progress: ${{ github.ref_name != 'main' }}
jobs:
test:
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
@@ -40,116 +39,183 @@ jobs:
# - os: ubuntu-24.04-arm
# target: aarch64-unknown-linux-gnu
# Windows
- os: windows-latest
target: x86_64-pc-windows-gnullvm
# TODO: Enable Windows once we fix the toolchain issues there.
#- os: windows-latest
# target: x86_64-pc-windows-gnullvm
runs-on: ${{ matrix.os }}
# Configure a human-readable name for each job
name: Local Bazel build on ${{ matrix.os }} for ${{ matrix.target }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: actions/checkout@v6
- name: Set up Bazel CI
id: setup_bazel
uses: ./.github/actions/setup-bazel-ci
- name: Set up Node.js for js_repl tests
uses: actions/setup-node@v6
with:
target: ${{ matrix.target }}
install-test-prereqs: "true"
node-version-file: codex-rs/node-version.txt
# Some integration tests rely on DotSlash being installed.
# See https://github.com/openai/codex/pull/7617.
- name: Install DotSlash
uses: facebook/install-dotslash@v2
- name: Make DotSlash available in PATH (Unix)
if: runner.os != 'Windows'
run: cp "$(which dotslash)" /usr/local/bin
- name: Make DotSlash available in PATH (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: Copy-Item (Get-Command dotslash).Source -Destination "$env:LOCALAPPDATA\Microsoft\WindowsApps\dotslash.exe"
# Install Bazel via Bazelisk
- name: Set up Bazel
uses: bazelbuild/setup-bazelisk@v3
- name: Check MODULE.bazel.lock is up to date
if: matrix.os == 'ubuntu-24.04' && matrix.target == 'x86_64-unknown-linux-gnu'
shell: bash
run: ./scripts/check-module-bazel-lock.sh
# TODO(mbolin): Bring this back once we have caching working. Currently,
# we never seem to get a cache hit but we still end up paying the cost of
# uploading at the end of the build, which takes over a minute!
#
# Cache build and external artifacts so that the next ci build is incremental.
# Because GitHub Actions caches cannot be updated after a build, we need to
# store the contents of each build under a unique cache key, then fall back to
# loading it on the next CI run. We use hashFiles(...) in the key and
# restore-keys with the key prefix to load the most recent cache for the branch on a cache miss. You
# should customize the contents of hashFiles to capture any bazel input sources,
# although this doesn't need to be perfect. If none of the input sources change
# then a cache hit will load an existing cache and bazel won't have to do any work.
# In the case of a cache miss, you want the fallback cache to contain most of the
# previously built artifacts to minimize build time. The more precise you are with
# hashFiles sources the less work bazel will have to do.
# - name: Mount bazel caches
# uses: actions/cache@v5
# with:
# path: |
# ~/.cache/bazel-repo-cache
# ~/.cache/bazel-repo-contents-cache
# key: bazel-cache-${{ matrix.os }}-${{ hashFiles('**/BUILD.bazel', '**/*.bzl', 'MODULE.bazel') }}
# restore-keys: |
# bazel-cache-${{ matrix.os }}
- name: Configure Bazel startup args (Windows)
if: runner.os == 'Windows'
shell: pwsh
run: |
# Use a very short path to reduce argv/path length issues.
"BAZEL_STARTUP_ARGS=--output_user_root=C:\" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
- name: bazel test //...
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
bazel_targets=(
set -o pipefail
bazel_console_log="$(mktemp)"
print_failed_bazel_test_logs() {
local console_log="$1"
local testlogs_dir
testlogs_dir="$(bazel $BAZEL_STARTUP_ARGS info bazel-testlogs 2>/dev/null || echo bazel-testlogs)"
local failed_targets=()
while IFS= read -r target; do
failed_targets+=("$target")
done < <(
grep -E '^FAIL: //' "$console_log" \
| sed -E 's#^FAIL: (//[^ ]+).*#\1#' \
| sort -u
)
if [[ ${#failed_targets[@]} -eq 0 ]]; then
echo "No failed Bazel test targets were found in console output."
return
fi
for target in "${failed_targets[@]}"; do
local rel_path="${target#//}"
rel_path="${rel_path/:/\/}"
local test_log="${testlogs_dir}/${rel_path}/test.log"
echo "::group::Bazel test log tail for ${target}"
if [[ -f "$test_log" ]]; then
tail -n 200 "$test_log"
else
echo "Missing test log: $test_log"
fi
echo "::endgroup::"
done
}
bazel_args=(
test
//...
# Keep standalone V8 library targets out of the ordinary Bazel CI
# path. V8 consumers under `//codex-rs/...` still participate
# transitively through `//...`.
-//third_party/v8:all
--test_verbose_timeout_warnings
--build_metadata=REPO_URL=https://github.com/openai/codex.git
--build_metadata=COMMIT_SHA=$(git rev-parse HEAD)
--build_metadata=ROLE=CI
--build_metadata=VISIBILITY=PUBLIC
)
./.github/scripts/run-bazel-ci.sh \
--print-failed-test-logs \
--use-node-test-env \
-- \
test \
--test_tag_filters=-argument-comment-lint \
--test_verbose_timeout_warnings \
--build_metadata=COMMIT_SHA=${GITHUB_SHA} \
-- \
"${bazel_targets[@]}"
if [[ "${RUNNER_OS:-}" != "Windows" ]]; then
# Bazel test sandboxes on macOS may resolve an older Homebrew `node`
# before the `actions/setup-node` runtime on PATH.
node_bin="$(which node)"
bazel_args+=("--test_env=CODEX_JS_REPL_NODE_PATH=${node_bin}")
fi
# Save bazel repository cache explicitly; make non-fatal so cache uploading
# never fails the overall job. Only save when key wasn't hit.
- name: Save bazel repository cache
if: always() && !cancelled() && steps.setup_bazel.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cache/bazel-repo-cache
key: bazel-cache-${{ matrix.target }}-${{ hashFiles('MODULE.bazel', 'codex-rs/Cargo.lock', 'codex-rs/Cargo.toml') }}
if [[ -n "${BUILDBUDDY_API_KEY:-}" ]]; then
echo "BuildBuddy API key is available; using remote Bazel configuration."
# Work around Bazel 9 remote repo contents cache / overlay materialization failures
# seen in CI (for example "is not a symlink" or permission errors while
# materializing external repos such as rules_perl). We still use BuildBuddy for
# remote execution/cache; this only disables the startup-level repo contents cache.
set +e
bazel $BAZEL_STARTUP_ARGS \
--noexperimental_remote_repo_contents_cache \
--bazelrc=.github/workflows/ci.bazelrc \
"${bazel_args[@]}" \
"--remote_header=x-buildbuddy-api-key=$BUILDBUDDY_API_KEY" \
2>&1 | tee "$bazel_console_log"
bazel_status=${PIPESTATUS[0]}
set -e
else
echo "BuildBuddy API key is not available; using local Bazel configuration."
# Keep fork/community PRs on Bazel but disable remote services that are
# configured in .bazelrc and require auth.
#
# Flag docs:
# - Command-line reference: https://bazel.build/reference/command-line-reference
# - Remote caching overview: https://bazel.build/remote/caching
# - Remote execution overview: https://bazel.build/remote/rbe
# - Build Event Protocol overview: https://bazel.build/remote/bep
#
# --noexperimental_remote_repo_contents_cache:
# disable remote repo contents cache enabled in .bazelrc startup options.
# https://bazel.build/reference/command-line-reference#startup_options-flag--experimental_remote_repo_contents_cache
# --remote_cache= and --remote_executor=:
# clear remote cache/execution endpoints configured in .bazelrc.
# https://bazel.build/reference/command-line-reference#common_options-flag--remote_cache
# https://bazel.build/reference/command-line-reference#common_options-flag--remote_executor
set +e
bazel $BAZEL_STARTUP_ARGS \
--noexperimental_remote_repo_contents_cache \
"${bazel_args[@]}" \
--remote_cache= \
--remote_executor= \
2>&1 | tee "$bazel_console_log"
bazel_status=${PIPESTATUS[0]}
set -e
fi
clippy:
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
include:
# Keep Linux lint coverage on x64 and add the arm64 macOS path that
# the Bazel test job already exercises. Add Windows gnullvm as well
# so PRs get Bazel-native lint signal on the same Windows toolchain
# that the Bazel test job uses.
- os: ubuntu-24.04
target: x86_64-unknown-linux-gnu
- os: macos-15-xlarge
target: aarch64-apple-darwin
- os: windows-latest
target: x86_64-pc-windows-gnullvm
runs-on: ${{ matrix.os }}
name: Bazel clippy on ${{ matrix.os }} for ${{ matrix.target }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Bazel CI
id: setup_bazel
uses: ./.github/actions/setup-bazel-ci
with:
target: ${{ matrix.target }}
- name: bazel build --config=clippy //codex-rs/...
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
# Keep the initial Bazel clippy scope on codex-rs and out of the
# V8 proof-of-concept target for now.
./.github/scripts/run-bazel-ci.sh \
-- \
build \
--config=clippy \
--build_metadata=COMMIT_SHA=${GITHUB_SHA} \
--build_metadata=TAG_job=clippy \
-- \
//codex-rs/... \
-//codex-rs/v8-poc:all
# Save bazel repository cache explicitly; make non-fatal so cache uploading
# never fails the overall job. Only save when key wasn't hit.
- name: Save bazel repository cache
if: always() && !cancelled() && steps.setup_bazel.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cache/bazel-repo-cache
key: bazel-cache-${{ matrix.target }}-${{ hashFiles('MODULE.bazel', 'codex-rs/Cargo.lock', 'codex-rs/Cargo.toml') }}
if [[ ${bazel_status:-0} -ne 0 ]]; then
print_failed_bazel_test_logs "$bazel_console_log"
exit "$bazel_status"
fi


@@ -1,32 +0,0 @@
name: blob-size-policy
on:
pull_request: {}
jobs:
check:
name: Blob size policy
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
with:
fetch-depth: 0
- name: Determine PR comparison range
id: range
shell: bash
run: |
set -euo pipefail
echo "base=$(git rev-parse HEAD^1)" >> "$GITHUB_OUTPUT"
echo "head=$(git rev-parse HEAD^2)" >> "$GITHUB_OUTPUT"
- name: Check changed blob sizes
env:
BASE_SHA: ${{ steps.range.outputs.base }}
HEAD_SHA: ${{ steps.range.outputs.head }}
run: |
python3 scripts/check_blob_size.py \
--base "$BASE_SHA" \
--head "$HEAD_SHA" \
--max-bytes 512000 \
--allowlist .github/blob-size-allowlist.txt


@@ -14,13 +14,13 @@ jobs:
working-directory: ./codex-rs
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
uses: actions/checkout@v6
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
uses: dtolnay/rust-toolchain@stable
- name: Run cargo-deny
uses: EmbarkStudios/cargo-deny-action@82eb9f621fbc699dd0918f3ea06864c14cc84246 # v2
uses: EmbarkStudios/cargo-deny-action@v2
with:
rust-version: stable
manifest-path: ./codex-rs/Cargo.toml

.github/workflows/ci.bazelrc

@@ -0,0 +1,27 @@
common --remote_download_minimal
common --keep_going
common --verbose_failures
# Disable the disk cache since we have a remote one and aren't using persistent workers.
common --disk_cache=
# Rearrange caches on Windows so they're on the same volume as the checkout.
common:windows --repo_contents_cache=D:/a/.cache/bazel-repo-contents-cache
common:windows --repository_cache=D:/a/.cache/bazel-repo-cache
# We prefer to run the build actions entirely remotely so we can dial up the concurrency.
# We have platform-specific tests, so we execute them on every platform using the strongest sandboxing available on each.
# On Linux, we can do a full remote build/test by targeting the right (x86/arm) runners, so we have coverage of both.
# Linux cross-builds don't work until we untangle the libc constraint mess.
common:linux --config=remote
common:linux --strategy=remote
common:linux --platforms=//:rbe
# On macOS, we can run all the build actions remotely but run test actions locally.
common:macos --config=remote
common:macos --strategy=remote
common:macos --strategy=TestRunner=darwin-sandbox,local
# On Windows we cannot cross-build the tests due to what appears to be a Bazel bug
# (Windows vs. Unix path confusion), so we build and run them locally.


@@ -12,21 +12,15 @@ jobs:
NODE_OPTIONS: --max-old-space-size=4096
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Verify codex-rs Cargo manifests inherit workspace settings
run: python3 .github/scripts/verify_cargo_workspace_manifests.py
- name: Verify Bazel clippy flags match Cargo workspace lints
run: python3 .github/scripts/verify_bazel_clippy_lints.py
uses: actions/checkout@v6
- name: Setup pnpm
uses: pnpm/action-setup@a8198c4bff370c8506180b035930dea56dbd5288 # v5
uses: pnpm/action-setup@v4
with:
run_install: false
- name: Setup Node.js
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
uses: actions/setup-node@v6
with:
node-version: 22
@@ -34,7 +28,7 @@ jobs:
run: pnpm install --frozen-lockfile
# stage_npm_packages.py requires DotSlash when staging releases.
- uses: facebook/install-dotslash@1e4e7b3e07eaca387acb98f1d4720e0bee8dbb6a # v2
- uses: facebook/install-dotslash@v2
- name: Stage npm package
id: stage_npm_package
@@ -43,7 +37,7 @@ jobs:
run: |
set -euo pipefail
# Use a rust-release version that includes all native binaries.
CODEX_VERSION=0.115.0
CODEX_VERSION=0.74.0
OUTPUT_DIR="${RUNNER_TEMP}"
python3 ./scripts/stage_npm_packages.py \
--release-version "$CODEX_VERSION" \
@@ -53,7 +47,7 @@ jobs:
echo "pack_output=$PACK_OUTPUT" >> "$GITHUB_OUTPUT"
- name: Upload staged npm package artifact
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
uses: actions/upload-artifact@v7
with:
name: codex-npm-staging
path: ${{ steps.stage_npm_package.outputs.pack_output }}


@@ -18,7 +18,7 @@ jobs:
if: ${{ github.repository_owner == 'openai' }}
runs-on: ubuntu-latest
steps:
- uses: contributor-assistant/github-action@ca4a40a7d1004f18d9960b404b97e5f30a505a08 # v2.6.1
- uses: contributor-assistant/github-action@v2.6.1
# Run on close only if the PR was merged. This will lock the PR to preserve
# the CLA agreement. We don't want to lock PRs that have been closed without
# merging because the contributor may want to respond with additional comments.


@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Close inactive PRs from contributors
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
uses: actions/github-script@v8
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |


@@ -18,7 +18,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
uses: actions/checkout@v6
- name: Annotate locations with typos
uses: codespell-project/codespell-problem-matcher@b80729f885d32f78a716c2f107b4db1025001c42 # v1
- name: Codespell


@@ -19,7 +19,7 @@ jobs:
reason: ${{ steps.normalize-all.outputs.reason }}
has_matches: ${{ steps.normalize-all.outputs.has_matches }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: actions/checkout@v6
- name: Prepare Codex inputs
env:
@@ -61,7 +61,7 @@ jobs:
# .github/prompts/issue-deduplicator.txt file is obsolete and removed.
- id: codex-all
name: Find duplicates (pass 1, all issues)
uses: openai/codex-action@0b91f4a2703c23df3102c3f0967d3c6db34eedef # v1
uses: openai/codex-action@main
with:
openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
allow-users: "*"
@@ -155,7 +155,7 @@ jobs:
reason: ${{ steps.normalize-open.outputs.reason }}
has_matches: ${{ steps.normalize-open.outputs.has_matches }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: actions/checkout@v6
- name: Prepare Codex inputs
env:
@@ -195,7 +195,7 @@ jobs:
- id: codex-open
name: Find duplicates (pass 2, open issues)
uses: openai/codex-action@0b91f4a2703c23df3102c3f0967d3c6db34eedef # v1
uses: openai/codex-action@main
with:
openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
allow-users: "*"
@@ -342,7 +342,7 @@ jobs:
issues: write
steps:
- name: Comment on issue
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
uses: actions/github-script@v8
env:
CODEX_OUTPUT: ${{ needs.select-final.outputs.codex_output }}
with:
@@ -396,7 +396,6 @@ jobs:
env:
GH_TOKEN: ${{ github.token }}
GH_REPO: ${{ github.repository }}
ISSUE_NUMBER: ${{ github.event.issue.number }}
run: |
gh issue edit "$ISSUE_NUMBER" --remove-label codex-deduplicate || true
gh issue edit "${{ github.event.issue.number }}" --remove-label codex-deduplicate || true
echo "Attempted to remove label: codex-deduplicate"


@@ -17,10 +17,10 @@ jobs:
outputs:
codex_output: ${{ steps.codex.outputs.final-message }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: actions/checkout@v6
- id: codex
uses: openai/codex-action@0b91f4a2703c23df3102c3f0967d3c6db34eedef # v1
uses: openai/codex-action@main
with:
openai-api-key: ${{ secrets.CODEX_OPENAI_API_KEY }}
allow-users: "*"


@@ -1,775 +0,0 @@
name: rust-ci-full
on:
push:
branches:
- main
workflow_dispatch:
# CI builds in debug (dev) for faster signal.
jobs:
# --- CI that doesn't need specific targets ---------------------------------
general:
name: Format / etc
runs-on: ubuntu-24.04
defaults:
run:
working-directory: codex-rs
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
components: rustfmt
- name: cargo fmt
run: cargo fmt -- --config imports_granularity=Item --check
cargo_shear:
name: cargo shear
runs-on: ubuntu-24.04
defaults:
run:
working-directory: codex-rs
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-shear
version: 1.5.1
- name: cargo shear
run: cargo shear
argument_comment_lint_package:
name: Argument comment lint package
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
toolchain: nightly-2025-09-18
components: llvm-tools-preview, rustc-dev, rust-src
- name: Cache cargo-dylint tooling
id: cargo_dylint_cache
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/cargo-dylint
~/.cargo/bin/dylint-link
~/.cargo/registry/index
~/.cargo/registry/cache
~/.cargo/git/db
key: argument-comment-lint-${{ runner.os }}-${{ hashFiles('tools/argument-comment-lint/Cargo.lock', 'tools/argument-comment-lint/rust-toolchain', '.github/workflows/rust-ci.yml', '.github/workflows/rust-ci-full.yml') }}
- name: Install cargo-dylint tooling
if: ${{ steps.cargo_dylint_cache.outputs.cache-hit != 'true' }}
run: cargo install --locked cargo-dylint dylint-link
- name: Check Python wrapper syntax
run: python3 -m py_compile tools/argument-comment-lint/wrapper_common.py tools/argument-comment-lint/run.py tools/argument-comment-lint/run-prebuilt-linter.py tools/argument-comment-lint/test_wrapper_common.py
- name: Test Python wrapper helpers
run: python3 -m unittest discover -s tools/argument-comment-lint -p 'test_*.py'
- name: Test argument comment lint package
working-directory: tools/argument-comment-lint
run: cargo test
argument_comment_lint_prebuilt:
name: Argument comment lint - ${{ matrix.name }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
include:
- name: Linux
runner: ubuntu-24.04
- name: macOS
runner: macos-15-xlarge
- name: Windows
runner: windows-x64
runs_on:
group: codex-runners
labels: codex-windows-x64
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: ./.github/actions/setup-bazel-ci
with:
target: ${{ runner.os }}
install-test-prereqs: true
- name: Install Linux sandbox build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
sudo DEBIAN_FRONTEND=noninteractive apt-get update
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends pkg-config libcap-dev
- name: Run argument comment lint on codex-rs via Bazel
if: ${{ runner.os != 'Windows' }}
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
bazel_targets="$(./tools/argument-comment-lint/list-bazel-targets.sh)"
./.github/scripts/run-bazel-ci.sh \
-- \
build \
--config=argument-comment-lint \
--keep_going \
--build_metadata=COMMIT_SHA=${GITHUB_SHA} \
-- \
${bazel_targets}
- name: Run argument comment lint on codex-rs via Bazel
if: ${{ runner.os == 'Windows' }}
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
./.github/scripts/run-argument-comment-lint-bazel.sh \
--config=argument-comment-lint \
--platforms=//:local_windows \
--keep_going \
--build_metadata=COMMIT_SHA=${GITHUB_SHA}
# --- CI to validate on different os/targets --------------------------------
lint_build:
name: Lint/Build — ${{ matrix.runner }} - ${{ matrix.target }}${{ matrix.profile == 'release' && ' (release)' || '' }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
timeout-minutes: 30
defaults:
run:
working-directory: codex-rs
env:
# Speed up repeated builds across CI runs by caching compiled objects, except on
# arm64 macOS runners cross-targeting x86_64 where ring/cc-rs can produce
# mixed-architecture archives under sccache.
USE_SCCACHE: ${{ (startsWith(matrix.runner, 'windows') || (matrix.runner == 'macos-15-xlarge' && matrix.target == 'x86_64-apple-darwin')) && 'false' || 'true' }}
CARGO_INCREMENTAL: "0"
SCCACHE_CACHE_SIZE: 10G
# In rust-ci, representative release-profile checks use thin LTO for faster feedback.
CARGO_PROFILE_RELEASE_LTO: ${{ matrix.profile == 'release' && 'thin' || 'fat' }}
strategy:
fail-fast: false
matrix:
include:
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: dev
- runner: macos-15-xlarge
target: x86_64-apple-darwin
profile: dev
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-arm64
# Also run representative release builds on Mac and Linux because
# there could be release-only build errors we want to catch.
# Hopefully this also pre-populates the build cache to speed up
# releases.
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: release
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
profile: release
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
profile: release
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: release
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: release
runs_on:
group: codex-runners
labels: codex-windows-arm64
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Install Linux build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update -y
packages=(pkg-config libcap-dev)
if [[ "${{ matrix.target }}" == 'x86_64-unknown-linux-musl' || "${{ matrix.target }}" == 'aarch64-unknown-linux-musl' ]]; then
packages+=(libubsan1)
fi
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends "${packages[@]}"
fi
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
targets: ${{ matrix.target }}
components: clippy
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Use hermetic Cargo home (musl)
shell: bash
run: |
set -euo pipefail
cargo_home="${GITHUB_WORKSPACE}/.cargo-home"
mkdir -p "${cargo_home}/bin"
echo "CARGO_HOME=${cargo_home}" >> "$GITHUB_ENV"
echo "${cargo_home}/bin" >> "$GITHUB_PATH"
: > "${cargo_home}/config.toml"
- name: Compute lockfile hash
id: lockhash
working-directory: codex-rs
shell: bash
run: |
set -euo pipefail
echo "hash=$(sha256sum Cargo.lock | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
echo "toolchain_hash=$(sha256sum rust-toolchain.toml | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
# Explicit cache restore: split cargo home vs target, so we can
# avoid caching the large target dir on the gnu-dev job.
- name: Restore cargo home cache
id: cache_cargo_home_restore
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/.cargo-home/bin/
${{ github.workspace }}/.cargo-home/registry/index/
${{ github.workspace }}/.cargo-home/registry/cache/
${{ github.workspace }}/.cargo-home/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
restore-keys: |
cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
# Install and restore sccache cache
- name: Install sccache
if: ${{ env.USE_SCCACHE == 'true' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: sccache
version: 0.7.5
- name: Configure sccache backend
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: |
set -euo pipefail
if [[ -n "${ACTIONS_CACHE_URL:-}" && -n "${ACTIONS_RUNTIME_TOKEN:-}" ]]; then
echo "SCCACHE_GHA_ENABLED=true" >> "$GITHUB_ENV"
echo "Using sccache GitHub backend"
else
echo "SCCACHE_GHA_ENABLED=false" >> "$GITHUB_ENV"
echo "SCCACHE_DIR=${{ github.workspace }}/.sccache" >> "$GITHUB_ENV"
echo "Using sccache local disk + actions/cache fallback"
fi
- name: Enable sccache wrapper
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: echo "RUSTC_WRAPPER=sccache" >> "$GITHUB_ENV"
- name: Restore sccache cache (fallback)
if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
id: cache_sccache_restore
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
restore-keys: |
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Disable sccache wrapper (musl)
shell: bash
run: |
set -euo pipefail
echo "RUSTC_WRAPPER=" >> "$GITHUB_ENV"
echo "RUSTC_WORKSPACE_WRAPPER=" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Prepare APT cache directories (musl)
shell: bash
run: |
set -euo pipefail
sudo mkdir -p /var/cache/apt/archives /var/lib/apt/lists
sudo chown -R "$USER:$USER" /var/cache/apt /var/lib/apt/lists
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Restore APT cache (musl)
id: cache_apt_restore
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
/var/cache/apt
key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
uses: mlugg/setup-zig@d1434d08867e3ee9daa34448df10607b98908d29 # v2
with:
version: 0.14.0
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
env:
DEBIAN_FRONTEND: noninteractive
TARGET: ${{ matrix.target }}
APT_UPDATE_ARGS: -o Acquire::Retries=3
APT_INSTALL_ARGS: --no-install-recommends
shell: bash
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Configure rustc UBSan wrapper (musl host)
shell: bash
run: |
set -euo pipefail
ubsan=""
if command -v ldconfig >/dev/null 2>&1; then
ubsan="$(ldconfig -p | grep -m1 'libubsan\.so\.1' | sed -E 's/.*=> (.*)$/\1/')"
fi
wrapper_root="${RUNNER_TEMP:-/tmp}"
wrapper="${wrapper_root}/rustc-ubsan-wrapper"
cat > "${wrapper}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
if [[ -n "${ubsan}" ]]; then
export LD_PRELOAD="${ubsan}\${LD_PRELOAD:+:\${LD_PRELOAD}}"
fi
exec "\$1" "\${@:2}"
EOF
chmod +x "${wrapper}"
echo "RUSTC_WRAPPER=${wrapper}" >> "$GITHUB_ENV"
echo "RUSTC_WORKSPACE_WRAPPER=" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Clear sanitizer flags (musl)
shell: bash
run: |
set -euo pipefail
# Clear global Rust flags so host/proc-macro builds don't pull in UBSan.
echo "RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_ENCODED_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "RUSTDOCFLAGS=" >> "$GITHUB_ENV"
# Override any runner-level Cargo config rustflags as well.
echo "CARGO_BUILD_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
sanitize_flags() {
local input="$1"
input="${input//-fsanitize=undefined/}"
input="${input//-fno-sanitize-recover=undefined/}"
input="${input//-fno-sanitize-trap=undefined/}"
echo "$input"
}
cflags="$(sanitize_flags "${CFLAGS-}")"
cxxflags="$(sanitize_flags "${CXXFLAGS-}")"
echo "CFLAGS=${cflags}" >> "$GITHUB_ENV"
echo "CXXFLAGS=${cxxflags}" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl' }}
name: Configure musl rusty_v8 artifact overrides
env:
TARGET: ${{ matrix.target }}
shell: bash
run: |
set -euo pipefail
version="$(python3 "${GITHUB_WORKSPACE}/.github/scripts/rusty_v8_bazel.py" resolved-v8-crate-version)"
release_tag="rusty-v8-v${version}"
base_url="https://github.com/openai/codex/releases/download/${release_tag}"
archive="https://github.com/openai/codex/releases/download/rusty-v8-v${version}/librusty_v8_release_${TARGET}.a.gz"
binding_dir="${RUNNER_TEMP}/rusty_v8"
binding_path="${binding_dir}/src_binding_release_${TARGET}.rs"
mkdir -p "${binding_dir}"
curl -fsSL "${base_url}/src_binding_release_${TARGET}.rs" -o "${binding_path}"
echo "RUSTY_V8_ARCHIVE=${archive}" >> "$GITHUB_ENV"
echo "RUSTY_V8_SRC_BINDING_PATH=${binding_path}" >> "$GITHUB_ENV"
- name: Install cargo-chef
if: ${{ matrix.profile == 'release' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-chef
version: 0.1.71
- name: Pre-warm dependency cache (cargo-chef)
if: ${{ matrix.profile == 'release' }}
shell: bash
run: |
set -euo pipefail
RECIPE="${RUNNER_TEMP}/chef-recipe.json"
cargo chef prepare --recipe-path "$RECIPE"
cargo chef cook --recipe-path "$RECIPE" --target ${{ matrix.target }} --release --all-features
- name: cargo clippy
run: cargo clippy --target ${{ matrix.target }} --all-features --tests --profile ${{ matrix.profile }} --timings -- -D warnings
- name: Upload Cargo timings (clippy)
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: cargo-timings-rust-ci-clippy-${{ matrix.target }}-${{ matrix.profile }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
if-no-files-found: warn
# Save caches explicitly; make non-fatal so cache packaging
# never fails the overall job. Only save when key wasn't hit.
- name: Save cargo home cache
if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/.cargo-home/bin/
${{ github.workspace }}/.cargo-home/registry/index/
${{ github.workspace }}/.cargo-home/registry/cache/
${{ github.workspace }}/.cargo-home/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
- name: Save sccache cache (fallback)
if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
- name: sccache stats
if: always() && env.USE_SCCACHE == 'true'
continue-on-error: true
run: sccache --show-stats || true
- name: sccache summary
if: always() && env.USE_SCCACHE == 'true'
shell: bash
run: |
{
echo "### sccache stats — ${{ matrix.target }} (${{ matrix.profile }})";
echo;
echo '```';
sccache --show-stats || true;
echo '```';
} >> "$GITHUB_STEP_SUMMARY"
- name: Save APT cache (musl)
if: always() && !cancelled() && (matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl') && steps.cache_apt_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
/var/cache/apt
key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
tests:
name: Tests — ${{ matrix.runner }} - ${{ matrix.target }}${{ matrix.remote_env == 'true' && ' (remote)' || '' }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
# Perhaps we can bring this back down to 30m once we finish the cutover
# from tui_app_server/ to tui/. Incidentally, windows-arm64 was the main
# offender for exceeding the timeout.
timeout-minutes: 45
defaults:
run:
working-directory: codex-rs
env:
# Speed up repeated builds across CI runs by caching compiled objects, except on
# arm64 macOS runners cross-targeting x86_64 where ring/cc-rs can produce
# mixed-architecture archives under sccache.
USE_SCCACHE: ${{ (startsWith(matrix.runner, 'windows') || (matrix.runner == 'macos-15-xlarge' && matrix.target == 'x86_64-apple-darwin')) && 'false' || 'true' }}
CARGO_INCREMENTAL: "0"
SCCACHE_CACHE_SIZE: 10G
strategy:
fail-fast: false
matrix:
include:
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: dev
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
profile: dev
remote_env: "true"
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-arm64
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Node.js for js_repl tests
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
with:
node-version-file: codex-rs/node-version.txt
- name: Install Linux build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update -y
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends pkg-config libcap-dev
fi
# Some integration tests rely on DotSlash being installed.
# See https://github.com/openai/codex/pull/7617.
- name: Install DotSlash
uses: facebook/install-dotslash@1e4e7b3e07eaca387acb98f1d4720e0bee8dbb6a # v2
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
targets: ${{ matrix.target }}
- name: Compute lockfile hash
id: lockhash
working-directory: codex-rs
shell: bash
run: |
set -euo pipefail
echo "hash=$(sha256sum Cargo.lock | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
echo "toolchain_hash=$(sha256sum rust-toolchain.toml | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
- name: Restore cargo home cache
id: cache_cargo_home_restore
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
restore-keys: |
cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- name: Install sccache
if: ${{ env.USE_SCCACHE == 'true' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: sccache
version: 0.7.5
- name: Configure sccache backend
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: |
set -euo pipefail
if [[ -n "${ACTIONS_CACHE_URL:-}" && -n "${ACTIONS_RUNTIME_TOKEN:-}" ]]; then
echo "SCCACHE_GHA_ENABLED=true" >> "$GITHUB_ENV"
echo "Using sccache GitHub backend"
else
echo "SCCACHE_GHA_ENABLED=false" >> "$GITHUB_ENV"
echo "SCCACHE_DIR=${{ github.workspace }}/.sccache" >> "$GITHUB_ENV"
echo "Using sccache local disk + actions/cache fallback"
fi
- name: Enable sccache wrapper
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: echo "RUSTC_WRAPPER=sccache" >> "$GITHUB_ENV"
- name: Restore sccache cache (fallback)
if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
id: cache_sccache_restore
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
restore-keys: |
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: nextest
version: 0.9.103
- name: Enable unprivileged user namespaces (Linux)
if: runner.os == 'Linux'
run: |
# Required for bubblewrap to work on Linux CI runners.
sudo sysctl -w kernel.unprivileged_userns_clone=1
# Ubuntu 24.04+ can additionally gate unprivileged user namespaces
# behind AppArmor.
if sudo sysctl -a 2>/dev/null | grep -q '^kernel.apparmor_restrict_unprivileged_userns'; then
sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0
fi
- name: Set up remote test env (Docker)
if: ${{ runner.os == 'Linux' && matrix.remote_env == 'true' }}
shell: bash
run: |
set -euo pipefail
export CODEX_TEST_REMOTE_ENV_CONTAINER_NAME=codex-remote-test-env
source "${GITHUB_WORKSPACE}/scripts/test-remote-env.sh"
echo "CODEX_TEST_REMOTE_ENV=${CODEX_TEST_REMOTE_ENV}" >> "$GITHUB_ENV"
- name: tests
id: test
run: cargo nextest run --all-features --no-fail-fast --target ${{ matrix.target }} --cargo-profile ci-test --timings
env:
RUST_BACKTRACE: 1
NEXTEST_STATUS_LEVEL: leak
- name: Upload Cargo timings (nextest)
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: cargo-timings-rust-ci-nextest-${{ matrix.target }}-${{ matrix.profile }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
if-no-files-found: warn
- name: Save cargo home cache
if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
- name: Save sccache cache (fallback)
if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
- name: sccache stats
if: always() && env.USE_SCCACHE == 'true'
continue-on-error: true
run: sccache --show-stats || true
- name: sccache summary
if: always() && env.USE_SCCACHE == 'true'
shell: bash
run: |
{
echo "### sccache stats — ${{ matrix.target }} (tests)";
echo;
echo '```';
sccache --show-stats || true;
echo '```';
} >> "$GITHUB_STEP_SUMMARY"
- name: Tear down remote test env
if: ${{ always() && runner.os == 'Linux' && matrix.remote_env == 'true' }}
shell: bash
run: |
set +e
if [[ "${{ steps.test.outcome }}" != "success" ]]; then
docker logs codex-remote-test-env || true
fi
docker rm -f codex-remote-test-env >/dev/null 2>&1 || true
- name: verify tests passed
if: steps.test.outcome == 'failure'
run: |
echo "Tests failed. See logs for details."
exit 1
# --- Gatherer job for the full post-merge workflow --------------------------
results:
name: Full CI results
needs:
[
general,
cargo_shear,
argument_comment_lint_package,
argument_comment_lint_prebuilt,
lint_build,
tests,
]
if: always()
runs-on: ubuntu-24.04
steps:
- name: Summarize
shell: bash
run: |
echo "argpkg : ${{ needs.argument_comment_lint_package.result }}"
echo "arglint: ${{ needs.argument_comment_lint_prebuilt.result }}"
echo "general: ${{ needs.general.result }}"
echo "shear : ${{ needs.cargo_shear.result }}"
echo "lint : ${{ needs.lint_build.result }}"
echo "tests : ${{ needs.tests.result }}"
[[ '${{ needs.argument_comment_lint_package.result }}' == 'success' ]] || { echo 'argument_comment_lint_package failed'; exit 1; }
[[ '${{ needs.argument_comment_lint_prebuilt.result }}' == 'success' ]] || { echo 'argument_comment_lint_prebuilt failed'; exit 1; }
[[ '${{ needs.general.result }}' == 'success' ]] || { echo 'general failed'; exit 1; }
[[ '${{ needs.cargo_shear.result }}' == 'success' ]] || { echo 'cargo_shear failed'; exit 1; }
[[ '${{ needs.lint_build.result }}' == 'success' ]] || { echo 'lint_build failed'; exit 1; }
[[ '${{ needs.tests.result }}' == 'success' ]] || { echo 'tests failed'; exit 1; }
- name: sccache summary note
if: always()
run: |
echo "Per-job sccache stats are attached to each matrix job's Step Summary."


@@ -1,20 +1,23 @@
name: rust-ci
on:
pull_request: {}
push:
branches:
- main
workflow_dispatch:
# CI builds in debug (dev) for faster signal.
jobs:
# --- Detect what changed so the fast PR workflow only runs relevant jobs ----
# --- Detect what changed to decide which tests to run (always runs) -------------------------------------
changed:
name: Detect changed areas
runs-on: ubuntu-24.04
outputs:
argument_comment_lint: ${{ steps.detect.outputs.argument_comment_lint }}
argument_comment_lint_package: ${{ steps.detect.outputs.argument_comment_lint_package }}
codex: ${{ steps.detect.outputs.codex }}
workflows: ${{ steps.detect.outputs.workflows }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Detect changed paths (no external action)
@@ -28,40 +31,35 @@ jobs:
HEAD_SHA='${{ github.event.pull_request.head.sha }}'
echo "Base SHA: $BASE_SHA"
echo "Head SHA: $HEAD_SHA"
# List files changed between base and PR head
mapfile -t files < <(git diff --name-only --no-renames "$BASE_SHA" "$HEAD_SHA")
else
# On manual runs, default to the full fast-PR bundle.
files=("codex-rs/force" "tools/argument-comment-lint/force" ".github/force")
# On push / manual runs, default to running everything
files=("codex-rs/force" ".github/force")
fi
codex=false
argument_comment_lint=false
argument_comment_lint_package=false
workflows=false
for f in "${files[@]}"; do
[[ $f == codex-rs/* ]] && codex=true
[[ $f == codex-rs/* || $f == tools/argument-comment-lint/* || $f == justfile ]] && argument_comment_lint=true
[[ $f == tools/argument-comment-lint/* || $f == .github/workflows/rust-ci.yml || $f == .github/workflows/rust-ci-full.yml ]] && argument_comment_lint_package=true
[[ $f == .github/* ]] && workflows=true
done
echo "argument_comment_lint=$argument_comment_lint" >> "$GITHUB_OUTPUT"
echo "argument_comment_lint_package=$argument_comment_lint_package" >> "$GITHUB_OUTPUT"
echo "codex=$codex" >> "$GITHUB_OUTPUT"
echo "workflows=$workflows" >> "$GITHUB_OUTPUT"
# --- Fast Cargo-native PR checks -------------------------------------------
# --- CI that doesn't need specific targets ---------------------------------
general:
name: Format / etc
runs-on: ubuntu-24.04
needs: changed
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' }}
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
defaults:
run:
working-directory: codex-rs
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.93.0
with:
components: rustfmt
- name: cargo fmt
@@ -71,13 +69,13 @@ jobs:
name: cargo shear
runs-on: ubuntu-24.04
needs: changed
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' }}
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
defaults:
run:
working-directory: codex-rs
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.93.0
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-shear
@@ -85,145 +83,606 @@ jobs:
- name: cargo shear
run: cargo shear
argument_comment_lint_package:
name: Argument comment lint package
runs-on: ubuntu-24.04
needs: changed
if: ${{ needs.changed.outputs.argument_comment_lint_package == 'true' }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- name: Install nightly argument-comment-lint toolchain
shell: bash
run: |
rustup toolchain install nightly-2025-09-18 \
--profile minimal \
--component llvm-tools-preview \
--component rustc-dev \
--component rust-src \
--no-self-update
rustup default nightly-2025-09-18
- name: Cache cargo-dylint tooling
id: cargo_dylint_cache
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cargo/bin/cargo-dylint
~/.cargo/bin/dylint-link
~/.cargo/registry/index
~/.cargo/registry/cache
~/.cargo/git/db
key: argument-comment-lint-${{ runner.os }}-${{ hashFiles('tools/argument-comment-lint/Cargo.lock', 'tools/argument-comment-lint/rust-toolchain', '.github/workflows/rust-ci.yml', '.github/workflows/rust-ci-full.yml') }}
- name: Install cargo-dylint tooling
if: ${{ steps.cargo_dylint_cache.outputs.cache-hit != 'true' }}
run: cargo install --locked cargo-dylint dylint-link
- name: Check Python wrapper syntax
run: python3 -m py_compile tools/argument-comment-lint/wrapper_common.py tools/argument-comment-lint/run.py tools/argument-comment-lint/run-prebuilt-linter.py tools/argument-comment-lint/test_wrapper_common.py
- name: Test Python wrapper helpers
run: python3 -m unittest discover -s tools/argument-comment-lint -p 'test_*.py'
- name: Test argument comment lint package
working-directory: tools/argument-comment-lint
run: cargo test
argument_comment_lint_prebuilt:
name: Argument comment lint - ${{ matrix.name }}
# --- CI to validate on different os/targets --------------------------------
lint_build:
name: Lint/Build — ${{ matrix.runner }} - ${{ matrix.target }}${{ matrix.profile == 'release' && ' (release)' || '' }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
timeout-minutes: ${{ matrix.timeout_minutes }}
timeout-minutes: 30
needs: changed
if: ${{ needs.changed.outputs.argument_comment_lint == 'true' || needs.changed.outputs.workflows == 'true' }}
# Keep job-level if to avoid spinning up runners when not needed
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
defaults:
run:
working-directory: codex-rs
env:
# Speed up repeated builds across CI runs by caching compiled objects (non-Windows).
USE_SCCACHE: ${{ startsWith(matrix.runner, 'windows') && 'false' || 'true' }}
CARGO_INCREMENTAL: "0"
SCCACHE_CACHE_SIZE: 10G
# In rust-ci, representative release-profile checks use thin LTO for faster feedback.
CARGO_PROFILE_RELEASE_LTO: ${{ matrix.profile == 'release' && 'thin' || 'fat' }}
strategy:
fail-fast: false
matrix:
include:
- name: Linux
runner: ubuntu-24.04
timeout_minutes: 30
- name: macOS
runner: macos-15-xlarge
timeout_minutes: 30
- name: Windows
runner: windows-x64
timeout_minutes: 30
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: dev
- runner: macos-15-xlarge
target: x86_64-apple-darwin
profile: dev
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-arm64
# Also run representative release builds on macOS, Linux, and Windows
# because there could be release-only build errors we want to catch.
# Hopefully this also pre-populates the build cache to speed up
# releases.
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: release
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
profile: release
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
profile: release
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: release
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: release
runs_on:
group: codex-runners
labels: codex-windows-arm64
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: ./.github/actions/setup-bazel-ci
with:
target: ${{ runner.os }}
install-test-prereqs: true
- name: Install Linux sandbox build dependencies
- uses: actions/checkout@v6
- name: Install Linux build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
sudo DEBIAN_FRONTEND=noninteractive apt-get update
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends pkg-config libcap-dev
- name: Run argument comment lint on codex-rs via Bazel
if: ${{ runner.os != 'Windows' }}
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update -y
packages=(pkg-config libcap-dev)
if [[ "${{ matrix.target }}" == 'x86_64-unknown-linux-musl' || "${{ matrix.target }}" == 'aarch64-unknown-linux-musl' ]]; then
packages+=(libubsan1)
fi
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends "${packages[@]}"
fi
- uses: dtolnay/rust-toolchain@1.93.0
with:
targets: ${{ matrix.target }}
components: clippy
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Use hermetic Cargo home (musl)
shell: bash
run: |
bazel_targets="$(./tools/argument-comment-lint/list-bazel-targets.sh)"
./.github/scripts/run-bazel-ci.sh \
-- \
build \
--config=argument-comment-lint \
--keep_going \
--build_metadata=COMMIT_SHA=${GITHUB_SHA} \
-- \
${bazel_targets}
- name: Run argument comment lint on codex-rs via Bazel
if: ${{ runner.os == 'Windows' }}
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
set -euo pipefail
cargo_home="${GITHUB_WORKSPACE}/.cargo-home"
mkdir -p "${cargo_home}/bin"
echo "CARGO_HOME=${cargo_home}" >> "$GITHUB_ENV"
echo "${cargo_home}/bin" >> "$GITHUB_PATH"
: > "${cargo_home}/config.toml"
- name: Compute lockfile hash
id: lockhash
working-directory: codex-rs
shell: bash
run: |
./.github/scripts/run-argument-comment-lint-bazel.sh \
--config=argument-comment-lint \
--platforms=//:local_windows \
--keep_going \
--build_metadata=COMMIT_SHA=${GITHUB_SHA}
set -euo pipefail
echo "hash=$(sha256sum Cargo.lock | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
echo "toolchain_hash=$(sha256sum rust-toolchain.toml | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
# Explicit cache restore: split cargo home vs target, so we can
# avoid caching the large target dir on the gnu-dev job.
- name: Restore cargo home cache
id: cache_cargo_home_restore
uses: actions/cache/restore@v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/.cargo-home/bin/
${{ github.workspace }}/.cargo-home/registry/index/
${{ github.workspace }}/.cargo-home/registry/cache/
${{ github.workspace }}/.cargo-home/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
restore-keys: |
cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
# Install and restore sccache cache
- name: Install sccache
if: ${{ env.USE_SCCACHE == 'true' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: sccache
version: 0.7.5
- name: Configure sccache backend
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: |
set -euo pipefail
if [[ -n "${ACTIONS_CACHE_URL:-}" && -n "${ACTIONS_RUNTIME_TOKEN:-}" ]]; then
echo "SCCACHE_GHA_ENABLED=true" >> "$GITHUB_ENV"
echo "Using sccache GitHub backend"
else
echo "SCCACHE_GHA_ENABLED=false" >> "$GITHUB_ENV"
echo "SCCACHE_DIR=${{ github.workspace }}/.sccache" >> "$GITHUB_ENV"
echo "Using sccache local disk + actions/cache fallback"
fi
- name: Enable sccache wrapper
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: echo "RUSTC_WRAPPER=sccache" >> "$GITHUB_ENV"
- name: Restore sccache cache (fallback)
if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
id: cache_sccache_restore
uses: actions/cache/restore@v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
restore-keys: |
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Disable sccache wrapper (musl)
shell: bash
run: |
set -euo pipefail
echo "RUSTC_WRAPPER=" >> "$GITHUB_ENV"
echo "RUSTC_WORKSPACE_WRAPPER=" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Prepare APT cache directories (musl)
shell: bash
run: |
set -euo pipefail
sudo mkdir -p /var/cache/apt/archives /var/lib/apt/lists
sudo chown -R "$USER:$USER" /var/cache/apt /var/lib/apt/lists
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Restore APT cache (musl)
id: cache_apt_restore
uses: actions/cache/restore@v5
with:
path: |
/var/cache/apt
key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
uses: mlugg/setup-zig@v2
with:
version: 0.14.0
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install musl build tools
env:
DEBIAN_FRONTEND: noninteractive
TARGET: ${{ matrix.target }}
APT_UPDATE_ARGS: -o Acquire::Retries=3
APT_INSTALL_ARGS: --no-install-recommends
shell: bash
run: bash "${GITHUB_WORKSPACE}/.github/scripts/install-musl-build-tools.sh"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Configure rustc UBSan wrapper (musl host)
shell: bash
run: |
set -euo pipefail
ubsan=""
if command -v ldconfig >/dev/null 2>&1; then
ubsan="$(ldconfig -p | grep -m1 'libubsan\.so\.1' | sed -E 's/.*=> (.*)$/\1/')"
fi
wrapper_root="${RUNNER_TEMP:-/tmp}"
wrapper="${wrapper_root}/rustc-ubsan-wrapper"
cat > "${wrapper}" <<EOF
#!/usr/bin/env bash
set -euo pipefail
if [[ -n "${ubsan}" ]]; then
export LD_PRELOAD="${ubsan}\${LD_PRELOAD:+:\${LD_PRELOAD}}"
fi
exec "\$1" "\${@:2}"
EOF
chmod +x "${wrapper}"
echo "RUSTC_WRAPPER=${wrapper}" >> "$GITHUB_ENV"
echo "RUSTC_WORKSPACE_WRAPPER=" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Clear sanitizer flags (musl)
shell: bash
run: |
set -euo pipefail
# Clear global Rust flags so host/proc-macro builds don't pull in UBSan.
echo "RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_ENCODED_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "RUSTDOCFLAGS=" >> "$GITHUB_ENV"
# Override any runner-level Cargo config rustflags as well.
echo "CARGO_BUILD_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
echo "CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_RUSTFLAGS=" >> "$GITHUB_ENV"
sanitize_flags() {
local input="$1"
input="${input//-fsanitize=undefined/}"
input="${input//-fno-sanitize-recover=undefined/}"
input="${input//-fno-sanitize-trap=undefined/}"
echo "$input"
}
cflags="$(sanitize_flags "${CFLAGS-}")"
cxxflags="$(sanitize_flags "${CXXFLAGS-}")"
echo "CFLAGS=${cflags}" >> "$GITHUB_ENV"
echo "CXXFLAGS=${cxxflags}" >> "$GITHUB_ENV"
- name: Install cargo-chef
if: ${{ matrix.profile == 'release' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: cargo-chef
version: 0.1.71
- name: Pre-warm dependency cache (cargo-chef)
if: ${{ matrix.profile == 'release' }}
shell: bash
run: |
set -euo pipefail
RECIPE="${RUNNER_TEMP}/chef-recipe.json"
cargo chef prepare --recipe-path "$RECIPE"
cargo chef cook --recipe-path "$RECIPE" --target ${{ matrix.target }} --release --all-features
- name: cargo clippy
run: cargo clippy --target ${{ matrix.target }} --all-features --tests --profile ${{ matrix.profile }} --timings -- -D warnings
- name: Upload Cargo timings (clippy)
if: always()
uses: actions/upload-artifact@v7
with:
name: cargo-timings-rust-ci-clippy-${{ matrix.target }}-${{ matrix.profile }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
if-no-files-found: warn
# Save caches explicitly; make non-fatal so cache packaging
# never fails the overall job. Only save when key wasn't hit.
- name: Save cargo home cache
if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
${{ github.workspace }}/.cargo-home/bin/
${{ github.workspace }}/.cargo-home/registry/index/
${{ github.workspace }}/.cargo-home/registry/cache/
${{ github.workspace }}/.cargo-home/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
- name: Save sccache cache (fallback)
if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
continue-on-error: true
uses: actions/cache/save@v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
- name: sccache stats
if: always() && env.USE_SCCACHE == 'true'
continue-on-error: true
run: sccache --show-stats || true
- name: sccache summary
if: always() && env.USE_SCCACHE == 'true'
shell: bash
run: |
{
echo "### sccache stats — ${{ matrix.target }} (${{ matrix.profile }})";
echo;
echo '```';
sccache --show-stats || true;
echo '```';
} >> "$GITHUB_STEP_SUMMARY"
- name: Save APT cache (musl)
if: always() && !cancelled() && (matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl') && steps.cache_apt_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@v5
with:
path: |
/var/cache/apt
key: apt-${{ matrix.runner }}-${{ matrix.target }}-v1
tests:
name: Tests — ${{ matrix.runner }} - ${{ matrix.target }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
timeout-minutes: 30
needs: changed
if: ${{ needs.changed.outputs.codex == 'true' || needs.changed.outputs.workflows == 'true' || github.event_name == 'push' }}
defaults:
run:
working-directory: codex-rs
env:
# Speed up repeated builds across CI runs by caching compiled objects (non-Windows).
USE_SCCACHE: ${{ startsWith(matrix.runner, 'windows') && 'false' || 'true' }}
CARGO_INCREMENTAL: "0"
SCCACHE_CACHE_SIZE: 10G
strategy:
fail-fast: false
matrix:
include:
- runner: macos-15-xlarge
target: aarch64-apple-darwin
profile: dev
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-x64
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
profile: dev
runs_on:
group: codex-runners
labels: codex-linux-arm64
- runner: windows-x64
target: x86_64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-x64
- runner: windows-arm64
target: aarch64-pc-windows-msvc
profile: dev
runs_on:
group: codex-runners
labels: codex-windows-arm64
steps:
- uses: actions/checkout@v6
- name: Set up Node.js for js_repl tests
uses: actions/setup-node@v6
with:
node-version-file: codex-rs/node-version.txt
- name: Install Linux build dependencies
if: ${{ runner.os == 'Linux' }}
shell: bash
run: |
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
sudo apt-get update -y
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends pkg-config libcap-dev
fi
# Some integration tests rely on DotSlash being installed.
# See https://github.com/openai/codex/pull/7617.
- name: Install DotSlash
uses: facebook/install-dotslash@v2
- uses: dtolnay/rust-toolchain@1.93.0
with:
targets: ${{ matrix.target }}
- name: Compute lockfile hash
id: lockhash
working-directory: codex-rs
shell: bash
run: |
set -euo pipefail
echo "hash=$(sha256sum Cargo.lock | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
echo "toolchain_hash=$(sha256sum rust-toolchain.toml | cut -d' ' -f1)" >> "$GITHUB_OUTPUT"
- name: Restore cargo home cache
id: cache_cargo_home_restore
uses: actions/cache/restore@v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
restore-keys: |
cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- name: Install sccache
if: ${{ env.USE_SCCACHE == 'true' }}
uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: sccache
version: 0.7.5
- name: Configure sccache backend
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: |
set -euo pipefail
if [[ -n "${ACTIONS_CACHE_URL:-}" && -n "${ACTIONS_RUNTIME_TOKEN:-}" ]]; then
echo "SCCACHE_GHA_ENABLED=true" >> "$GITHUB_ENV"
echo "Using sccache GitHub backend"
else
echo "SCCACHE_GHA_ENABLED=false" >> "$GITHUB_ENV"
echo "SCCACHE_DIR=${{ github.workspace }}/.sccache" >> "$GITHUB_ENV"
echo "Using sccache local disk + actions/cache fallback"
fi
- name: Enable sccache wrapper
if: ${{ env.USE_SCCACHE == 'true' }}
shell: bash
run: echo "RUSTC_WRAPPER=sccache" >> "$GITHUB_ENV"
- name: Restore sccache cache (fallback)
if: ${{ env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true' }}
id: cache_sccache_restore
uses: actions/cache/restore@v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
restore-keys: |
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-
sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-
- uses: taiki-e/install-action@44c6d64aa62cd779e873306675c7a58e86d6d532 # v2
with:
tool: nextest
version: 0.9.103
- name: Enable unprivileged user namespaces (Linux)
if: runner.os == 'Linux'
run: |
# Required for bubblewrap to work on Linux CI runners.
sudo sysctl -w kernel.unprivileged_userns_clone=1
# Ubuntu 24.04+ can additionally gate unprivileged user namespaces
# behind AppArmor.
if sudo sysctl -a 2>/dev/null | grep -q '^kernel.apparmor_restrict_unprivileged_userns'; then
sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0
fi
- name: tests
id: test
run: cargo nextest run --all-features --no-fail-fast --target ${{ matrix.target }} --cargo-profile ci-test --timings
env:
RUST_BACKTRACE: 1
NEXTEST_STATUS_LEVEL: leak
- name: Upload Cargo timings (nextest)
if: always()
uses: actions/upload-artifact@v7
with:
name: cargo-timings-rust-ci-nextest-${{ matrix.target }}-${{ matrix.profile }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
if-no-files-found: warn
- name: Save cargo home cache
if: always() && !cancelled() && steps.cache_cargo_home_restore.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@v5
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
key: cargo-home-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ steps.lockhash.outputs.toolchain_hash }}
- name: Save sccache cache (fallback)
if: always() && !cancelled() && env.USE_SCCACHE == 'true' && env.SCCACHE_GHA_ENABLED != 'true'
continue-on-error: true
uses: actions/cache/save@v5
with:
path: ${{ github.workspace }}/.sccache/
key: sccache-${{ matrix.runner }}-${{ matrix.target }}-${{ matrix.profile }}-${{ steps.lockhash.outputs.hash }}-${{ github.run_id }}
- name: sccache stats
if: always() && env.USE_SCCACHE == 'true'
continue-on-error: true
run: sccache --show-stats || true
- name: sccache summary
if: always() && env.USE_SCCACHE == 'true'
shell: bash
run: |
{
echo "### sccache stats — ${{ matrix.target }} (tests)";
echo;
echo '```';
sccache --show-stats || true;
echo '```';
} >> "$GITHUB_STEP_SUMMARY"
- name: verify tests passed
if: steps.test.outcome == 'failure'
run: |
echo "Tests failed. See logs for details."
exit 1
# --- Gatherer job that you mark as the ONLY required status -----------------
results:
name: CI results (required)
needs:
[
changed,
general,
cargo_shear,
argument_comment_lint_package,
argument_comment_lint_prebuilt,
]
needs: [changed, general, cargo_shear, lint_build, tests]
if: always()
runs-on: ubuntu-24.04
steps:
- name: Summarize
shell: bash
run: |
echo "argpkg : ${{ needs.argument_comment_lint_package.result }}"
echo "arglint: ${{ needs.argument_comment_lint_prebuilt.result }}"
echo "general: ${{ needs.general.result }}"
echo "shear : ${{ needs.cargo_shear.result }}"
echo "lint : ${{ needs.lint_build.result }}"
echo "tests : ${{ needs.tests.result }}"
# If nothing relevant changed (PR touching only root README, etc.),
# declare success regardless of other jobs.
if [[ '${{ needs.changed.outputs.argument_comment_lint }}' != 'true' && '${{ needs.changed.outputs.codex }}' != 'true' && '${{ needs.changed.outputs.workflows }}' != 'true' ]]; then
if [[ '${{ needs.changed.outputs.codex }}' != 'true' && '${{ needs.changed.outputs.workflows }}' != 'true' && '${{ github.event_name }}' != 'push' ]]; then
echo 'No relevant changes -> CI not required.'
exit 0
fi
if [[ '${{ needs.changed.outputs.argument_comment_lint_package }}' == 'true' ]]; then
[[ '${{ needs.argument_comment_lint_package.result }}' == 'success' ]] || { echo 'argument_comment_lint_package failed'; exit 1; }
fi
# Otherwise require the jobs to have succeeded
[[ '${{ needs.general.result }}' == 'success' ]] || { echo 'general failed'; exit 1; }
[[ '${{ needs.cargo_shear.result }}' == 'success' ]] || { echo 'cargo_shear failed'; exit 1; }
[[ '${{ needs.lint_build.result }}' == 'success' ]] || { echo 'lint_build failed'; exit 1; }
[[ '${{ needs.tests.result }}' == 'success' ]] || { echo 'tests failed'; exit 1; }
if [[ '${{ needs.changed.outputs.argument_comment_lint }}' == 'true' || '${{ needs.changed.outputs.workflows }}' == 'true' ]]; then
[[ '${{ needs.argument_comment_lint_prebuilt.result }}' == 'success' ]] || { echo 'argument_comment_lint_prebuilt failed'; exit 1; }
fi
if [[ '${{ needs.changed.outputs.codex }}' == 'true' || '${{ needs.changed.outputs.workflows }}' == 'true' ]]; then
[[ '${{ needs.general.result }}' == 'success' ]] || { echo 'general failed'; exit 1; }
[[ '${{ needs.cargo_shear.result }}' == 'success' ]] || { echo 'cargo_shear failed'; exit 1; }
fi
- name: sccache summary note
if: always()
run: |
echo "Per-job sccache stats are attached to each matrix job's Step Summary."


@@ -1,103 +0,0 @@
name: rust-release-argument-comment-lint
on:
workflow_call:
inputs:
publish:
required: true
type: boolean
jobs:
skip:
if: ${{ !inputs.publish }}
runs-on: ubuntu-latest
steps:
- run: echo "Skipping argument-comment-lint release assets for prerelease tag"
build:
if: ${{ inputs.publish }}
name: Build - ${{ matrix.runner }} - ${{ matrix.target }}
runs-on: ${{ matrix.runs_on || matrix.runner }}
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
include:
- runner: macos-15-xlarge
target: aarch64-apple-darwin
archive_name: argument-comment-lint-aarch64-apple-darwin.tar.gz
lib_name: libargument_comment_lint@nightly-2025-09-18-aarch64-apple-darwin.dylib
runner_binary: argument-comment-lint
cargo_dylint_binary: cargo-dylint
- runner: ubuntu-24.04
target: x86_64-unknown-linux-gnu
archive_name: argument-comment-lint-x86_64-unknown-linux-gnu.tar.gz
lib_name: libargument_comment_lint@nightly-2025-09-18-x86_64-unknown-linux-gnu.so
runner_binary: argument-comment-lint
cargo_dylint_binary: cargo-dylint
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-gnu
archive_name: argument-comment-lint-aarch64-unknown-linux-gnu.tar.gz
lib_name: libargument_comment_lint@nightly-2025-09-18-aarch64-unknown-linux-gnu.so
runner_binary: argument-comment-lint
cargo_dylint_binary: cargo-dylint
- runner: windows-x64
target: x86_64-pc-windows-msvc
archive_name: argument-comment-lint-x86_64-pc-windows-msvc.zip
lib_name: argument_comment_lint@nightly-2025-09-18-x86_64-pc-windows-msvc.dll
runner_binary: argument-comment-lint.exe
cargo_dylint_binary: cargo-dylint.exe
runs_on:
group: codex-runners
labels: codex-windows-x64
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
with:
toolchain: nightly-2025-09-18
targets: ${{ matrix.target }}
components: llvm-tools-preview, rustc-dev, rust-src
- name: Install tooling
shell: bash
run: |
install_root="${RUNNER_TEMP}/argument-comment-lint-tools"
cargo install --locked cargo-dylint --root "$install_root"
cargo install --locked dylint-link
echo "INSTALL_ROOT=$install_root" >> "$GITHUB_ENV"
- name: Cargo build
working-directory: tools/argument-comment-lint
shell: bash
run: cargo build --release --target ${{ matrix.target }}
- name: Stage artifact
shell: bash
run: |
dest="dist/argument-comment-lint/${{ matrix.target }}"
mkdir -p "$dest"
package_root="${RUNNER_TEMP}/argument-comment-lint"
rm -rf "$package_root"
mkdir -p "$package_root/bin" "$package_root/lib"
cp "tools/argument-comment-lint/target/${{ matrix.target }}/release/${{ matrix.runner_binary }}" \
"$package_root/bin/${{ matrix.runner_binary }}"
cp "${INSTALL_ROOT}/bin/${{ matrix.cargo_dylint_binary }}" \
"$package_root/bin/${{ matrix.cargo_dylint_binary }}"
cp "tools/argument-comment-lint/target/${{ matrix.target }}/release/${{ matrix.lib_name }}" \
"$package_root/lib/${{ matrix.lib_name }}"
archive_path="$dest/${{ matrix.archive_name }}"
if [[ "${{ runner.os }}" == "Windows" ]]; then
(cd "${RUNNER_TEMP}" && 7z a "$GITHUB_WORKSPACE/$archive_path" argument-comment-lint >/dev/null)
else
(cd "${RUNNER_TEMP}" && tar -czf "$GITHUB_WORKSPACE/$archive_path" argument-comment-lint)
fi
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: argument-comment-lint-${{ matrix.target }}
path: dist/argument-comment-lint/${{ matrix.target }}/*
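Each staged archive unpacks to a single `argument-comment-lint/` root containing `bin/` and `lib/`, matching the staging step above. A quick local check for a tar-based target (the path mirrors the upload layout; adjust per target):

```bash
# Sketch: list the staged archive for the x86_64 Linux target.
tar -tzf dist/argument-comment-lint/x86_64-unknown-linux-gnu/argument-comment-lint-x86_64-unknown-linux-gnu.tar.gz
# Expected entries (from the staging step):
#   argument-comment-lint/bin/argument-comment-lint
#   argument-comment-lint/bin/cargo-dylint
#   argument-comment-lint/lib/libargument_comment_lint@nightly-2025-09-18-x86_64-unknown-linux-gnu.so
```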


@@ -18,7 +18,7 @@ jobs:
if: github.repository == 'openai/codex'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: actions/checkout@v6
with:
ref: main
fetch-depth: 0
@@ -43,7 +43,7 @@ jobs:
curl --http1.1 --fail --show-error --location "${headers[@]}" "${url}" | jq '.' > codex-rs/core/models.json
- name: Open pull request (if changed)
uses: peter-evans/create-pull-request@c0f553fe549906ede9cf27b5156039d195d2ece0 # v8
uses: peter-evans/create-pull-request@v8
with:
commit-message: "Update models.json"
title: "Update models.json"


@@ -67,7 +67,7 @@ jobs:
labels: codex-windows-arm64
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: actions/checkout@v6
- name: Print runner specs (Windows)
shell: powershell
run: |
@@ -82,7 +82,7 @@ jobs:
Write-Host "Total RAM: $ramGiB GiB"
Write-Host "Disk usage:"
Get-PSDrive -PSProvider FileSystem | Format-Table -AutoSize Name, @{Name='Size(GB)';Expression={[math]::Round(($_.Used + $_.Free) / 1GB, 1)}}, @{Name='Free(GB)';Expression={[math]::Round($_.Free / 1GB, 1)}}
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- uses: dtolnay/rust-toolchain@1.93.0
with:
targets: ${{ matrix.target }}
@@ -92,7 +92,7 @@ jobs:
cargo build --target ${{ matrix.target }} --release --timings ${{ matrix.build_args }}
- name: Upload Cargo timings
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
uses: actions/upload-artifact@v7
with:
name: cargo-timings-rust-release-windows-${{ matrix.target }}-${{ matrix.bundle }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
@@ -112,7 +112,7 @@ jobs:
fi
- name: Upload Windows binaries
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
uses: actions/upload-artifact@v7
with:
name: windows-binaries-${{ matrix.target }}-${{ matrix.bundle }}
path: |
@@ -147,16 +147,16 @@ jobs:
labels: codex-windows-arm64
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: actions/checkout@v6
- name: Download prebuilt Windows primary binaries
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
uses: actions/download-artifact@v8
with:
name: windows-binaries-${{ matrix.target }}-primary
path: codex-rs/target/${{ matrix.target }}/release
- name: Download prebuilt Windows helper binaries
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
uses: actions/download-artifact@v8
with:
name: windows-binaries-${{ matrix.target }}-helpers
path: codex-rs/target/${{ matrix.target }}/release
@@ -193,7 +193,7 @@ jobs:
cp target/${{ matrix.target }}/release/codex-command-runner.exe "$dest/codex-command-runner-${{ matrix.target }}.exe"
- name: Install DotSlash
uses: facebook/install-dotslash@1e4e7b3e07eaca387acb98f1d4720e0bee8dbb6a # v2
uses: facebook/install-dotslash@v2
- name: Compress artifacts
shell: bash
@@ -257,7 +257,7 @@ jobs:
"${GITHUB_WORKSPACE}/.github/workflows/zstd" -T0 -19 "$dest/$base"
done
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
- uses: actions/upload-artifact@v7
with:
name: ${{ matrix.target }}
path: |


@@ -1,95 +0,0 @@
name: rust-release-zsh
on:
workflow_call:
env:
ZSH_COMMIT: 77045ef899e53b9598bebc5a41db93a548a40ca6
ZSH_PATCH: codex-rs/shell-escalation/patches/zsh-exec-wrapper.patch
jobs:
linux:
name: Build zsh (Linux) - ${{ matrix.variant }} - ${{ matrix.target }}
runs-on: ${{ matrix.runner }}
timeout-minutes: 30
container:
image: ${{ matrix.image }}
strategy:
fail-fast: false
matrix:
include:
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
variant: ubuntu-24.04
image: ubuntu:24.04
archive_name: codex-zsh-x86_64-unknown-linux-musl.tar.gz
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: ubuntu-24.04
image: arm64v8/ubuntu:24.04
archive_name: codex-zsh-aarch64-unknown-linux-musl.tar.gz
steps:
- name: Install build prerequisites
shell: bash
run: |
set -euo pipefail
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y \
autoconf \
bison \
build-essential \
ca-certificates \
gettext \
git \
libncursesw5-dev
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Build, smoke-test, and stage zsh artifact
shell: bash
run: |
"${GITHUB_WORKSPACE}/.github/scripts/build-zsh-release-artifact.sh" \
"dist/zsh/${{ matrix.target }}/${{ matrix.archive_name }}"
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: codex-zsh-${{ matrix.target }}
path: dist/zsh/${{ matrix.target }}/*
darwin:
name: Build zsh (macOS) - ${{ matrix.variant }} - ${{ matrix.target }}
runs-on: ${{ matrix.runner }}
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
include:
- runner: macos-15-xlarge
target: aarch64-apple-darwin
variant: macos-15
archive_name: codex-zsh-aarch64-apple-darwin.tar.gz
steps:
- name: Install build prerequisites
shell: bash
run: |
set -euo pipefail
if ! command -v autoconf >/dev/null 2>&1; then
brew install autoconf
fi
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Build, smoke-test, and stage zsh artifact
shell: bash
run: |
"${GITHUB_WORKSPACE}/.github/scripts/build-zsh-release-artifact.sh" \
"dist/zsh/${{ matrix.target }}/${{ matrix.archive_name }}"
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: codex-zsh-${{ matrix.target }}
path: dist/zsh/${{ matrix.target }}/*


@@ -19,8 +19,8 @@ jobs:
tag-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: dtolnay/rust-toolchain@c2b55edffaf41a251c410bb32bed22afefa800f1 # 1.92
- uses: actions/checkout@v6
- uses: dtolnay/rust-toolchain@1.92
- name: Validate tag matches Cargo.toml version
shell: bash
run: |
@@ -79,7 +79,7 @@ jobs:
target: aarch64-unknown-linux-gnu
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- uses: actions/checkout@v6
- name: Print runner specs (Linux)
if: ${{ runner.os == 'Linux' }}
shell: bash
@@ -125,7 +125,7 @@ jobs:
sudo apt-get update -y
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y libubsan1
fi
- uses: dtolnay/rust-toolchain@a0b273b48ed29de4470960879e8381ff45632f26 # 1.93.0
- uses: dtolnay/rust-toolchain@1.93.0
with:
targets: ${{ matrix.target }}
@@ -142,7 +142,7 @@ jobs:
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl'}}
name: Install Zig
uses: mlugg/setup-zig@d1434d08867e3ee9daa34448df10607b98908d29 # v2
uses: mlugg/setup-zig@v2
with:
version: 0.14.0
@@ -210,24 +210,6 @@ jobs:
echo "CFLAGS=${cflags}" >> "$GITHUB_ENV"
echo "CXXFLAGS=${cxxflags}" >> "$GITHUB_ENV"
- if: ${{ matrix.target == 'x86_64-unknown-linux-musl' || matrix.target == 'aarch64-unknown-linux-musl' }}
name: Configure musl rusty_v8 artifact overrides
env:
TARGET: ${{ matrix.target }}
shell: bash
run: |
set -euo pipefail
version="$(python3 "${GITHUB_WORKSPACE}/.github/scripts/rusty_v8_bazel.py" resolved-v8-crate-version)"
release_tag="rusty-v8-v${version}"
base_url="https://github.com/openai/codex/releases/download/${release_tag}"
archive="https://github.com/openai/codex/releases/download/rusty-v8-v${version}/librusty_v8_release_${TARGET}.a.gz"
binding_dir="${RUNNER_TEMP}/rusty_v8"
binding_path="${binding_dir}/src_binding_release_${TARGET}.rs"
mkdir -p "${binding_dir}"
curl -fsSL "${base_url}/src_binding_release_${TARGET}.rs" -o "${binding_path}"
echo "RUSTY_V8_ARCHIVE=${archive}" >> "$GITHUB_ENV"
echo "RUSTY_V8_SRC_BINDING_PATH=${binding_path}" >> "$GITHUB_ENV"
- name: Cargo build
shell: bash
run: |
@@ -235,7 +217,7 @@ jobs:
cargo build --target ${{ matrix.target }} --release --timings --bin codex --bin codex-responses-api-proxy
- name: Upload Cargo timings
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
uses: actions/upload-artifact@v7
with:
name: cargo-timings-rust-release-${{ matrix.target }}
path: codex-rs/target/**/cargo-timings/cargo-timing.html
@@ -374,7 +356,7 @@ jobs:
zstd -T0 -19 --rm "$dest/$base"
done
- uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
- uses: actions/upload-artifact@v7
with:
name: ${{ matrix.target }}
# Upload the per-binary .zst files as well as the new .tar.gz
@@ -389,24 +371,20 @@ jobs:
release-lto: ${{ contains(github.ref_name, '-alpha') && 'thin' || 'fat' }}
secrets: inherit
argument-comment-lint-release-assets:
name: argument-comment-lint release assets
shell-tool-mcp:
name: shell-tool-mcp
needs: tag-check
uses: ./.github/workflows/rust-release-argument-comment-lint.yml
uses: ./.github/workflows/shell-tool-mcp.yml
with:
release-tag: ${{ github.ref_name }}
publish: true
zsh-release-assets:
name: zsh release assets
needs: tag-check
uses: ./.github/workflows/rust-release-zsh.yml
secrets: inherit
release:
needs:
- build
- build-windows
- argument-comment-lint-release-assets
- zsh-release-assets
- shell-tool-mcp
name: release
runs-on: ubuntu-latest
permissions:
@@ -420,7 +398,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
uses: actions/checkout@v6
- name: Generate release notes from tag commit message
id: release_notes
@@ -442,15 +420,18 @@ jobs:
echo "path=${notes_path}" >> "${GITHUB_OUTPUT}"
- uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
- uses: actions/download-artifact@v8
with:
path: dist
- name: List
run: ls -R dist/
# This is a temporary fix: we should modify shell-tool-mcp.yml so these
# files do not end up in dist/ in the first place.
- name: Delete entries from dist/ that should not go in the release
run: |
rm -rf dist/shell-tool-mcp*
rm -rf dist/windows-binaries*
# cargo-timing.html appears under multiple target-specific directories.
# If included in files: dist/**, release upload races on duplicate
@@ -492,12 +473,12 @@ jobs:
fi
- name: Setup pnpm
uses: pnpm/action-setup@a8198c4bff370c8506180b035930dea56dbd5288 # v5
uses: pnpm/action-setup@v4
with:
run_install: false
- name: Setup Node.js for npm packaging
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
uses: actions/setup-node@v6
with:
node-version: 22
@@ -505,14 +486,13 @@ jobs:
run: pnpm install --frozen-lockfile
# stage_npm_packages.py requires DotSlash when staging releases.
- uses: facebook/install-dotslash@1e4e7b3e07eaca387acb98f1d4720e0bee8dbb6a # v2
- uses: facebook/install-dotslash@v2
- name: Stage npm packages
env:
GH_TOKEN: ${{ github.token }}
RELEASE_VERSION: ${{ steps.release_name.outputs.name }}
run: |
./scripts/stage_npm_packages.py \
--release-version "$RELEASE_VERSION" \
--release-version "${{ steps.release_name.outputs.name }}" \
--package codex \
--package codex-responses-api-proxy \
--package codex-sdk
@@ -523,7 +503,7 @@ jobs:
cp scripts/install/install.ps1 dist/install.ps1
- name: Create GitHub Release
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2
uses: softprops/action-gh-release@v2
with:
name: ${{ steps.release_name.outputs.name }}
tag_name: ${{ github.ref_name }}
@@ -533,27 +513,13 @@ jobs:
# (e.g. -alpha, -beta). Otherwise publish a normal release.
prerelease: ${{ contains(steps.release_name.outputs.name, '-') }}
- uses: facebook/dotslash-publish-release@9c9ec027515c34db9282a09a25a9cab5880b2c52 # v2
- uses: facebook/dotslash-publish-release@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag: ${{ github.ref_name }}
config: .github/dotslash-config.json
- uses: facebook/dotslash-publish-release@9c9ec027515c34db9282a09a25a9cab5880b2c52 # v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag: ${{ github.ref_name }}
config: .github/dotslash-zsh-config.json
- uses: facebook/dotslash-publish-release@9c9ec027515c34db9282a09a25a9cab5880b2c52 # v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag: ${{ github.ref_name }}
config: .github/dotslash-argument-comment-lint-config.json
- name: Trigger developers.openai.com deploy
# Only trigger the deploy if the release is not a pre-release.
# The deploy is used to update the developers.openai.com website with the new config schema json file.
@@ -582,7 +548,7 @@ jobs:
steps:
- name: Setup Node.js
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
uses: actions/setup-node@v6
with:
node-version: 22
registry-url: "https://registry.npmjs.org"
@@ -595,12 +561,10 @@ jobs:
- name: Download npm tarballs from release
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
RELEASE_TAG: ${{ needs.release.outputs.tag }}
RELEASE_VERSION: ${{ needs.release.outputs.version }}
run: |
set -euo pipefail
version="$RELEASE_VERSION"
tag="$RELEASE_TAG"
version="${{ needs.release.outputs.version }}"
tag="${{ needs.release.outputs.tag }}"
mkdir -p dist/npm
patterns=(
"codex-npm-${version}.tgz"
@@ -693,7 +657,7 @@ jobs:
steps:
- name: Publish to WinGet
uses: vedantmgoyal9/winget-releaser@7bd472be23763def6e16bd06cc8b1cdfab0e2fd5
uses: vedantmgoyal9/winget-releaser@19e706d4c9121098010096f9c495a70a7518b30f
with:
identifier: OpenAI.Codex
version: ${{ needs.release.outputs.version }}


@@ -1,188 +0,0 @@
name: rusty-v8-release
on:
workflow_dispatch:
inputs:
release_tag:
description: Optional release tag. Defaults to rusty-v8-v<resolved_v8_version>.
required: false
type: string
publish:
description: Publish the staged musl artifacts to a GitHub release.
required: false
default: true
type: boolean
concurrency:
group: ${{ github.workflow }}::${{ inputs.release_tag || github.run_id }}
cancel-in-progress: false
jobs:
metadata:
runs-on: ubuntu-latest
outputs:
release_tag: ${{ steps.release_tag.outputs.release_tag }}
v8_version: ${{ steps.v8_version.outputs.version }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Python
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
with:
python-version: "3.12"
- name: Resolve exact v8 crate version
id: v8_version
shell: bash
run: |
set -euo pipefail
version="$(python3 .github/scripts/rusty_v8_bazel.py resolved-v8-crate-version)"
echo "version=${version}" >> "$GITHUB_OUTPUT"
- name: Resolve release tag
id: release_tag
env:
RELEASE_TAG_INPUT: ${{ inputs.release_tag }}
V8_VERSION: ${{ steps.v8_version.outputs.version }}
shell: bash
run: |
set -euo pipefail
release_tag="${RELEASE_TAG_INPUT}"
if [[ -z "${release_tag}" ]]; then
release_tag="rusty-v8-v${V8_VERSION}"
fi
echo "release_tag=${release_tag}" >> "$GITHUB_OUTPUT"
build:
name: Build ${{ matrix.target }}
needs: metadata
runs-on: ${{ matrix.runner }}
permissions:
contents: read
actions: read
strategy:
fail-fast: false
matrix:
include:
- runner: ubuntu-24.04
platform: linux_amd64_musl
target: x86_64-unknown-linux-musl
- runner: ubuntu-24.04-arm
platform: linux_arm64_musl
target: aarch64-unknown-linux-musl
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Bazel
uses: bazelbuild/setup-bazelisk@6ecf4fd8b7d1f9721785f1dd656a689acf9add47 # v3
- name: Set up Python
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
with:
python-version: "3.12"
- name: Build Bazel V8 release pair
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
PLATFORM: ${{ matrix.platform }}
TARGET: ${{ matrix.target }}
shell: bash
run: |
set -euo pipefail
target_suffix="${TARGET//-/_}"
pair_target="//third_party/v8:rusty_v8_release_pair_${target_suffix}"
extra_targets=()
if [[ "${TARGET}" == *-unknown-linux-musl ]]; then
extra_targets=(
"@llvm//runtimes/libcxx:libcxx.static"
"@llvm//runtimes/libcxx:libcxxabi.static"
)
fi
bazel_args=(
build
-c
opt
"--platforms=@llvm//platforms:${PLATFORM}"
"${pair_target}"
"${extra_targets[@]}"
--build_metadata=COMMIT_SHA=$(git rev-parse HEAD)
)
bazel \
--noexperimental_remote_repo_contents_cache \
"${bazel_args[@]}" \
--config=ci-v8 \
"--remote_header=x-buildbuddy-api-key=${BUILDBUDDY_API_KEY}"
- name: Stage release pair
env:
PLATFORM: ${{ matrix.platform }}
TARGET: ${{ matrix.target }}
shell: bash
run: |
set -euo pipefail
python3 .github/scripts/rusty_v8_bazel.py stage-release-pair \
--platform "${PLATFORM}" \
--target "${TARGET}" \
--compilation-mode opt \
--output-dir "dist/${TARGET}"
- name: Upload staged musl artifacts
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: rusty-v8-${{ needs.metadata.outputs.v8_version }}-${{ matrix.target }}
path: dist/${{ matrix.target }}/*
publish-release:
if: ${{ inputs.publish }}
needs:
- metadata
- build
runs-on: ubuntu-latest
permissions:
contents: write
actions: read
steps:
- name: Ensure publishing from default branch
if: ${{ github.ref_name != github.event.repository.default_branch }}
env:
DEFAULT_BRANCH: ${{ github.event.repository.default_branch }}
shell: bash
run: |
set -euo pipefail
echo "Publishing is only allowed from ${DEFAULT_BRANCH}; current ref is ${GITHUB_REF_NAME}." >&2
exit 1
- name: Ensure release tag is new
env:
GH_TOKEN: ${{ github.token }}
RELEASE_TAG: ${{ needs.metadata.outputs.release_tag }}
shell: bash
run: |
set -euo pipefail
if gh release view "${RELEASE_TAG}" --repo "${GITHUB_REPOSITORY}" > /dev/null 2>&1; then
echo "Release tag ${RELEASE_TAG} already exists; musl artifact tags are immutable." >&2
exit 1
fi
- uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
path: dist
- name: Create GitHub Release
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2
with:
tag_name: ${{ needs.metadata.outputs.release_tag }}
name: ${{ needs.metadata.outputs.release_tag }}
files: dist/**
# Keep V8 artifact releases out of Codex's normal "latest release" channel.
prerelease: true
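The default-tag logic in the `metadata` job collapses to one parameter expansion. A sketch mirroring it, where `RELEASE_TAG_INPUT` stands in for the optional `release_tag` workflow input and the script path is the repo's own:

```bash
#!/usr/bin/env bash
# Sketch: derive the rusty-v8 release tag the same way the metadata job does.
set -euo pipefail
version="$(python3 .github/scripts/rusty_v8_bazel.py resolved-v8-crate-version)"
release_tag="${RELEASE_TAG_INPUT:-rusty-v8-v${version}}"  # default when unset/empty
echo "release_tag=${release_tag}"
```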


@@ -7,13 +7,11 @@ on:
jobs:
sdks:
runs-on:
group: codex-runners
labels: codex-linux-x64
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
uses: actions/checkout@v6
- name: Install Linux bwrap build dependencies
shell: bash
@@ -23,82 +21,21 @@ jobs:
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends pkg-config libcap-dev
- name: Setup pnpm
uses: pnpm/action-setup@a8198c4bff370c8506180b035930dea56dbd5288 # v5
uses: pnpm/action-setup@v4
with:
run_install: false
- name: Setup Node.js
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6
uses: actions/setup-node@v6
with:
node-version: 22
cache: pnpm
- name: Set up Bazel CI
id: setup_bazel
uses: ./.github/actions/setup-bazel-ci
with:
target: x86_64-unknown-linux-gnu
- uses: dtolnay/rust-toolchain@1.93.0
- name: Build codex with Bazel
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
shell: bash
run: |
set -euo pipefail
# Use the shared CI wrapper so fork PRs fall back cleanly when
# BuildBuddy credentials are unavailable. This workflow needs the
# built `codex` binary on disk afterwards, so ask the wrapper to
# override CI's default remote_download_minimal behavior.
./.github/scripts/run-bazel-ci.sh \
--remote-download-toplevel \
-- \
build \
--build_metadata=COMMIT_SHA=${GITHUB_SHA} \
--build_metadata=TAG_job=sdk \
-- \
//codex-rs/cli:codex
# Resolve the exact output file using the same wrapper/config path as
# the build instead of guessing which Bazel convenience symlink is
# available on the runner.
cquery_output="$(
./.github/scripts/run-bazel-ci.sh \
-- \
cquery \
--output=files \
-- \
//codex-rs/cli:codex \
| grep -E '^(/|bazel-out/)' \
| tail -n 1
)"
if [[ "${cquery_output}" = /* ]]; then
codex_bazel_output_path="${cquery_output}"
else
codex_bazel_output_path="${GITHUB_WORKSPACE}/${cquery_output}"
fi
if [[ -z "${codex_bazel_output_path}" ]]; then
echo "Bazel did not report an output path for //codex-rs/cli:codex." >&2
exit 1
fi
if [[ ! -e "${codex_bazel_output_path}" ]]; then
echo "Unable to locate the Bazel-built codex binary at ${codex_bazel_output_path}." >&2
exit 1
fi
# Stage the binary into the workspace and point the SDK tests at that
# stable path. The tests spawn `codex` directly many times, so using a
# normal executable path is more reliable than invoking Bazel for each
# test process.
install_dir="${GITHUB_WORKSPACE}/.tmp/sdk-ci"
mkdir -p "${install_dir}"
install -m 755 "${codex_bazel_output_path}" "${install_dir}/codex"
echo "CODEX_EXEC_PATH=${install_dir}/codex" >> "$GITHUB_ENV"
- name: Warm up Bazel-built codex
shell: bash
run: |
set -euo pipefail
"${CODEX_EXEC_PATH}" --version
- name: build codex
run: cargo build --bin codex
working-directory: codex-rs
- name: Install dependencies
run: pnpm install --frozen-lockfile
@@ -111,12 +48,3 @@ jobs:
- name: Test SDK packages
run: pnpm -r --filter ./sdk/typescript run test
- name: Save bazel repository cache
if: always() && !cancelled() && steps.setup_bazel.outputs.cache-hit != 'true'
continue-on-error: true
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: |
~/.cache/bazel-repo-cache
key: bazel-cache-x86_64-unknown-linux-gnu-${{ hashFiles('MODULE.bazel', 'codex-rs/Cargo.lock', 'codex-rs/Cargo.toml') }}

.github/workflows/shell-tool-mcp-ci.yml (new file)

@@ -0,0 +1,48 @@
name: shell-tool-mcp CI
on:
push:
paths:
- "shell-tool-mcp/**"
- ".github/workflows/shell-tool-mcp-ci.yml"
- "pnpm-lock.yaml"
- "pnpm-workspace.yaml"
pull_request:
paths:
- "shell-tool-mcp/**"
- ".github/workflows/shell-tool-mcp-ci.yml"
- "pnpm-lock.yaml"
- "pnpm-workspace.yaml"
env:
NODE_VERSION: 22
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v6
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
run_install: false
- name: Setup Node.js
uses: actions/setup-node@v6
with:
node-version: ${{ env.NODE_VERSION }}
cache: "pnpm"
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Format check
run: pnpm --filter @openai/codex-shell-tool-mcp run format
- name: Run tests
run: pnpm --filter @openai/codex-shell-tool-mcp test
- name: Build
run: pnpm --filter @openai/codex-shell-tool-mcp run build

.github/workflows/shell-tool-mcp.yml (new file)

@@ -0,0 +1,548 @@
name: shell-tool-mcp
on:
workflow_call:
inputs:
release-version:
description: Version to publish (x.y.z or x.y.z-alpha.N). Defaults to GITHUB_REF_NAME when it starts with rust-v.
required: false
type: string
release-tag:
description: Tag name to use when downloading release artifacts (defaults to rust-v<version>).
required: false
type: string
publish:
description: Whether to publish to npm when the version is releasable.
required: false
default: true
type: boolean
env:
NODE_VERSION: 22
jobs:
metadata:
runs-on: ubuntu-latest
outputs:
version: ${{ steps.compute.outputs.version }}
release_tag: ${{ steps.compute.outputs.release_tag }}
should_publish: ${{ steps.compute.outputs.should_publish }}
npm_tag: ${{ steps.compute.outputs.npm_tag }}
steps:
- name: Compute version and tags
id: compute
run: |
set -euo pipefail
version="${{ inputs.release-version }}"
release_tag="${{ inputs.release-tag }}"
if [[ -z "$version" ]]; then
if [[ -n "$release_tag" && "$release_tag" =~ ^rust-v.+ ]]; then
version="${release_tag#rust-v}"
elif [[ "${GITHUB_REF_NAME:-}" =~ ^rust-v.+ ]]; then
version="${GITHUB_REF_NAME#rust-v}"
release_tag="${GITHUB_REF_NAME}"
else
echo "release-version is required when GITHUB_REF_NAME is not a rust-v tag."
exit 1
fi
fi
if [[ -z "$release_tag" ]]; then
release_tag="rust-v${version}"
fi
npm_tag=""
should_publish="false"
if [[ "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
should_publish="true"
elif [[ "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+-alpha\.[0-9]+$ ]]; then
should_publish="true"
npm_tag="alpha"
fi
echo "version=${version}" >> "$GITHUB_OUTPUT"
echo "release_tag=${release_tag}" >> "$GITHUB_OUTPUT"
echo "npm_tag=${npm_tag}" >> "$GITHUB_OUTPUT"
echo "should_publish=${should_publish}" >> "$GITHUB_OUTPUT"
bash-linux:
name: Build Bash (Linux) - ${{ matrix.variant }} - ${{ matrix.target }}
needs: metadata
runs-on: ${{ matrix.runner }}
timeout-minutes: 30
container:
image: ${{ matrix.image }}
strategy:
fail-fast: false
matrix:
include:
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
variant: ubuntu-24.04
image: ubuntu:24.04
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
variant: ubuntu-22.04
image: ubuntu:22.04
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
variant: debian-12
image: debian:12
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
variant: debian-11
image: debian:11
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
variant: centos-9
image: quay.io/centos/centos:stream9
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: ubuntu-24.04
image: arm64v8/ubuntu:24.04
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: ubuntu-22.04
image: arm64v8/ubuntu:22.04
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: ubuntu-20.04
image: arm64v8/ubuntu:20.04
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: debian-12
image: arm64v8/debian:12
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: debian-11
image: arm64v8/debian:11
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: centos-9
image: quay.io/centos/centos:stream9
steps:
- name: Install build prerequisites
shell: bash
run: |
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y git build-essential bison autoconf gettext libncursesw5-dev
elif command -v dnf >/dev/null 2>&1; then
dnf install -y git gcc gcc-c++ make bison autoconf gettext ncurses-devel
elif command -v yum >/dev/null 2>&1; then
yum install -y git gcc gcc-c++ make bison autoconf gettext ncurses-devel
else
echo "Unsupported package manager in container"
exit 1
fi
- name: Checkout repository
uses: actions/checkout@v6
- name: Build patched Bash
shell: bash
run: |
set -euo pipefail
git clone https://git.savannah.gnu.org/git/bash /tmp/bash
cd /tmp/bash
git checkout a8a1c2fac029404d3f42cd39f5a20f24b6e4fe4b
git apply "${GITHUB_WORKSPACE}/shell-tool-mcp/patches/bash-exec-wrapper.patch"
./configure --without-bash-malloc
cores="$(command -v nproc >/dev/null 2>&1 && nproc || getconf _NPROCESSORS_ONLN)"
make -j"${cores}"
dest="${GITHUB_WORKSPACE}/artifacts/vendor/${{ matrix.target }}/bash/${{ matrix.variant }}"
mkdir -p "$dest"
cp bash "$dest/bash"
- uses: actions/upload-artifact@v7
with:
name: shell-tool-mcp-bash-${{ matrix.target }}-${{ matrix.variant }}
path: artifacts/**
if-no-files-found: error
bash-darwin:
name: Build Bash (macOS) - ${{ matrix.variant }} - ${{ matrix.target }}
needs: metadata
runs-on: ${{ matrix.runner }}
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
include:
- runner: macos-15-xlarge
target: aarch64-apple-darwin
variant: macos-15
- runner: macos-14
target: aarch64-apple-darwin
variant: macos-14
steps:
- name: Checkout repository
uses: actions/checkout@v6
- name: Build patched Bash
shell: bash
run: |
set -euo pipefail
git clone https://git.savannah.gnu.org/git/bash /tmp/bash
cd /tmp/bash
git checkout a8a1c2fac029404d3f42cd39f5a20f24b6e4fe4b
git apply "${GITHUB_WORKSPACE}/shell-tool-mcp/patches/bash-exec-wrapper.patch"
./configure --without-bash-malloc
cores="$(getconf _NPROCESSORS_ONLN)"
make -j"${cores}"
dest="${GITHUB_WORKSPACE}/artifacts/vendor/${{ matrix.target }}/bash/${{ matrix.variant }}"
mkdir -p "$dest"
cp bash "$dest/bash"
- uses: actions/upload-artifact@v7
with:
name: shell-tool-mcp-bash-${{ matrix.target }}-${{ matrix.variant }}
path: artifacts/**
if-no-files-found: error
zsh-linux:
name: Build zsh (Linux) - ${{ matrix.variant }} - ${{ matrix.target }}
needs: metadata
runs-on: ${{ matrix.runner }}
timeout-minutes: 30
container:
image: ${{ matrix.image }}
strategy:
fail-fast: false
matrix:
include:
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
variant: ubuntu-24.04
image: ubuntu:24.04
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
variant: ubuntu-22.04
image: ubuntu:22.04
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
variant: debian-12
image: debian:12
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
variant: debian-11
image: debian:11
- runner: ubuntu-24.04
target: x86_64-unknown-linux-musl
variant: centos-9
image: quay.io/centos/centos:stream9
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: ubuntu-24.04
image: arm64v8/ubuntu:24.04
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: ubuntu-22.04
image: arm64v8/ubuntu:22.04
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: ubuntu-20.04
image: arm64v8/ubuntu:20.04
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: debian-12
image: arm64v8/debian:12
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: debian-11
image: arm64v8/debian:11
- runner: ubuntu-24.04-arm
target: aarch64-unknown-linux-musl
variant: centos-9
image: quay.io/centos/centos:stream9
steps:
- name: Install build prerequisites
shell: bash
run: |
set -euo pipefail
if command -v apt-get >/dev/null 2>&1; then
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y git build-essential bison autoconf gettext libncursesw5-dev
elif command -v dnf >/dev/null 2>&1; then
dnf install -y git gcc gcc-c++ make bison autoconf gettext ncurses-devel
elif command -v yum >/dev/null 2>&1; then
yum install -y git gcc gcc-c++ make bison autoconf gettext ncurses-devel
else
echo "Unsupported package manager in container"
exit 1
fi
- name: Checkout repository
uses: actions/checkout@v6
- name: Build patched zsh
shell: bash
run: |
set -euo pipefail
git clone https://git.code.sf.net/p/zsh/code /tmp/zsh
cd /tmp/zsh
git checkout 77045ef899e53b9598bebc5a41db93a548a40ca6
git apply "${GITHUB_WORKSPACE}/shell-tool-mcp/patches/zsh-exec-wrapper.patch"
./Util/preconfig
./configure
cores="$(command -v nproc >/dev/null 2>&1 && nproc || getconf _NPROCESSORS_ONLN)"
make -j"${cores}"
dest="${GITHUB_WORKSPACE}/artifacts/vendor/${{ matrix.target }}/zsh/${{ matrix.variant }}"
mkdir -p "$dest"
cp Src/zsh "$dest/zsh"
- name: Smoke test zsh exec wrapper
shell: bash
run: |
set -euo pipefail
tmpdir="$(mktemp -d)"
cat > "$tmpdir/exec-wrapper" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
: "${CODEX_WRAPPER_LOG:?missing CODEX_WRAPPER_LOG}"
printf '%s\n' "$@" > "$CODEX_WRAPPER_LOG"
file="$1"
shift
if [[ "$#" -eq 0 ]]; then
exec "$file"
fi
arg0="$1"
shift
exec -a "$arg0" "$file" "$@"
EOF
chmod +x "$tmpdir/exec-wrapper"
CODEX_WRAPPER_LOG="$tmpdir/wrapper.log" \
EXEC_WRAPPER="$tmpdir/exec-wrapper" \
/tmp/zsh/Src/zsh -fc '/bin/echo smoke-zsh' > "$tmpdir/stdout.txt"
grep -Fx "smoke-zsh" "$tmpdir/stdout.txt"
grep -Fx "/bin/echo" "$tmpdir/wrapper.log"
- uses: actions/upload-artifact@v7
with:
name: shell-tool-mcp-zsh-${{ matrix.target }}-${{ matrix.variant }}
path: artifacts/**
if-no-files-found: error
zsh-darwin:
name: Build zsh (macOS) - ${{ matrix.variant }} - ${{ matrix.target }}
needs: metadata
runs-on: ${{ matrix.runner }}
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
include:
- runner: macos-15-xlarge
target: aarch64-apple-darwin
variant: macos-15
- runner: macos-14
target: aarch64-apple-darwin
variant: macos-14
steps:
- name: Install build prerequisites
shell: bash
run: |
set -euo pipefail
if ! command -v autoconf >/dev/null 2>&1; then
brew install autoconf
fi
- name: Checkout repository
uses: actions/checkout@v6
- name: Build patched zsh
shell: bash
run: |
set -euo pipefail
git clone https://git.code.sf.net/p/zsh/code /tmp/zsh
cd /tmp/zsh
git checkout 77045ef899e53b9598bebc5a41db93a548a40ca6
git apply "${GITHUB_WORKSPACE}/shell-tool-mcp/patches/zsh-exec-wrapper.patch"
./Util/preconfig
./configure
cores="$(getconf _NPROCESSORS_ONLN)"
make -j"${cores}"
dest="${GITHUB_WORKSPACE}/artifacts/vendor/${{ matrix.target }}/zsh/${{ matrix.variant }}"
mkdir -p "$dest"
cp Src/zsh "$dest/zsh"
- name: Smoke test zsh exec wrapper
shell: bash
run: |
set -euo pipefail
tmpdir="$(mktemp -d)"
cat > "$tmpdir/exec-wrapper" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
: "${CODEX_WRAPPER_LOG:?missing CODEX_WRAPPER_LOG}"
printf '%s\n' "$@" > "$CODEX_WRAPPER_LOG"
file="$1"
shift
if [[ "$#" -eq 0 ]]; then
exec "$file"
fi
arg0="$1"
shift
exec -a "$arg0" "$file" "$@"
EOF
chmod +x "$tmpdir/exec-wrapper"
CODEX_WRAPPER_LOG="$tmpdir/wrapper.log" \
EXEC_WRAPPER="$tmpdir/exec-wrapper" \
/tmp/zsh/Src/zsh -fc '/bin/echo smoke-zsh' > "$tmpdir/stdout.txt"
grep -Fx "smoke-zsh" "$tmpdir/stdout.txt"
grep -Fx "/bin/echo" "$tmpdir/wrapper.log"
- uses: actions/upload-artifact@v7
with:
name: shell-tool-mcp-zsh-${{ matrix.target }}-${{ matrix.variant }}
path: artifacts/**
if-no-files-found: error
package:
name: Package npm module
needs:
- metadata
- bash-linux
- bash-darwin
- zsh-linux
- zsh-darwin
runs-on: ubuntu-latest
env:
PACKAGE_VERSION: ${{ needs.metadata.outputs.version }}
steps:
- name: Checkout repository
uses: actions/checkout@v6
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
run_install: false
- name: Setup Node.js
uses: actions/setup-node@v6
with:
node-version: ${{ env.NODE_VERSION }}
- name: Install JavaScript dependencies
run: pnpm install --frozen-lockfile
- name: Build (shell-tool-mcp)
run: pnpm --filter @openai/codex-shell-tool-mcp run build
- name: Download build artifacts
uses: actions/download-artifact@v8
with:
path: artifacts
- name: Assemble staging directory
id: staging
shell: bash
run: |
set -euo pipefail
staging="${STAGING_DIR}"
mkdir -p "$staging" "$staging/vendor"
cp shell-tool-mcp/README.md "$staging/"
cp shell-tool-mcp/package.json "$staging/"
found_vendor="false"
shopt -s nullglob
for vendor_dir in artifacts/*/vendor; do
rsync -av "$vendor_dir/" "$staging/vendor/"
found_vendor="true"
done
if [[ "$found_vendor" == "false" ]]; then
echo "No vendor payloads were downloaded."
exit 1
fi
node - <<'NODE'
import fs from "node:fs";
import path from "node:path";
const stagingDir = process.env.STAGING_DIR;
const version = process.env.PACKAGE_VERSION;
const pkgPath = path.join(stagingDir, "package.json");
const pkg = JSON.parse(fs.readFileSync(pkgPath, "utf8"));
pkg.version = version;
fs.writeFileSync(pkgPath, JSON.stringify(pkg, null, 2) + "\n");
NODE
echo "dir=$staging" >> "$GITHUB_OUTPUT"
env:
STAGING_DIR: ${{ runner.temp }}/shell-tool-mcp
- name: Ensure binaries are executable
run: |
set -euo pipefail
staging="${{ steps.staging.outputs.dir }}"
chmod +x \
"$staging"/vendor/*/bash/*/bash \
"$staging"/vendor/*/zsh/*/zsh
- name: Create npm tarball
shell: bash
run: |
set -euo pipefail
mkdir -p dist/npm
staging="${{ steps.staging.outputs.dir }}"
pack_info=$(cd "$staging" && npm pack --ignore-scripts --json --pack-destination "${GITHUB_WORKSPACE}/dist/npm")
filename=$(PACK_INFO="$pack_info" node -e 'const data = JSON.parse(process.env.PACK_INFO); console.log(data[0].filename);')
mv "dist/npm/${filename}" "dist/npm/codex-shell-tool-mcp-npm-${PACKAGE_VERSION}.tgz"
- uses: actions/upload-artifact@v7
with:
name: codex-shell-tool-mcp-npm
path: dist/npm/codex-shell-tool-mcp-npm-${{ env.PACKAGE_VERSION }}.tgz
if-no-files-found: error
publish:
name: Publish npm package
needs:
- metadata
- package
if: ${{ inputs.publish && needs.metadata.outputs.should_publish == 'true' }}
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
steps:
- name: Setup Node.js
uses: actions/setup-node@v6
with:
node-version: ${{ env.NODE_VERSION }}
registry-url: https://registry.npmjs.org
scope: "@openai"
# Trusted publishing requires npm CLI version 11.5.1 or later.
- name: Update npm
run: npm install -g npm@latest
- name: Download npm tarball
uses: actions/download-artifact@v8
with:
name: codex-shell-tool-mcp-npm
path: dist/npm
- name: Publish to npm
env:
NPM_TAG: ${{ needs.metadata.outputs.npm_tag }}
VERSION: ${{ needs.metadata.outputs.version }}
shell: bash
run: |
set -euo pipefail
tag_args=()
if [[ -n "${NPM_TAG}" ]]; then
tag_args+=(--tag "${NPM_TAG}")
fi
npm publish "dist/npm/codex-shell-tool-mcp-npm-${VERSION}.tgz" "${tag_args[@]}"
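
Taken together, the `metadata` job above gates npm publishing purely on the shape of the version string: plain `x.y.z` versions publish to the default tag, `x.y.z-alpha.N` versions publish under the `alpha` dist-tag, and anything else is skipped. A minimal local sketch that mirrors those two regexes (the `classify` helper is illustrative only, not part of the workflow):

classify() {
  local version="$1"
  if [[ "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "publish to the default (latest) tag"
  elif [[ "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+-alpha\.[0-9]+$ ]]; then
    echo "publish with --tag alpha"
  else
    echo "skip publishing"
  fi
}
classify "0.113.0"          # publish to the default (latest) tag
classify "0.113.0-alpha.2"  # publish with --tag alpha
classify "0.113.0-rc.1"     # skip publishing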


@@ -1,132 +0,0 @@
name: v8-canary
on:
pull_request:
paths:
- ".github/scripts/rusty_v8_bazel.py"
- ".github/workflows/rusty-v8-release.yml"
- ".github/workflows/v8-canary.yml"
- "MODULE.bazel"
- "MODULE.bazel.lock"
- "codex-rs/Cargo.toml"
- "patches/BUILD.bazel"
- "patches/v8_*.patch"
- "third_party/v8/**"
push:
branches:
- main
paths:
- ".github/scripts/rusty_v8_bazel.py"
- ".github/workflows/rusty-v8-release.yml"
- ".github/workflows/v8-canary.yml"
- "MODULE.bazel"
- "MODULE.bazel.lock"
- "codex-rs/Cargo.toml"
- "patches/BUILD.bazel"
- "patches/v8_*.patch"
- "third_party/v8/**"
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}::${{ github.event.pull_request.number > 0 && format('pr-{0}', github.event.pull_request.number) || github.ref_name }}
cancel-in-progress: ${{ github.ref_name != 'main' }}
jobs:
metadata:
runs-on: ubuntu-latest
outputs:
v8_version: ${{ steps.v8_version.outputs.version }}
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Python
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
with:
python-version: "3.12"
- name: Resolve exact v8 crate version
id: v8_version
shell: bash
run: |
set -euo pipefail
version="$(python3 .github/scripts/rusty_v8_bazel.py resolved-v8-crate-version)"
echo "version=${version}" >> "$GITHUB_OUTPUT"
build:
name: Build ${{ matrix.target }}
needs: metadata
runs-on: ${{ matrix.runner }}
permissions:
contents: read
actions: read
strategy:
fail-fast: false
matrix:
include:
- runner: ubuntu-24.04
platform: linux_amd64_musl
target: x86_64-unknown-linux-musl
- runner: ubuntu-24.04-arm
platform: linux_arm64_musl
target: aarch64-unknown-linux-musl
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6
- name: Set up Bazel
uses: bazelbuild/setup-bazelisk@6ecf4fd8b7d1f9721785f1dd656a689acf9add47 # v3
- name: Set up Python
uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6
with:
python-version: "3.12"
- name: Build Bazel V8 release pair
env:
BUILDBUDDY_API_KEY: ${{ secrets.BUILDBUDDY_API_KEY }}
PLATFORM: ${{ matrix.platform }}
TARGET: ${{ matrix.target }}
shell: bash
run: |
set -euo pipefail
target_suffix="${TARGET//-/_}"
pair_target="//third_party/v8:rusty_v8_release_pair_${target_suffix}"
extra_targets=(
"@llvm//runtimes/libcxx:libcxx.static"
"@llvm//runtimes/libcxx:libcxxabi.static"
)
bazel_args=(
build
"--platforms=@llvm//platforms:${PLATFORM}"
"${pair_target}"
"${extra_targets[@]}"
--build_metadata=COMMIT_SHA=$(git rev-parse HEAD)
)
bazel \
--noexperimental_remote_repo_contents_cache \
"${bazel_args[@]}" \
--config=ci-v8 \
"--remote_header=x-buildbuddy-api-key=${BUILDBUDDY_API_KEY}"
- name: Stage release pair
env:
PLATFORM: ${{ matrix.platform }}
TARGET: ${{ matrix.target }}
shell: bash
run: |
set -euo pipefail
python3 .github/scripts/rusty_v8_bazel.py stage-release-pair \
--platform "${PLATFORM}" \
--target "${TARGET}" \
--output-dir "dist/${TARGET}"
- name: Upload staged musl artifacts
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: v8-canary-${{ needs.metadata.outputs.v8_version }}-${{ matrix.target }}
path: dist/${{ matrix.target }}/*

.gitignore

@@ -10,7 +10,6 @@ node_modules
# build
dist/
bazel-*
user.bazelrc
build/
out/
storybook-static/


@@ -11,13 +11,7 @@ In the codex-rs folder where the rust code lives:
- Always collapse if statements per https://rust-lang.github.io/rust-clippy/master/index.html#collapsible_if
- Always inline format! args when possible per https://rust-lang.github.io/rust-clippy/master/index.html#uninlined_format_args
- Use method references over closures when possible per https://rust-lang.github.io/rust-clippy/master/index.html#redundant_closure_for_method_calls
- Avoid bool or ambiguous `Option` parameters that force callers to write hard-to-read code such as `foo(false)` or `bar(None)`. Prefer enums, named methods, newtypes, or other idiomatic Rust API shapes when they keep the callsite self-documenting.
- When you cannot make that API change and still need a small positional-literal callsite in Rust, follow the `argument_comment_lint` convention:
- Use an exact `/*param_name*/` comment before opaque literal arguments such as `None`, booleans, and numeric literals when passing them by position.
- Do not add these comments for string or char literals unless the comment adds real clarity; those literals are intentionally exempt from the lint.
- If you add one of these comments, the parameter name must exactly match the callee signature.
- When possible, make `match` statements exhaustive and avoid wildcard arms.
- Newly added traits should include doc comments that explain their role and how implementations are expected to use them.
- When writing tests, prefer comparing the equality of entire objects over fields one by one.
- When making a change that adds or changes an API, ensure that the documentation in the `docs/` folder is up to date if applicable.
- If you change `ConfigToml` or nested config types, run `just write-config-schema` to update `codex-rs/core/config.schema.json`.
@@ -25,23 +19,7 @@ In the codex-rs folder where the rust code lives:
repo root to refresh `MODULE.bazel.lock`, and include that lockfile update in the same change.
- After dependency changes, run `just bazel-lock-check` from the repo root so lockfile drift is caught
locally before CI.
- Bazel does not automatically make source-tree files available to compile-time Rust file access. If
you add `include_str!`, `include_bytes!`, `sqlx::migrate!`, or similar build-time file or
directory reads, update the crate's `BUILD.bazel` (`compile_data`, `build_script_data`, or test
data) or Bazel may fail even when Cargo passes.
- Do not create small helper methods that are referenced only once.
- Avoid large modules:
- Prefer adding new modules instead of growing existing ones.
- Target Rust modules under 500 LoC, excluding tests.
- If a file exceeds roughly 800 LoC, add new functionality in a new module instead of extending
the existing file unless there is a strong documented reason not to.
- This rule applies especially to high-touch files that already attract unrelated changes, such
as `codex-rs/tui/src/app.rs`, `codex-rs/tui/src/bottom_pane/chat_composer.rs`,
`codex-rs/tui/src/bottom_pane/footer.rs`, `codex-rs/tui/src/chatwidget.rs`,
`codex-rs/tui/src/bottom_pane/mod.rs`, and similarly central orchestration modules.
- When extracting code from a large module, move the related tests and module/type docs toward
the new implementation so the invariants stay close to the code that owns them.
- When running Rust commands (e.g. `just fix` or `cargo test`), be patient and never try to kill them by PID. Cargo's file lock can make execution slow; this is expected.
Run `just fmt` (in `codex-rs` directory) automatically after you have finished making Rust code changes; do not ask for approval to run it. Additionally, run the tests:
@@ -50,21 +28,6 @@ Run `just fmt` (in `codex-rs` directory) automatically after you have finished m
Before finalizing a large change to `codex-rs`, run `just fix -p <project>` (in `codex-rs` directory) to fix any linter issues in the code. Prefer scoping with `-p` to avoid slow workspace-wide Clippy builds; only run `just fix` without `-p` if you changed shared crates. Do not re-run tests after running `fix` or `fmt`.
Also run `just argument-comment-lint` to ensure the codebase is clean of comment lint errors.
## The `codex-core` crate
Over time, the `codex-core` crate (defined in `codex-rs/core/`) has become bloated. Because it is the largest crate, it is often easier to add something new to `codex-core` than to refactor out the library code you need so that your new code neither takes a dependency on, nor contributes to the size of, `codex-core`.
To that end: **resist adding code to codex-core**!
Particularly when introducing a new concept/feature/API, before adding to `codex-core`, consider whether:
- There is an existing crate other than `codex-core` that is an appropriate place for your new code to live.
- It is time to introduce a new crate to the Cargo workspace for your new functionality. Refactor existing code as necessary to make this happen.
Likewise, when reviewing code, do not hesitate to push back on PRs that would unnecessarily add code to `codex-core`.
## TUI style conventions
See `codex-rs/tui/styles.md`.


@@ -17,19 +17,12 @@ platform(
platform(
name = "local_windows",
constraint_values = [
# We just need to pick one of the ABIs; use the same one we target.
"@rules_rs//rs/experimental/platforms/constraints:windows_gnullvm",
],
parents = ["@platforms//host"],
)
platform(
name = "local_windows_msvc",
constraint_values = [
"@rules_rs//rs/experimental/platforms/constraints:windows_msvc",
],
parents = ["@platforms//host"],
)
alias(
name = "rbe",
actual = "@rbe_platform",


@@ -1,139 +1,49 @@
module(name = "codex")
bazel_dep(name = "bazel_skylib", version = "1.8.2")
bazel_dep(name = "platforms", version = "1.0.0")
bazel_dep(name = "llvm", version = "0.6.8")
# The upstream LLVM archive contains a few unix-only symlink entries and is
# missing a couple of MinGW compatibility archives that windows-gnullvm needs
# during extraction and linking, so patch it until upstream grows native support.
bazel_dep(name = "llvm", version = "0.6.1")
single_version_override(
module_name = "llvm",
patch_strip = 1,
patches = [
"//patches:llvm_windows_symlink_extract.patch",
],
)
# Abseil picks a MinGW pthread TLS path that does not match our hermetic
# windows-gnullvm toolchain; force it onto the portable C++11 thread-local path.
single_version_override(
module_name = "abseil-cpp",
patch_strip = 1,
patches = [
"//patches:abseil_windows_gnullvm_thread_identity.patch",
"//patches:toolchains_llvm_bootstrapped_resource_dir.patch",
],
)
register_toolchains("@llvm//toolchain:all")
osx = use_extension("@llvm//extensions:osx.bzl", "osx")
osx.from_archive(
sha256 = "1bde70c0b1c2ab89ff454acbebf6741390d7b7eb149ca2a3ca24cc9203a408b7",
strip_prefix = "Payload/Library/Developer/CommandLineTools/SDKs/MacOSX26.4.sdk",
type = "pkg",
urls = [
"https://swcdn.apple.com/content/downloads/32/53/047-96692-A_OAHIHT53YB/ybtshxmrcju8m2qvw3w5elr4rajtg1x3y3/CLTools_macOSNMOS_SDK.pkg",
],
)
osx.frameworks(names = [
"ApplicationServices",
"AppKit",
"ColorSync",
"CoreFoundation",
"CoreGraphics",
"CoreServices",
"CoreText",
"AudioToolbox",
"CFNetwork",
"FontServices",
"AudioUnit",
"CoreAudio",
"CoreAudioTypes",
"Foundation",
"ImageIO",
"IOKit",
"Kernel",
"OSLog",
"Security",
"SystemConfiguration",
])
osx.framework(name = "ApplicationServices")
osx.framework(name = "AppKit")
osx.framework(name = "ColorSync")
osx.framework(name = "CoreFoundation")
osx.framework(name = "CoreGraphics")
osx.framework(name = "CoreServices")
osx.framework(name = "CoreText")
osx.framework(name = "AudioToolbox")
osx.framework(name = "CFNetwork")
osx.framework(name = "FontServices")
osx.framework(name = "AudioUnit")
osx.framework(name = "CoreAudio")
osx.framework(name = "CoreAudioTypes")
osx.framework(name = "Foundation")
osx.framework(name = "ImageIO")
osx.framework(name = "IOKit")
osx.framework(name = "Kernel")
osx.framework(name = "OSLog")
osx.framework(name = "Security")
osx.framework(name = "SystemConfiguration")
use_repo(osx, "macos_sdk")
# Needed to disable xcode...
bazel_dep(name = "apple_support", version = "2.1.0")
bazel_dep(name = "rules_cc", version = "0.2.16")
bazel_dep(name = "rules_platform", version = "0.1.0")
bazel_dep(name = "rules_rs", version = "0.0.43")
# `rules_rs` 0.0.43 does not model `windows-gnullvm` as a distinct Windows exec
# platform, so patch it until upstream grows that support for both x86_64 and
# aarch64.
single_version_override(
module_name = "rules_rs",
patch_strip = 1,
patches = [
"//patches:rules_rs_windows_gnullvm_exec.patch",
],
version = "0.0.43",
)
bazel_dep(name = "rules_rs", version = "0.0.40")
rules_rust = use_extension("@rules_rs//rs/experimental:rules_rust.bzl", "rules_rust")
# Build-script probe binaries inherit CFLAGS/CXXFLAGS from Bazel's C++
# toolchain. On `windows-gnullvm`, llvm-mingw does not ship
# `libssp_nonshared`, so strip the forwarded stack-protector flags there.
rules_rust.patch(
patches = [
"//patches:rules_rust_windows_gnullvm_build_script.patch",
"//patches:rules_rust_windows_exec_msvc_build_script_env.patch",
"//patches:rules_rust_windows_bootstrap_process_wrapper_linker.patch",
"//patches:rules_rust_windows_msvc_direct_link_args.patch",
"//patches:rules_rust_windows_exec_bin_target.patch",
"//patches:rules_rust_windows_exec_std.patch",
"//patches:rules_rust_windows_exec_rustc_dev_rlib.patch",
"//patches:rules_rust_repository_set_exec_constraints.patch",
],
strip = 1,
)
use_repo(rules_rust, "rules_rust")
nightly_rust = use_extension(
"@rules_rs//rs/experimental:rules_rust_reexported_extensions.bzl",
"rust",
)
nightly_rust.toolchain(
versions = ["nightly/2025-09-18"],
dev_components = True,
edition = "2024",
)
# Keep Windows exec tools on MSVC so Bazel helper binaries link correctly, but
# lint crate targets as `windows-gnullvm` to preserve the repo's actual cfgs.
nightly_rust.repository_set(
name = "rust_windows_x86_64",
dev_components = True,
edition = "2024",
exec_triple = "x86_64-pc-windows-msvc",
exec_compatible_with = [
"@platforms//cpu:x86_64",
"@platforms//os:windows",
"@rules_rs//rs/experimental/platforms/constraints:windows_msvc",
],
target_compatible_with = [
"@platforms//cpu:x86_64",
"@platforms//os:windows",
"@rules_rs//rs/experimental/platforms/constraints:windows_msvc",
],
target_triple = "x86_64-pc-windows-msvc",
versions = ["nightly/2025-09-18"],
)
nightly_rust.repository_set(
name = "rust_windows_x86_64",
target_compatible_with = [
"@platforms//cpu:x86_64",
"@platforms//os:windows",
"@rules_rs//rs/experimental/platforms/constraints:windows_gnullvm",
],
target_triple = "x86_64-pc-windows-gnullvm",
)
use_repo(nightly_rust, "rust_toolchains")
toolchains = use_extension("@rules_rs//rs/experimental/toolchains:module_extension.bzl", "toolchains")
toolchains.toolchain(
edition = "2024",
@@ -142,7 +52,6 @@ toolchains.toolchain(
use_repo(toolchains, "default_rust_toolchains")
register_toolchains("@default_rust_toolchains//:all")
register_toolchains("@rust_toolchains//:all")
crate = use_extension("@rules_rs//rs:extensions.bzl", "crate")
crate.from_cargo(
@@ -152,33 +61,10 @@ crate.from_cargo(
"aarch64-unknown-linux-gnu",
"aarch64-unknown-linux-musl",
"aarch64-apple-darwin",
# Keep both Windows ABIs in the generated Cargo metadata: the V8
# experiment still consumes release assets that only exist under the
# MSVC names while targeting the GNU toolchain.
"aarch64-pc-windows-msvc",
"aarch64-pc-windows-gnullvm",
"x86_64-unknown-linux-gnu",
"x86_64-unknown-linux-musl",
"x86_64-apple-darwin",
"x86_64-pc-windows-msvc",
"x86_64-pc-windows-gnullvm",
],
use_experimental_platforms = True,
)
crate.from_cargo(
name = "argument_comment_lint_crates",
cargo_lock = "//tools/argument-comment-lint:Cargo.lock",
cargo_toml = "//tools/argument-comment-lint:Cargo.toml",
platform_triples = [
"aarch64-unknown-linux-gnu",
"aarch64-unknown-linux-musl",
"aarch64-apple-darwin",
"aarch64-pc-windows-msvc",
"aarch64-pc-windows-gnullvm",
"x86_64-unknown-linux-gnu",
"x86_64-unknown-linux-musl",
"x86_64-apple-darwin",
"x86_64-pc-windows-msvc",
"x86_64-pc-windows-gnullvm",
],
use_experimental_platforms = True,
@@ -199,21 +85,13 @@ crate.annotation(
patch_args = ["-p1"],
patches = [
"//patches:aws-lc-sys_memcmp_check.patch",
"//patches:aws-lc-sys_windows_msvc_prebuilt_nasm.patch",
"//patches:aws-lc-sys_windows_msvc_memcmp_probe.patch",
],
)
crate.annotation(
# The build script only validates embedded source/version metadata.
crate = "rustc_apfloat",
gen_build_script = "off",
)
inject_repo(crate, "zstd")
use_repo(crate, "argument_comment_lint_crates")
bazel_dep(name = "bzip2", version = "1.0.8.bcr.3")
bazel_dep(name = "libcap", version = "2.27.bcr.1")
crate.annotation(
crate = "bzip2-sys",
@@ -262,35 +140,6 @@ crate.annotation(
workspace_cargo_toml = "rust/runfiles/Cargo.toml",
)
http_archive = use_repo_rule("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_file = use_repo_rule("@bazel_tools//tools/build_defs/repo:http.bzl", "http_file")
new_local_repository = use_repo_rule("@bazel_tools//tools/build_defs/repo:local.bzl", "new_local_repository")
new_local_repository(
name = "v8_targets",
build_file = "//third_party/v8:BUILD.bazel",
path = "third_party/v8",
)
crate.annotation(
build_script_data = [
"@v8_targets//:rusty_v8_archive_for_target",
"@v8_targets//:rusty_v8_binding_for_target",
],
build_script_env = {
"RUSTY_V8_ARCHIVE": "$(execpath @v8_targets//:rusty_v8_archive_for_target)",
"RUSTY_V8_SRC_BINDING_PATH": "$(execpath @v8_targets//:rusty_v8_binding_for_target)",
},
crate = "v8",
gen_build_script = "on",
patch_args = ["-p1"],
patches = [
"//patches:rusty_v8_prebuilt_out_dir.patch",
],
)
inject_repo(crate, "v8_targets")
llvm = use_extension("@llvm//extensions:llvm.bzl", "llvm")
use_repo(llvm, "llvm-project")
@@ -300,13 +149,13 @@ crate.annotation(
"@macos_sdk//sysroot",
],
build_script_env = {
"BINDGEN_EXTRA_CLANG_ARGS": "-Xclang -internal-isystem -Xclang $(location @llvm//:builtin_resource_dir)/include",
"BINDGEN_EXTRA_CLANG_ARGS": "-isystem $(location @llvm//:builtin_headers)",
"COREAUDIO_SDK_PATH": "$(location @macos_sdk//sysroot)",
"LIBCLANG_PATH": "$(location @llvm-project//clang:libclang_interface_output)",
},
build_script_tools = [
"@llvm-project//clang:libclang_interface_output",
"@llvm//:builtin_resource_dir",
"@llvm//:builtin_headers",
],
crate = "coreaudio-sys",
gen_build_script = "on",
@@ -333,113 +182,8 @@ crate.annotation(
inject_repo(crate, "alsa_lib")
bazel_dep(name = "v8", version = "14.6.202.9")
archive_override(
module_name = "v8",
integrity = "sha256-JphDwLAzsd9KvgRZ7eQvNtPU6qGd3XjFt/a/1QITAJU=",
patch_strip = 3,
patches = [
"//patches:v8_module_deps.patch",
"//patches:v8_bazel_rules.patch",
"//patches:v8_source_portability.patch",
],
strip_prefix = "v8-14.6.202.9",
urls = ["https://github.com/v8/v8/archive/refs/tags/14.6.202.9.tar.gz"],
)
http_archive(
name = "v8_crate_146_4_0",
build_file = "//third_party/v8:v8_crate.BUILD.bazel",
sha256 = "d97bcac5cdc5a195a4813f1855a6bc658f240452aac36caa12fd6c6f16026ab1",
strip_prefix = "v8-146.4.0",
type = "tar.gz",
urls = ["https://static.crates.io/crates/v8/v8-146.4.0.crate"],
)
http_file(
name = "rusty_v8_146_4_0_aarch64_apple_darwin_archive",
downloaded_file_path = "librusty_v8_release_aarch64-apple-darwin.a.gz",
urls = [
"https://github.com/denoland/rusty_v8/releases/download/v146.4.0/librusty_v8_release_aarch64-apple-darwin.a.gz",
],
)
http_file(
name = "rusty_v8_146_4_0_aarch64_unknown_linux_gnu_archive",
downloaded_file_path = "librusty_v8_release_aarch64-unknown-linux-gnu.a.gz",
urls = [
"https://github.com/denoland/rusty_v8/releases/download/v146.4.0/librusty_v8_release_aarch64-unknown-linux-gnu.a.gz",
],
)
http_file(
name = "rusty_v8_146_4_0_aarch64_pc_windows_msvc_archive",
downloaded_file_path = "rusty_v8_release_aarch64-pc-windows-msvc.lib.gz",
urls = [
"https://github.com/denoland/rusty_v8/releases/download/v146.4.0/rusty_v8_release_aarch64-pc-windows-msvc.lib.gz",
],
)
http_file(
name = "rusty_v8_146_4_0_x86_64_apple_darwin_archive",
downloaded_file_path = "librusty_v8_release_x86_64-apple-darwin.a.gz",
urls = [
"https://github.com/denoland/rusty_v8/releases/download/v146.4.0/librusty_v8_release_x86_64-apple-darwin.a.gz",
],
)
http_file(
name = "rusty_v8_146_4_0_x86_64_unknown_linux_gnu_archive",
downloaded_file_path = "librusty_v8_release_x86_64-unknown-linux-gnu.a.gz",
urls = [
"https://github.com/denoland/rusty_v8/releases/download/v146.4.0/librusty_v8_release_x86_64-unknown-linux-gnu.a.gz",
],
)
http_file(
name = "rusty_v8_146_4_0_x86_64_pc_windows_msvc_archive",
downloaded_file_path = "rusty_v8_release_x86_64-pc-windows-msvc.lib.gz",
urls = [
"https://github.com/denoland/rusty_v8/releases/download/v146.4.0/rusty_v8_release_x86_64-pc-windows-msvc.lib.gz",
],
)
http_file(
name = "rusty_v8_146_4_0_aarch64_unknown_linux_musl_archive",
downloaded_file_path = "librusty_v8_release_aarch64-unknown-linux-musl.a.gz",
urls = [
"https://github.com/openai/codex/releases/download/rusty-v8-v146.4.0/librusty_v8_release_aarch64-unknown-linux-musl.a.gz",
],
)
http_file(
name = "rusty_v8_146_4_0_aarch64_unknown_linux_musl_binding",
downloaded_file_path = "src_binding_release_aarch64-unknown-linux-musl.rs",
urls = [
"https://github.com/openai/codex/releases/download/rusty-v8-v146.4.0/src_binding_release_aarch64-unknown-linux-musl.rs",
],
)
http_file(
name = "rusty_v8_146_4_0_x86_64_unknown_linux_musl_archive",
downloaded_file_path = "librusty_v8_release_x86_64-unknown-linux-musl.a.gz",
urls = [
"https://github.com/openai/codex/releases/download/rusty-v8-v146.4.0/librusty_v8_release_x86_64-unknown-linux-musl.a.gz",
],
)
http_file(
name = "rusty_v8_146_4_0_x86_64_unknown_linux_musl_binding",
downloaded_file_path = "src_binding_release_x86_64-unknown-linux-musl.rs",
urls = [
"https://github.com/openai/codex/releases/download/rusty-v8-v146.4.0/src_binding_release_x86_64-unknown-linux-musl.rs",
],
)
use_repo(crate, "crates")
bazel_dep(name = "libcap", version = "2.27.bcr.1")
rbe_platform_repository = use_repo_rule("//:rbe.bzl", "rbe_platform_repository")
rbe_platform_repository(

MODULE.bazel.lock (generated): diff suppressed because one or more lines are too long.


@@ -1,18 +1,3 @@
exports_files([
"clippy.toml",
"node-version.txt",
])
filegroup(
name = "workspace-files",
srcs = glob(
[
"*",
".cargo/**",
],
exclude = [
"BUILD.bazel",
],
),
visibility = ["//visibility:public"],
)

codex-rs/Cargo.lock (generated): diff suppressed because it is too large.


@@ -1,6 +1,5 @@
[workspace]
members = [
"analytics",
"backend-client",
"ansi-escape",
"async-utils",
@@ -12,25 +11,19 @@ members = [
"apply-patch",
"arg0",
"feedback",
"features",
"codex-backend-openapi-models",
"code-mode",
"cloud-requirements",
"cloud-tasks",
"cloud-tasks-client",
"cli",
"connectors",
"config",
"shell-command",
"shell-escalation",
"skills",
"core",
"core-skills",
"hooks",
"instructions",
"secrets",
"exec",
"exec-server",
"execpolicy",
"execpolicy-legacy",
"keyring-store",
@@ -43,18 +36,14 @@ members = [
"ollama",
"process-hardening",
"protocol",
"rollout",
"rmcp-client",
"responses-api-proxy",
"sandboxing",
"stdio-to-uds",
"otel",
"tui",
"tools",
"v8-poc",
"utils/absolute-path",
"utils/cargo-bin",
"git-utils",
"utils/git",
"utils/cache",
"utils/image",
"utils/json-to-toml",
@@ -69,23 +58,20 @@ members = [
"utils/sleep-inhibitor",
"utils/approval-presets",
"utils/oss",
"utils/output-truncation",
"utils/path-utils",
"utils/plugins",
"utils/fuzzy-match",
"utils/stream-parser",
"utils/template",
"codex-client",
"codex-api",
"state",
"terminal-detection",
"codex-experimental-api-macros",
"plugin",
"test-macros",
"package-manager",
"artifacts",
]
resolver = "2"
[workspace.package]
version = "0.0.0"
version = "0.113.0"
# Track the edition for all workspace crates in one place. Individual
# crates can still override this value, but keeping it here means new
# crates created with `cargo new -w ...` automatically inherit the 2024
@@ -97,9 +83,9 @@ license = "Apache-2.0"
# Internal
app_test_support = { path = "app-server/tests/common" }
codex-ansi-escape = { path = "ansi-escape" }
codex-analytics = { path = "analytics" }
codex-api = { path = "codex-api" }
codex-code-mode = { path = "code-mode" }
codex-artifacts = { path = "artifacts" }
codex-package-manager = { path = "package-manager" }
codex-app-server = { path = "app-server" }
codex-app-server-client = { path = "app-server-client" }
codex-app-server-protocol = { path = "app-server-protocol" }
@@ -112,20 +98,15 @@ codex-chatgpt = { path = "chatgpt" }
codex-cli = { path = "cli" }
codex-client = { path = "codex-client" }
codex-cloud-requirements = { path = "cloud-requirements" }
codex-connectors = { path = "connectors" }
codex-config = { path = "config" }
codex-core = { path = "core" }
codex-core-skills = { path = "core-skills" }
codex-exec = { path = "exec" }
codex-exec-server = { path = "exec-server" }
codex-execpolicy = { path = "execpolicy" }
codex-experimental-api-macros = { path = "codex-experimental-api-macros" }
codex-feedback = { path = "feedback" }
codex-features = { path = "features" }
codex-file-search = { path = "file-search" }
codex-git-utils = { path = "git-utils" }
codex-git = { path = "utils/git" }
codex-hooks = { path = "hooks" }
codex-instructions = { path = "instructions" }
codex-keyring-store = { path = "keyring-store" }
codex-linux-sandbox = { path = "linux-sandbox" }
codex-lmstudio = { path = "lmstudio" }
@@ -134,23 +115,18 @@ codex-mcp-server = { path = "mcp-server" }
codex-network-proxy = { path = "network-proxy" }
codex-ollama = { path = "ollama" }
codex-otel = { path = "otel" }
codex-plugin = { path = "plugin" }
codex-process-hardening = { path = "process-hardening" }
codex-protocol = { path = "protocol" }
codex-rollout = { path = "rollout" }
codex-responses-api-proxy = { path = "responses-api-proxy" }
codex-rmcp-client = { path = "rmcp-client" }
codex-sandboxing = { path = "sandboxing" }
codex-secrets = { path = "secrets" }
codex-shell-command = { path = "shell-command" }
codex-shell-escalation = { path = "shell-escalation" }
codex-skills = { path = "skills" }
codex-state = { path = "state" }
codex-stdio-to-uds = { path = "stdio-to-uds" }
codex-terminal-detection = { path = "terminal-detection" }
codex-tools = { path = "tools" }
codex-test-macros = { path = "test-macros" }
codex-tui = { path = "tui" }
codex-v8-poc = { path = "v8-poc" }
codex-utils-absolute-path = { path = "utils/absolute-path" }
codex-utils-approval-presets = { path = "utils/approval-presets" }
codex-utils-cache = { path = "utils/cache" }
@@ -162,16 +138,12 @@ codex-utils-home-dir = { path = "utils/home-dir" }
codex-utils-image = { path = "utils/image" }
codex-utils-json-to-toml = { path = "utils/json-to-toml" }
codex-utils-oss = { path = "utils/oss" }
codex-utils-output-truncation = { path = "utils/output-truncation" }
codex-utils-path = { path = "utils/path-utils" }
codex-utils-plugins = { path = "utils/plugins" }
codex-utils-pty = { path = "utils/pty" }
codex-utils-readiness = { path = "utils/readiness" }
codex-utils-rustls-provider = { path = "utils/rustls-provider" }
codex-utils-sandbox-summary = { path = "utils/sandbox-summary" }
codex-utils-sleep-inhibitor = { path = "utils/sleep-inhibitor" }
codex-utils-stream-parser = { path = "utils/stream-parser" }
codex-utils-template = { path = "utils/template" }
codex-utils-string = { path = "utils/string" }
codex-windows-sandbox = { path = "windows-sandbox-rs" }
core_test_support = { path = "core/tests/common" }
@@ -183,7 +155,7 @@ allocative = "0.3.3"
ansi-to-tui = "7.0.0"
anyhow = "1"
arboard = { version = "3", features = ["wayland-data-control"] }
arc-swap = "1.9.0"
askama = "0.15.4"
assert_cmd = "2"
assert_matches = "1.5.0"
async-channel = "2.3.1"
@@ -198,7 +170,6 @@ chrono = "0.4.43"
clap = "4"
clap_complete = "4"
color-eyre = "0.6.3"
constant_time_eq = "0.3.1"
crossbeam-channel = "0.5.15"
crossterm = "0.28.1"
csv = "1.3.1"
@@ -209,13 +180,14 @@ dirs = "6"
dotenvy = "0.15.7"
dunce = "1.0.4"
encoding_rs = "0.8.35"
fd-lock = "4.0.4"
env-flags = "0.1.1"
env_logger = "0.11.9"
eventsource-stream = "0.2.3"
flate2 = "1.1.4"
futures = { version = "0.3", default-features = false }
gethostname = "1.1.0"
globset = "0.4"
hmac = "0.12.1"
http = "1.3.1"
icu_decimal = "2.1"
icu_locale_core = "2.1"
@@ -228,7 +200,6 @@ indexmap = "2.12.0"
insta = "1.46.3"
inventory = "0.3.19"
itertools = "0.14.0"
jsonwebtoken = "9.3.1"
keyring = { version = "3.6", default-features = false }
landlock = "0.4.4"
lazy_static = "1"
@@ -255,7 +226,6 @@ portable-pty = "0.9.0"
predicates = "3"
pretty_assertions = "1.4.1"
pulldown-cmark = "0.10"
quick-xml = "0.38.4"
rand = "0.9"
ratatui = "0.29.0"
ratatui-macros = "0.6.0"
@@ -264,13 +234,10 @@ regex-lite = "0.1.8"
reqwest = "0.12"
rmcp = { version = "0.15.0", default-features = false }
runfiles = { git = "https://github.com/dzbarsky/rules_rust", rev = "b56cbaa8465e74127f1ea216f813cd377295ad81" }
v8 = "=146.4.0"
rustls = { version = "0.23", default-features = false, features = [
"ring",
"std",
] }
rustls-native-certs = "0.8.3"
rustls-pki-types = "1.14.0"
schemars = "0.8.22"
seccompiler = "0.5.0"
semver = "1.0"
@@ -303,6 +270,7 @@ supports-color = "3.0.2"
syntect = "5"
sys-locale = "0.3.2"
tempfile = "3.23.0"
tar = "0.4.44"
test-log = "0.2.19"
textwrap = "0.16.2"
thiserror = "2.0.17"
@@ -389,8 +357,7 @@ ignored = [
"icu_provider",
"openssl-sys",
"codex-utils-readiness",
"codex-utils-template",
"codex-v8-poc",
"codex-secrets"
]
[profile.release]


@@ -50,7 +50,7 @@ You can enable notifications by configuring a script that is run whenever the ag
### `codex exec` to run Codex programmatically/non-interactively
To run Codex non-interactively, run `codex exec PROMPT` (you can also pass the prompt via `stdin`) and Codex will work on your task until it decides that it is done and exits. If you provide both a prompt argument and piped stdin, Codex appends stdin as a `<stdin>` block after the prompt so patterns like `echo "my output" | codex exec "Summarize this concisely"` work naturally. Output is printed to the terminal directly. You can set the `RUST_LOG` environment variable to see more about what's going on.
To run Codex non-interactively, run `codex exec PROMPT` (you can also pass the prompt via `stdin`) and Codex will work on your task until it decides that it is done and exits. Output is printed to the terminal directly. You can set the `RUST_LOG` environment variable to see more about what's going on.
Use `codex exec --ephemeral ...` to run without persisting session rollout files to disk.
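For example (illustrative invocations; the exact `<stdin>` framing is an internal detail of how Codex composes the prompt):

# Prompt argument plus piped stdin: the piped text is appended
# after the prompt as a <stdin> block.
git diff | codex exec "Review this diff for bugs"

# Same pattern without persisting session rollout files to disk.
echo "my output" | codex exec --ephemeral "Summarize this concisely"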
### Experimenting with the Codex Sandbox


@@ -1,32 +0,0 @@
[package]
edition.workspace = true
license.workspace = true
name = "codex-analytics"
version.workspace = true
[lib]
doctest = false
name = "codex_analytics"
path = "src/lib.rs"
[lints]
workspace = true
[dependencies]
codex-app-server-protocol = { workspace = true }
codex-git-utils = { workspace = true }
codex-login = { workspace = true }
codex-plugin = { workspace = true }
codex-protocol = { workspace = true }
os_info = { workspace = true }
serde = { workspace = true, features = ["derive"] }
sha1 = { workspace = true }
tokio = { workspace = true, features = [
"macros",
"rt-multi-thread",
] }
tracing = { workspace = true, features = ["log"] }
[dev-dependencies]
pretty_assertions = { workspace = true }
serde_json = { workspace = true }


@@ -1,688 +0,0 @@
use crate::client::AnalyticsEventsQueue;
use crate::events::AppServerRpcTransport;
use crate::events::CodexAppMentionedEventRequest;
use crate::events::CodexAppServerClientMetadata;
use crate::events::CodexAppUsedEventRequest;
use crate::events::CodexPluginEventRequest;
use crate::events::CodexPluginUsedEventRequest;
use crate::events::CodexRuntimeMetadata;
use crate::events::ThreadInitializationMode;
use crate::events::ThreadInitializedEvent;
use crate::events::ThreadInitializedEventParams;
use crate::events::TrackEventRequest;
use crate::events::codex_app_metadata;
use crate::events::codex_plugin_metadata;
use crate::events::codex_plugin_used_metadata;
use crate::facts::AnalyticsFact;
use crate::facts::AppInvocation;
use crate::facts::AppMentionedInput;
use crate::facts::AppUsedInput;
use crate::facts::CustomAnalyticsFact;
use crate::facts::InvocationType;
use crate::facts::PluginState;
use crate::facts::PluginStateChangedInput;
use crate::facts::PluginUsedInput;
use crate::facts::SkillInvocation;
use crate::facts::SkillInvokedInput;
use crate::facts::TrackEventsContext;
use crate::reducer::AnalyticsReducer;
use crate::reducer::normalize_path_for_skill_id;
use crate::reducer::skill_id_for_local_skill;
use codex_app_server_protocol::ApprovalsReviewer as AppServerApprovalsReviewer;
use codex_app_server_protocol::AskForApproval as AppServerAskForApproval;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientResponse;
use codex_app_server_protocol::InitializeCapabilities;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::SandboxPolicy as AppServerSandboxPolicy;
use codex_app_server_protocol::SessionSource as AppServerSessionSource;
use codex_app_server_protocol::Thread;
use codex_app_server_protocol::ThreadResumeResponse;
use codex_app_server_protocol::ThreadStartResponse;
use codex_app_server_protocol::ThreadStatus as AppServerThreadStatus;
use codex_login::default_client::DEFAULT_ORIGINATOR;
use codex_login::default_client::originator;
use codex_plugin::AppConnectorId;
use codex_plugin::PluginCapabilitySummary;
use codex_plugin::PluginId;
use codex_plugin::PluginTelemetryMetadata;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::collections::HashSet;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::Mutex;
use tokio::sync::mpsc;
fn sample_thread(thread_id: &str, ephemeral: bool) -> Thread {
Thread {
id: thread_id.to_string(),
preview: "first prompt".to_string(),
ephemeral,
model_provider: "openai".to_string(),
created_at: 1,
updated_at: 2,
status: AppServerThreadStatus::Idle,
path: None,
cwd: PathBuf::from("/tmp"),
cli_version: "0.0.0".to_string(),
source: AppServerSessionSource::Exec,
agent_nickname: None,
agent_role: None,
git_info: None,
name: None,
turns: Vec::new(),
}
}
fn sample_thread_start_response(thread_id: &str, ephemeral: bool, model: &str) -> ClientResponse {
ClientResponse::ThreadStart {
request_id: RequestId::Integer(1),
response: ThreadStartResponse {
thread: sample_thread(thread_id, ephemeral),
model: model.to_string(),
model_provider: "openai".to_string(),
service_tier: None,
cwd: PathBuf::from("/tmp"),
approval_policy: AppServerAskForApproval::OnFailure,
approvals_reviewer: AppServerApprovalsReviewer::User,
sandbox: AppServerSandboxPolicy::DangerFullAccess,
reasoning_effort: None,
},
}
}
fn sample_thread_resume_response(thread_id: &str, ephemeral: bool, model: &str) -> ClientResponse {
ClientResponse::ThreadResume {
request_id: RequestId::Integer(2),
response: ThreadResumeResponse {
thread: sample_thread(thread_id, ephemeral),
model: model.to_string(),
model_provider: "openai".to_string(),
service_tier: None,
cwd: PathBuf::from("/tmp"),
approval_policy: AppServerAskForApproval::OnFailure,
approvals_reviewer: AppServerApprovalsReviewer::User,
sandbox: AppServerSandboxPolicy::DangerFullAccess,
reasoning_effort: None,
},
}
}
fn expected_absolute_path(path: &PathBuf) -> String {
std::fs::canonicalize(path)
.unwrap_or_else(|_| path.to_path_buf())
.to_string_lossy()
.replace('\\', "/")
}
#[test]
fn normalize_path_for_skill_id_repo_scoped_uses_relative_path() {
let repo_root = PathBuf::from("/repo/root");
let skill_path = PathBuf::from("/repo/root/.codex/skills/doc/SKILL.md");
let path = normalize_path_for_skill_id(
Some("https://example.com/repo.git"),
Some(repo_root.as_path()),
skill_path.as_path(),
);
assert_eq!(path, ".codex/skills/doc/SKILL.md");
}
#[test]
fn normalize_path_for_skill_id_user_scoped_uses_absolute_path() {
let skill_path = PathBuf::from("/Users/abc/.codex/skills/doc/SKILL.md");
let path = normalize_path_for_skill_id(
/*repo_url*/ None,
/*repo_root*/ None,
skill_path.as_path(),
);
let expected = expected_absolute_path(&skill_path);
assert_eq!(path, expected);
}
#[test]
fn normalize_path_for_skill_id_admin_scoped_uses_absolute_path() {
let skill_path = PathBuf::from("/etc/codex/skills/doc/SKILL.md");
let path = normalize_path_for_skill_id(
/*repo_url*/ None,
/*repo_root*/ None,
skill_path.as_path(),
);
let expected = expected_absolute_path(&skill_path);
assert_eq!(path, expected);
}
#[test]
fn normalize_path_for_skill_id_repo_root_not_in_skill_path_uses_absolute_path() {
let repo_root = PathBuf::from("/repo/root");
let skill_path = PathBuf::from("/other/path/.codex/skills/doc/SKILL.md");
let path = normalize_path_for_skill_id(
Some("https://example.com/repo.git"),
Some(repo_root.as_path()),
skill_path.as_path(),
);
let expected = expected_absolute_path(&skill_path);
assert_eq!(path, expected);
}
#[test]
fn app_mentioned_event_serializes_expected_shape() {
let tracking = TrackEventsContext {
model_slug: "gpt-5".to_string(),
thread_id: "thread-1".to_string(),
turn_id: "turn-1".to_string(),
};
let event = TrackEventRequest::AppMentioned(CodexAppMentionedEventRequest {
event_type: "codex_app_mentioned",
event_params: codex_app_metadata(
&tracking,
AppInvocation {
connector_id: Some("calendar".to_string()),
app_name: Some("Calendar".to_string()),
invocation_type: Some(InvocationType::Explicit),
},
),
});
let payload = serde_json::to_value(&event).expect("serialize app mentioned event");
assert_eq!(
payload,
json!({
"event_type": "codex_app_mentioned",
"event_params": {
"connector_id": "calendar",
"thread_id": "thread-1",
"turn_id": "turn-1",
"app_name": "Calendar",
"product_client_id": originator().value,
"invoke_type": "explicit",
"model_slug": "gpt-5"
}
})
);
}
#[test]
fn app_used_event_serializes_expected_shape() {
let tracking = TrackEventsContext {
model_slug: "gpt-5".to_string(),
thread_id: "thread-2".to_string(),
turn_id: "turn-2".to_string(),
};
let event = TrackEventRequest::AppUsed(CodexAppUsedEventRequest {
event_type: "codex_app_used",
event_params: codex_app_metadata(
&tracking,
AppInvocation {
connector_id: Some("drive".to_string()),
app_name: Some("Google Drive".to_string()),
invocation_type: Some(InvocationType::Implicit),
},
),
});
let payload = serde_json::to_value(&event).expect("serialize app used event");
assert_eq!(
payload,
json!({
"event_type": "codex_app_used",
"event_params": {
"connector_id": "drive",
"thread_id": "thread-2",
"turn_id": "turn-2",
"app_name": "Google Drive",
"product_client_id": originator().value,
"invoke_type": "implicit",
"model_slug": "gpt-5"
}
})
);
}
#[test]
fn app_used_dedupe_is_keyed_by_turn_and_connector() {
let (sender, _receiver) = mpsc::channel(1);
let queue = AnalyticsEventsQueue {
sender,
app_used_emitted_keys: Arc::new(Mutex::new(HashSet::new())),
plugin_used_emitted_keys: Arc::new(Mutex::new(HashSet::new())),
};
let app = AppInvocation {
connector_id: Some("calendar".to_string()),
app_name: Some("Calendar".to_string()),
invocation_type: Some(InvocationType::Implicit),
};
let turn_1 = TrackEventsContext {
model_slug: "gpt-5".to_string(),
thread_id: "thread-1".to_string(),
turn_id: "turn-1".to_string(),
};
let turn_2 = TrackEventsContext {
model_slug: "gpt-5".to_string(),
thread_id: "thread-1".to_string(),
turn_id: "turn-2".to_string(),
};
assert_eq!(queue.should_enqueue_app_used(&turn_1, &app), true);
assert_eq!(queue.should_enqueue_app_used(&turn_1, &app), false);
assert_eq!(queue.should_enqueue_app_used(&turn_2, &app), true);
}
#[test]
fn thread_initialized_event_serializes_expected_shape() {
let event = TrackEventRequest::ThreadInitialized(ThreadInitializedEvent {
event_type: "codex_thread_initialized",
event_params: ThreadInitializedEventParams {
thread_id: "thread-0".to_string(),
app_server_client: CodexAppServerClientMetadata {
product_client_id: DEFAULT_ORIGINATOR.to_string(),
client_name: Some("codex-tui".to_string()),
client_version: Some("1.0.0".to_string()),
rpc_transport: AppServerRpcTransport::Stdio,
experimental_api_enabled: Some(true),
},
runtime: CodexRuntimeMetadata {
codex_rs_version: "0.1.0".to_string(),
runtime_os: "macos".to_string(),
runtime_os_version: "15.3.1".to_string(),
runtime_arch: "aarch64".to_string(),
},
model: "gpt-5".to_string(),
ephemeral: true,
thread_source: Some("user"),
initialization_mode: ThreadInitializationMode::New,
subagent_source: None,
parent_thread_id: None,
created_at: 1,
},
});
let payload = serde_json::to_value(&event).expect("serialize thread initialized event");
assert_eq!(
payload,
json!({
"event_type": "codex_thread_initialized",
"event_params": {
"thread_id": "thread-0",
"app_server_client": {
"product_client_id": DEFAULT_ORIGINATOR,
"client_name": "codex-tui",
"client_version": "1.0.0",
"rpc_transport": "stdio",
"experimental_api_enabled": true
},
"runtime": {
"codex_rs_version": "0.1.0",
"runtime_os": "macos",
"runtime_os_version": "15.3.1",
"runtime_arch": "aarch64"
},
"model": "gpt-5",
"ephemeral": true,
"thread_source": "user",
"initialization_mode": "new",
"subagent_source": null,
"parent_thread_id": null,
"created_at": 1
}
})
);
}
#[tokio::test]
async fn initialize_caches_client_and_thread_lifecycle_publishes_once_initialized() {
let mut reducer = AnalyticsReducer::default();
let mut events = Vec::new();
reducer
.ingest(
AnalyticsFact::Response {
connection_id: 7,
response: Box::new(sample_thread_start_response(
"thread-no-client",
/*ephemeral*/ false,
"gpt-5",
)),
},
&mut events,
)
.await;
assert!(events.is_empty(), "thread events should require initialize");
reducer
.ingest(
AnalyticsFact::Initialize {
connection_id: 7,
params: InitializeParams {
client_info: ClientInfo {
name: "codex-tui".to_string(),
title: None,
version: "1.0.0".to_string(),
},
capabilities: Some(InitializeCapabilities {
experimental_api: false,
opt_out_notification_methods: None,
}),
},
product_client_id: DEFAULT_ORIGINATOR.to_string(),
runtime: CodexRuntimeMetadata {
codex_rs_version: "0.99.0".to_string(),
runtime_os: "linux".to_string(),
runtime_os_version: "24.04".to_string(),
runtime_arch: "x86_64".to_string(),
},
rpc_transport: AppServerRpcTransport::Websocket,
},
&mut events,
)
.await;
assert!(events.is_empty(), "initialize should not publish by itself");
reducer
.ingest(
AnalyticsFact::Response {
connection_id: 7,
response: Box::new(sample_thread_resume_response(
"thread-1", /*ephemeral*/ true, "gpt-5",
)),
},
&mut events,
)
.await;
let payload = serde_json::to_value(&events).expect("serialize events");
assert_eq!(payload.as_array().expect("events array").len(), 1);
assert_eq!(payload[0]["event_type"], "codex_thread_initialized");
assert_eq!(
payload[0]["event_params"]["app_server_client"]["product_client_id"],
DEFAULT_ORIGINATOR
);
assert_eq!(
payload[0]["event_params"]["app_server_client"]["client_name"],
"codex-tui"
);
assert_eq!(
payload[0]["event_params"]["app_server_client"]["client_version"],
"1.0.0"
);
assert_eq!(
payload[0]["event_params"]["app_server_client"]["rpc_transport"],
"websocket"
);
assert_eq!(
payload[0]["event_params"]["app_server_client"]["experimental_api_enabled"],
false
);
assert_eq!(
payload[0]["event_params"]["runtime"]["codex_rs_version"],
"0.99.0"
);
assert_eq!(payload[0]["event_params"]["runtime"]["runtime_os"], "linux");
assert_eq!(
payload[0]["event_params"]["runtime"]["runtime_os_version"],
"24.04"
);
assert_eq!(
payload[0]["event_params"]["runtime"]["runtime_arch"],
"x86_64"
);
assert_eq!(payload[0]["event_params"]["initialization_mode"], "resumed");
assert_eq!(payload[0]["event_params"]["thread_source"], "user");
assert_eq!(payload[0]["event_params"]["subagent_source"], json!(null));
assert_eq!(payload[0]["event_params"]["parent_thread_id"], json!(null));
}
#[test]
fn plugin_used_event_serializes_expected_shape() {
let tracking = TrackEventsContext {
model_slug: "gpt-5".to_string(),
thread_id: "thread-3".to_string(),
turn_id: "turn-3".to_string(),
};
let event = TrackEventRequest::PluginUsed(CodexPluginUsedEventRequest {
event_type: "codex_plugin_used",
event_params: codex_plugin_used_metadata(&tracking, sample_plugin_metadata()),
});
let payload = serde_json::to_value(&event).expect("serialize plugin used event");
assert_eq!(
payload,
json!({
"event_type": "codex_plugin_used",
"event_params": {
"plugin_id": "sample@test",
"plugin_name": "sample",
"marketplace_name": "test",
"has_skills": true,
"mcp_server_count": 2,
"connector_ids": ["calendar", "drive"],
"product_client_id": originator().value,
"thread_id": "thread-3",
"turn_id": "turn-3",
"model_slug": "gpt-5"
}
})
);
}
#[test]
fn plugin_management_event_serializes_expected_shape() {
let event = TrackEventRequest::PluginInstalled(CodexPluginEventRequest {
event_type: "codex_plugin_installed",
event_params: codex_plugin_metadata(sample_plugin_metadata()),
});
let payload = serde_json::to_value(&event).expect("serialize plugin installed event");
assert_eq!(
payload,
json!({
"event_type": "codex_plugin_installed",
"event_params": {
"plugin_id": "sample@test",
"plugin_name": "sample",
"marketplace_name": "test",
"has_skills": true,
"mcp_server_count": 2,
"connector_ids": ["calendar", "drive"],
"product_client_id": originator().value
}
})
);
}
#[test]
fn plugin_used_dedupe_is_keyed_by_turn_and_plugin() {
let (sender, _receiver) = mpsc::channel(1);
let queue = AnalyticsEventsQueue {
sender,
app_used_emitted_keys: Arc::new(Mutex::new(HashSet::new())),
plugin_used_emitted_keys: Arc::new(Mutex::new(HashSet::new())),
};
let plugin = sample_plugin_metadata();
let turn_1 = TrackEventsContext {
model_slug: "gpt-5".to_string(),
thread_id: "thread-1".to_string(),
turn_id: "turn-1".to_string(),
};
let turn_2 = TrackEventsContext {
model_slug: "gpt-5".to_string(),
thread_id: "thread-1".to_string(),
turn_id: "turn-2".to_string(),
};
assert!(queue.should_enqueue_plugin_used(&turn_1, &plugin));
assert!(!queue.should_enqueue_plugin_used(&turn_1, &plugin));
assert!(queue.should_enqueue_plugin_used(&turn_2, &plugin));
}
#[tokio::test]
async fn reducer_ingests_skill_invoked_fact() {
let mut reducer = AnalyticsReducer::default();
let mut events = Vec::new();
let tracking = TrackEventsContext {
model_slug: "gpt-5".to_string(),
thread_id: "thread-1".to_string(),
turn_id: "turn-1".to_string(),
};
let skill_path = PathBuf::from("/Users/abc/.codex/skills/doc/SKILL.md");
let expected_skill_id = skill_id_for_local_skill(
/*repo_url*/ None,
/*repo_root*/ None,
skill_path.as_path(),
"doc",
);
reducer
.ingest(
AnalyticsFact::Custom(CustomAnalyticsFact::SkillInvoked(SkillInvokedInput {
tracking,
invocations: vec![SkillInvocation {
skill_name: "doc".to_string(),
skill_scope: codex_protocol::protocol::SkillScope::User,
skill_path,
invocation_type: InvocationType::Explicit,
}],
})),
&mut events,
)
.await;
let payload = serde_json::to_value(&events).expect("serialize events");
assert_eq!(
payload,
json!([{
"event_type": "skill_invocation",
"skill_id": expected_skill_id,
"skill_name": "doc",
"event_params": {
"product_client_id": originator().value,
"skill_scope": "user",
"repo_url": null,
"thread_id": "thread-1",
"invoke_type": "explicit",
"model_slug": "gpt-5"
}
}])
);
}
#[tokio::test]
async fn reducer_ingests_app_and_plugin_facts() {
let mut reducer = AnalyticsReducer::default();
let mut events = Vec::new();
let tracking = TrackEventsContext {
model_slug: "gpt-5".to_string(),
thread_id: "thread-1".to_string(),
turn_id: "turn-1".to_string(),
};
reducer
.ingest(
AnalyticsFact::Custom(CustomAnalyticsFact::AppMentioned(AppMentionedInput {
tracking: tracking.clone(),
mentions: vec![AppInvocation {
connector_id: Some("calendar".to_string()),
app_name: Some("Calendar".to_string()),
invocation_type: Some(InvocationType::Explicit),
}],
})),
&mut events,
)
.await;
reducer
.ingest(
AnalyticsFact::Custom(CustomAnalyticsFact::AppUsed(AppUsedInput {
tracking: tracking.clone(),
app: AppInvocation {
connector_id: Some("drive".to_string()),
app_name: Some("Drive".to_string()),
invocation_type: Some(InvocationType::Implicit),
},
})),
&mut events,
)
.await;
reducer
.ingest(
AnalyticsFact::Custom(CustomAnalyticsFact::PluginUsed(PluginUsedInput {
tracking,
plugin: sample_plugin_metadata(),
})),
&mut events,
)
.await;
let payload = serde_json::to_value(&events).expect("serialize events");
assert_eq!(payload.as_array().expect("events array").len(), 3);
assert_eq!(payload[0]["event_type"], "codex_app_mentioned");
assert_eq!(payload[1]["event_type"], "codex_app_used");
assert_eq!(payload[2]["event_type"], "codex_plugin_used");
}
#[tokio::test]
async fn reducer_ingests_plugin_state_changed_fact() {
let mut reducer = AnalyticsReducer::default();
let mut events = Vec::new();
reducer
.ingest(
AnalyticsFact::Custom(CustomAnalyticsFact::PluginStateChanged(
PluginStateChangedInput {
plugin: sample_plugin_metadata(),
state: PluginState::Disabled,
},
)),
&mut events,
)
.await;
let payload = serde_json::to_value(&events).expect("serialize events");
assert_eq!(
payload,
json!([{
"event_type": "codex_plugin_disabled",
"event_params": {
"plugin_id": "sample@test",
"plugin_name": "sample",
"marketplace_name": "test",
"has_skills": true,
"mcp_server_count": 2,
"connector_ids": ["calendar", "drive"],
"product_client_id": originator().value
}
}])
);
}
fn sample_plugin_metadata() -> PluginTelemetryMetadata {
PluginTelemetryMetadata {
plugin_id: PluginId::parse("sample@test").expect("valid plugin id"),
capability_summary: Some(PluginCapabilitySummary {
config_name: "sample@test".to_string(),
display_name: "sample".to_string(),
description: None,
has_skills: true,
mcp_server_names: vec!["mcp-1".to_string(), "mcp-2".to_string()],
app_connector_ids: vec![
AppConnectorId("calendar".to_string()),
AppConnectorId("drive".to_string()),
],
}),
}
}

@@ -1,272 +0,0 @@
use crate::events::AppServerRpcTransport;
use crate::events::TrackEventRequest;
use crate::events::TrackEventsRequest;
use crate::events::current_runtime_metadata;
use crate::facts::AnalyticsFact;
use crate::facts::AppInvocation;
use crate::facts::AppMentionedInput;
use crate::facts::AppUsedInput;
use crate::facts::CustomAnalyticsFact;
use crate::facts::PluginState;
use crate::facts::PluginStateChangedInput;
use crate::facts::SkillInvocation;
use crate::facts::SkillInvokedInput;
use crate::facts::TrackEventsContext;
use crate::reducer::AnalyticsReducer;
use codex_app_server_protocol::ClientResponse;
use codex_app_server_protocol::InitializeParams;
use codex_login::AuthManager;
use codex_login::default_client::create_client;
use codex_plugin::PluginTelemetryMetadata;
use std::collections::HashSet;
use std::sync::Arc;
use std::sync::Mutex;
use std::time::Duration;
use tokio::sync::mpsc;
const ANALYTICS_EVENTS_QUEUE_SIZE: usize = 256;
const ANALYTICS_EVENTS_TIMEOUT: Duration = Duration::from_secs(10);
const ANALYTICS_EVENT_DEDUPE_MAX_KEYS: usize = 4096;
#[derive(Clone)]
pub(crate) struct AnalyticsEventsQueue {
pub(crate) sender: mpsc::Sender<AnalyticsFact>,
pub(crate) app_used_emitted_keys: Arc<Mutex<HashSet<(String, String)>>>,
pub(crate) plugin_used_emitted_keys: Arc<Mutex<HashSet<(String, String)>>>,
}
#[derive(Clone)]
pub struct AnalyticsEventsClient {
queue: AnalyticsEventsQueue,
analytics_enabled: Option<bool>,
}
impl AnalyticsEventsQueue {
pub(crate) fn new(auth_manager: Arc<AuthManager>, base_url: String) -> Self {
let (sender, mut receiver) = mpsc::channel(ANALYTICS_EVENTS_QUEUE_SIZE);
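// A single background task owns the reducer: it drains queued facts,
// folds them into track-event requests, and posts each batch upstream.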
tokio::spawn(async move {
let mut reducer = AnalyticsReducer::default();
while let Some(input) = receiver.recv().await {
let mut events = Vec::new();
reducer.ingest(input, &mut events).await;
send_track_events(&auth_manager, &base_url, events).await;
}
});
Self {
sender,
app_used_emitted_keys: Arc::new(Mutex::new(HashSet::new())),
plugin_used_emitted_keys: Arc::new(Mutex::new(HashSet::new())),
}
}
fn try_send(&self, input: AnalyticsFact) {
if self.sender.try_send(input).is_err() {
// TODO: add a metric for this
tracing::warn!("dropping analytics events: queue is full");
}
}
pub(crate) fn should_enqueue_app_used(
&self,
tracking: &TrackEventsContext,
app: &AppInvocation,
) -> bool {
let Some(connector_id) = app.connector_id.as_ref() else {
return true;
};
let mut emitted = self
.app_used_emitted_keys
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
if emitted.len() >= ANALYTICS_EVENT_DEDUPE_MAX_KEYS {
emitted.clear();
}
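// `HashSet::insert` returns `true` only for a first-seen key, so repeat
// (turn_id, connector_id) pairs are deduplicated until the set is cleared.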
emitted.insert((tracking.turn_id.clone(), connector_id.clone()))
}
pub(crate) fn should_enqueue_plugin_used(
&self,
tracking: &TrackEventsContext,
plugin: &PluginTelemetryMetadata,
) -> bool {
let mut emitted = self
.plugin_used_emitted_keys
.lock()
.unwrap_or_else(std::sync::PoisonError::into_inner);
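// Bounded dedupe: once the set reaches the cap it is cleared wholesale, a
// cheap reset that may re-emit a few duplicates instead of tracking LRU order.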
if emitted.len() >= ANALYTICS_EVENT_DEDUPE_MAX_KEYS {
emitted.clear();
}
emitted.insert((tracking.turn_id.clone(), plugin.plugin_id.as_key()))
}
}
impl AnalyticsEventsClient {
pub fn new(
auth_manager: Arc<AuthManager>,
base_url: String,
analytics_enabled: Option<bool>,
) -> Self {
Self {
queue: AnalyticsEventsQueue::new(Arc::clone(&auth_manager), base_url),
analytics_enabled,
}
}
pub fn track_skill_invocations(
&self,
tracking: TrackEventsContext,
invocations: Vec<SkillInvocation>,
) {
if invocations.is_empty() {
return;
}
self.record_fact(AnalyticsFact::Custom(CustomAnalyticsFact::SkillInvoked(
SkillInvokedInput {
tracking,
invocations,
},
)));
}
pub fn track_initialize(
&self,
connection_id: u64,
params: InitializeParams,
product_client_id: String,
rpc_transport: AppServerRpcTransport,
) {
self.record_fact(AnalyticsFact::Initialize {
connection_id,
params,
product_client_id,
runtime: current_runtime_metadata(),
rpc_transport,
});
}
pub fn track_app_mentioned(&self, tracking: TrackEventsContext, mentions: Vec<AppInvocation>) {
if mentions.is_empty() {
return;
}
self.record_fact(AnalyticsFact::Custom(CustomAnalyticsFact::AppMentioned(
AppMentionedInput { tracking, mentions },
)));
}
pub fn track_app_used(&self, tracking: TrackEventsContext, app: AppInvocation) {
if !self.queue.should_enqueue_app_used(&tracking, &app) {
return;
}
self.record_fact(AnalyticsFact::Custom(CustomAnalyticsFact::AppUsed(
AppUsedInput { tracking, app },
)));
}
pub fn track_plugin_used(&self, tracking: TrackEventsContext, plugin: PluginTelemetryMetadata) {
if !self.queue.should_enqueue_plugin_used(&tracking, &plugin) {
return;
}
self.record_fact(AnalyticsFact::Custom(CustomAnalyticsFact::PluginUsed(
crate::facts::PluginUsedInput { tracking, plugin },
)));
}
pub fn track_plugin_installed(&self, plugin: PluginTelemetryMetadata) {
self.record_fact(AnalyticsFact::Custom(
CustomAnalyticsFact::PluginStateChanged(PluginStateChangedInput {
plugin,
state: PluginState::Installed,
}),
));
}
pub fn track_plugin_uninstalled(&self, plugin: PluginTelemetryMetadata) {
self.record_fact(AnalyticsFact::Custom(
CustomAnalyticsFact::PluginStateChanged(PluginStateChangedInput {
plugin,
state: PluginState::Uninstalled,
}),
));
}
pub fn track_plugin_enabled(&self, plugin: PluginTelemetryMetadata) {
self.record_fact(AnalyticsFact::Custom(
CustomAnalyticsFact::PluginStateChanged(PluginStateChangedInput {
plugin,
state: PluginState::Enabled,
}),
));
}
pub fn track_plugin_disabled(&self, plugin: PluginTelemetryMetadata) {
self.record_fact(AnalyticsFact::Custom(
CustomAnalyticsFact::PluginStateChanged(PluginStateChangedInput {
plugin,
state: PluginState::Disabled,
}),
));
}
pub(crate) fn record_fact(&self, input: AnalyticsFact) {
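// `analytics_enabled: None` is treated as enabled; only an explicit
// opt-out (`Some(false)`) suppresses fact recording.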
if self.analytics_enabled == Some(false) {
return;
}
self.queue.try_send(input);
}
pub fn track_response(&self, connection_id: u64, response: ClientResponse) {
self.record_fact(AnalyticsFact::Response {
connection_id,
response: Box::new(response),
});
}
}
async fn send_track_events(
auth_manager: &AuthManager,
base_url: &str,
events: Vec<TrackEventRequest>,
) {
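// Only ChatGPT-authenticated sessions report analytics; missing auth,
// non-ChatGPT auth, token errors, or a missing account id drop the batch.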
if events.is_empty() {
return;
}
let Some(auth) = auth_manager.auth().await else {
return;
};
if !auth.is_chatgpt_auth() {
return;
}
let access_token = match auth.get_token() {
Ok(token) => token,
Err(_) => return,
};
let Some(account_id) = auth.get_account_id() else {
return;
};
let base_url = base_url.trim_end_matches('/');
let url = format!("{base_url}/codex/analytics-events/events");
let payload = TrackEventsRequest { events };
let response = create_client()
.post(&url)
.timeout(ANALYTICS_EVENTS_TIMEOUT)
.bearer_auth(&access_token)
.header("chatgpt-account-id", &account_id)
.header("Content-Type", "application/json")
.json(&payload)
.send()
.await;
match response {
Ok(response) if response.status().is_success() => {}
Ok(response) => {
let status = response.status();
let body = response.text().await.unwrap_or_default();
tracing::warn!("events failed with status {status}: {body}");
}
Err(err) => {
tracing::warn!("failed to send events request: {err}");
}
}
}

@@ -1,230 +0,0 @@
use crate::facts::AppInvocation;
use crate::facts::InvocationType;
use crate::facts::PluginState;
use crate::facts::TrackEventsContext;
use codex_login::default_client::originator;
use codex_plugin::PluginTelemetryMetadata;
use codex_protocol::protocol::SessionSource;
use serde::Serialize;
#[derive(Clone, Copy, Debug, Serialize)]
#[serde(rename_all = "snake_case")]
pub enum AppServerRpcTransport {
Stdio,
Websocket,
InProcess,
}
#[derive(Clone, Copy, Debug, Serialize)]
#[serde(rename_all = "snake_case")]
pub(crate) enum ThreadInitializationMode {
New,
Forked,
Resumed,
}
#[derive(Serialize)]
pub(crate) struct TrackEventsRequest {
pub(crate) events: Vec<TrackEventRequest>,
}
#[derive(Serialize)]
#[serde(untagged)]
pub(crate) enum TrackEventRequest {
SkillInvocation(SkillInvocationEventRequest),
ThreadInitialized(ThreadInitializedEvent),
AppMentioned(CodexAppMentionedEventRequest),
AppUsed(CodexAppUsedEventRequest),
PluginUsed(CodexPluginUsedEventRequest),
PluginInstalled(CodexPluginEventRequest),
PluginUninstalled(CodexPluginEventRequest),
PluginEnabled(CodexPluginEventRequest),
PluginDisabled(CodexPluginEventRequest),
}
#[derive(Serialize)]
pub(crate) struct SkillInvocationEventRequest {
pub(crate) event_type: &'static str,
pub(crate) skill_id: String,
pub(crate) skill_name: String,
pub(crate) event_params: SkillInvocationEventParams,
}
#[derive(Serialize)]
pub(crate) struct SkillInvocationEventParams {
pub(crate) product_client_id: Option<String>,
pub(crate) skill_scope: Option<String>,
pub(crate) repo_url: Option<String>,
pub(crate) thread_id: Option<String>,
pub(crate) invoke_type: Option<InvocationType>,
pub(crate) model_slug: Option<String>,
}
#[derive(Clone, Serialize)]
pub(crate) struct CodexAppServerClientMetadata {
pub(crate) product_client_id: String,
pub(crate) client_name: Option<String>,
pub(crate) client_version: Option<String>,
pub(crate) rpc_transport: AppServerRpcTransport,
pub(crate) experimental_api_enabled: Option<bool>,
}
#[derive(Clone, Serialize)]
pub(crate) struct CodexRuntimeMetadata {
pub(crate) codex_rs_version: String,
pub(crate) runtime_os: String,
pub(crate) runtime_os_version: String,
pub(crate) runtime_arch: String,
}
#[derive(Serialize)]
pub(crate) struct ThreadInitializedEventParams {
pub(crate) thread_id: String,
pub(crate) app_server_client: CodexAppServerClientMetadata,
pub(crate) runtime: CodexRuntimeMetadata,
pub(crate) model: String,
pub(crate) ephemeral: bool,
pub(crate) thread_source: Option<&'static str>,
pub(crate) initialization_mode: ThreadInitializationMode,
pub(crate) subagent_source: Option<String>,
pub(crate) parent_thread_id: Option<String>,
pub(crate) created_at: u64,
}
#[derive(Serialize)]
pub(crate) struct ThreadInitializedEvent {
pub(crate) event_type: &'static str,
pub(crate) event_params: ThreadInitializedEventParams,
}
#[derive(Serialize)]
pub(crate) struct CodexAppMetadata {
pub(crate) connector_id: Option<String>,
pub(crate) thread_id: Option<String>,
pub(crate) turn_id: Option<String>,
pub(crate) app_name: Option<String>,
pub(crate) product_client_id: Option<String>,
pub(crate) invoke_type: Option<InvocationType>,
pub(crate) model_slug: Option<String>,
}
#[derive(Serialize)]
pub(crate) struct CodexAppMentionedEventRequest {
pub(crate) event_type: &'static str,
pub(crate) event_params: CodexAppMetadata,
}
#[derive(Serialize)]
pub(crate) struct CodexAppUsedEventRequest {
pub(crate) event_type: &'static str,
pub(crate) event_params: CodexAppMetadata,
}
#[derive(Serialize)]
pub(crate) struct CodexPluginMetadata {
pub(crate) plugin_id: Option<String>,
pub(crate) plugin_name: Option<String>,
pub(crate) marketplace_name: Option<String>,
pub(crate) has_skills: Option<bool>,
pub(crate) mcp_server_count: Option<usize>,
pub(crate) connector_ids: Option<Vec<String>>,
pub(crate) product_client_id: Option<String>,
}
#[derive(Serialize)]
pub(crate) struct CodexPluginUsedMetadata {
#[serde(flatten)]
pub(crate) plugin: CodexPluginMetadata,
pub(crate) thread_id: Option<String>,
pub(crate) turn_id: Option<String>,
pub(crate) model_slug: Option<String>,
}
#[derive(Serialize)]
pub(crate) struct CodexPluginEventRequest {
pub(crate) event_type: &'static str,
pub(crate) event_params: CodexPluginMetadata,
}
#[derive(Serialize)]
pub(crate) struct CodexPluginUsedEventRequest {
pub(crate) event_type: &'static str,
pub(crate) event_params: CodexPluginUsedMetadata,
}
pub(crate) fn plugin_state_event_type(state: PluginState) -> &'static str {
match state {
PluginState::Installed => "codex_plugin_installed",
PluginState::Uninstalled => "codex_plugin_uninstalled",
PluginState::Enabled => "codex_plugin_enabled",
PluginState::Disabled => "codex_plugin_disabled",
}
}
pub(crate) fn codex_app_metadata(
tracking: &TrackEventsContext,
app: AppInvocation,
) -> CodexAppMetadata {
CodexAppMetadata {
connector_id: app.connector_id,
thread_id: Some(tracking.thread_id.clone()),
turn_id: Some(tracking.turn_id.clone()),
app_name: app.app_name,
product_client_id: Some(originator().value),
invoke_type: app.invocation_type,
model_slug: Some(tracking.model_slug.clone()),
}
}
pub(crate) fn codex_plugin_metadata(plugin: PluginTelemetryMetadata) -> CodexPluginMetadata {
let capability_summary = plugin.capability_summary;
CodexPluginMetadata {
plugin_id: Some(plugin.plugin_id.as_key()),
plugin_name: Some(plugin.plugin_id.plugin_name),
marketplace_name: Some(plugin.plugin_id.marketplace_name),
has_skills: capability_summary
.as_ref()
.map(|summary| summary.has_skills),
mcp_server_count: capability_summary
.as_ref()
.map(|summary| summary.mcp_server_names.len()),
connector_ids: capability_summary.map(|summary| {
summary
.app_connector_ids
.into_iter()
.map(|connector_id| connector_id.0)
.collect()
}),
product_client_id: Some(originator().value),
}
}
pub(crate) fn codex_plugin_used_metadata(
tracking: &TrackEventsContext,
plugin: PluginTelemetryMetadata,
) -> CodexPluginUsedMetadata {
CodexPluginUsedMetadata {
plugin: codex_plugin_metadata(plugin),
thread_id: Some(tracking.thread_id.clone()),
turn_id: Some(tracking.turn_id.clone()),
model_slug: Some(tracking.model_slug.clone()),
}
}
pub(crate) fn thread_source_name(thread_source: &SessionSource) -> Option<&'static str> {
match thread_source {
SessionSource::Cli | SessionSource::VSCode | SessionSource::Exec => Some("user"),
SessionSource::SubAgent(_) => Some("subagent"),
SessionSource::Mcp | SessionSource::Custom(_) | SessionSource::Unknown => None,
}
}
pub(crate) fn current_runtime_metadata() -> CodexRuntimeMetadata {
let os_info = os_info::get();
CodexRuntimeMetadata {
codex_rs_version: env!("CARGO_PKG_VERSION").to_string(),
runtime_os: std::env::consts::OS.to_string(),
runtime_os_version: os_info.version().to_string(),
runtime_arch: std::env::consts::ARCH.to_string(),
}
}

@@ -1,116 +0,0 @@
use crate::events::AppServerRpcTransport;
use crate::events::CodexRuntimeMetadata;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::ClientResponse;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::ServerNotification;
use codex_plugin::PluginTelemetryMetadata;
use codex_protocol::protocol::SkillScope;
use serde::Serialize;
use std::path::PathBuf;
#[derive(Clone)]
pub struct TrackEventsContext {
pub model_slug: String,
pub thread_id: String,
pub turn_id: String,
}
pub fn build_track_events_context(
model_slug: String,
thread_id: String,
turn_id: String,
) -> TrackEventsContext {
TrackEventsContext {
model_slug,
thread_id,
turn_id,
}
}
#[derive(Clone, Debug)]
pub struct SkillInvocation {
pub skill_name: String,
pub skill_scope: SkillScope,
pub skill_path: PathBuf,
pub invocation_type: InvocationType,
}
#[derive(Clone, Copy, Debug, Serialize)]
#[serde(rename_all = "lowercase")]
pub enum InvocationType {
Explicit,
Implicit,
}
pub struct AppInvocation {
pub connector_id: Option<String>,
pub app_name: Option<String>,
pub invocation_type: Option<InvocationType>,
}
#[allow(dead_code)]
pub(crate) enum AnalyticsFact {
Initialize {
connection_id: u64,
params: InitializeParams,
product_client_id: String,
runtime: CodexRuntimeMetadata,
rpc_transport: AppServerRpcTransport,
},
Request {
connection_id: u64,
request_id: RequestId,
request: Box<ClientRequest>,
},
Response {
connection_id: u64,
response: Box<ClientResponse>,
},
Notification(Box<ServerNotification>),
// Facts that do not naturally exist on the app-server protocol surface, or
// would require non-trivial protocol reshaping on this branch.
Custom(CustomAnalyticsFact),
}
pub(crate) enum CustomAnalyticsFact {
SkillInvoked(SkillInvokedInput),
AppMentioned(AppMentionedInput),
AppUsed(AppUsedInput),
PluginUsed(PluginUsedInput),
PluginStateChanged(PluginStateChangedInput),
}
pub(crate) struct SkillInvokedInput {
pub tracking: TrackEventsContext,
pub invocations: Vec<SkillInvocation>,
}
pub(crate) struct AppMentionedInput {
pub tracking: TrackEventsContext,
pub mentions: Vec<AppInvocation>,
}
pub(crate) struct AppUsedInput {
pub tracking: TrackEventsContext,
pub app: AppInvocation,
}
pub(crate) struct PluginUsedInput {
pub tracking: TrackEventsContext,
pub plugin: PluginTelemetryMetadata,
}
pub(crate) struct PluginStateChangedInput {
pub plugin: PluginTelemetryMetadata,
pub state: PluginState,
}
#[derive(Clone, Copy)]
pub(crate) enum PluginState {
Installed,
Uninstalled,
Enabled,
Disabled,
}

@@ -1,15 +0,0 @@
mod client;
mod events;
mod facts;
mod reducer;
pub use client::AnalyticsEventsClient;
pub use events::AppServerRpcTransport;
pub use facts::AppInvocation;
pub use facts::InvocationType;
pub use facts::SkillInvocation;
pub use facts::TrackEventsContext;
pub use facts::build_track_events_context;
#[cfg(test)]
mod analytics_client_tests;

@@ -1,305 +0,0 @@
use crate::events::AppServerRpcTransport;
use crate::events::CodexAppMentionedEventRequest;
use crate::events::CodexAppServerClientMetadata;
use crate::events::CodexAppUsedEventRequest;
use crate::events::CodexPluginEventRequest;
use crate::events::CodexPluginUsedEventRequest;
use crate::events::CodexRuntimeMetadata;
use crate::events::SkillInvocationEventParams;
use crate::events::SkillInvocationEventRequest;
use crate::events::ThreadInitializationMode;
use crate::events::ThreadInitializedEvent;
use crate::events::ThreadInitializedEventParams;
use crate::events::TrackEventRequest;
use crate::events::codex_app_metadata;
use crate::events::codex_plugin_metadata;
use crate::events::codex_plugin_used_metadata;
use crate::events::plugin_state_event_type;
use crate::events::thread_source_name;
use crate::facts::AnalyticsFact;
use crate::facts::AppMentionedInput;
use crate::facts::AppUsedInput;
use crate::facts::CustomAnalyticsFact;
use crate::facts::PluginState;
use crate::facts::PluginStateChangedInput;
use crate::facts::PluginUsedInput;
use crate::facts::SkillInvokedInput;
use codex_app_server_protocol::ClientResponse;
use codex_app_server_protocol::InitializeParams;
use codex_git_utils::collect_git_info;
use codex_git_utils::get_git_repo_root;
use codex_login::default_client::originator;
use codex_protocol::protocol::SessionSource;
use codex_protocol::protocol::SkillScope;
use sha1::Digest;
use std::collections::HashMap;
use std::path::Path;
#[derive(Default)]
pub(crate) struct AnalyticsReducer {
connections: HashMap<u64, ConnectionState>,
}
struct ConnectionState {
app_server_client: CodexAppServerClientMetadata,
runtime: CodexRuntimeMetadata,
}
impl AnalyticsReducer {
pub(crate) async fn ingest(&mut self, input: AnalyticsFact, out: &mut Vec<TrackEventRequest>) {
match input {
AnalyticsFact::Initialize {
connection_id,
params,
product_client_id,
runtime,
rpc_transport,
} => {
self.ingest_initialize(
connection_id,
params,
product_client_id,
runtime,
rpc_transport,
);
}
AnalyticsFact::Request {
connection_id: _connection_id,
request_id: _request_id,
request: _request,
} => {}
AnalyticsFact::Response {
connection_id,
response,
} => {
self.ingest_response(connection_id, *response, out);
}
AnalyticsFact::Notification(_notification) => {}
AnalyticsFact::Custom(input) => match input {
CustomAnalyticsFact::SkillInvoked(input) => {
self.ingest_skill_invoked(input, out).await;
}
CustomAnalyticsFact::AppMentioned(input) => {
self.ingest_app_mentioned(input, out);
}
CustomAnalyticsFact::AppUsed(input) => {
self.ingest_app_used(input, out);
}
CustomAnalyticsFact::PluginUsed(input) => {
self.ingest_plugin_used(input, out);
}
CustomAnalyticsFact::PluginStateChanged(input) => {
self.ingest_plugin_state_changed(input, out);
}
},
}
}
fn ingest_initialize(
&mut self,
connection_id: u64,
params: InitializeParams,
product_client_id: String,
runtime: CodexRuntimeMetadata,
rpc_transport: AppServerRpcTransport,
) {
self.connections.insert(
connection_id,
ConnectionState {
app_server_client: CodexAppServerClientMetadata {
product_client_id,
client_name: Some(params.client_info.name),
client_version: Some(params.client_info.version),
rpc_transport,
experimental_api_enabled: params
.capabilities
.map(|capabilities| capabilities.experimental_api),
},
runtime,
},
);
}
async fn ingest_skill_invoked(
&mut self,
input: SkillInvokedInput,
out: &mut Vec<TrackEventRequest>,
) {
let SkillInvokedInput {
tracking,
invocations,
} = input;
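// Each invocation is attributed to its repo (root plus remote URL) when
// the skill lives inside a git checkout; otherwise it counts as personal.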
for invocation in invocations {
let skill_scope = match invocation.skill_scope {
SkillScope::User => "user",
SkillScope::Repo => "repo",
SkillScope::System => "system",
SkillScope::Admin => "admin",
};
let repo_root = get_git_repo_root(invocation.skill_path.as_path());
let repo_url = if let Some(root) = repo_root.as_ref() {
collect_git_info(root)
.await
.and_then(|info| info.repository_url)
} else {
None
};
let skill_id = skill_id_for_local_skill(
repo_url.as_deref(),
repo_root.as_deref(),
invocation.skill_path.as_path(),
invocation.skill_name.as_str(),
);
out.push(TrackEventRequest::SkillInvocation(
SkillInvocationEventRequest {
event_type: "skill_invocation",
skill_id,
skill_name: invocation.skill_name.clone(),
event_params: SkillInvocationEventParams {
thread_id: Some(tracking.thread_id.clone()),
invoke_type: Some(invocation.invocation_type),
model_slug: Some(tracking.model_slug.clone()),
product_client_id: Some(originator().value),
repo_url,
skill_scope: Some(skill_scope.to_string()),
},
},
));
}
}
fn ingest_app_mentioned(&mut self, input: AppMentionedInput, out: &mut Vec<TrackEventRequest>) {
let AppMentionedInput { tracking, mentions } = input;
out.extend(mentions.into_iter().map(|mention| {
let event_params = codex_app_metadata(&tracking, mention);
TrackEventRequest::AppMentioned(CodexAppMentionedEventRequest {
event_type: "codex_app_mentioned",
event_params,
})
}));
}
fn ingest_app_used(&mut self, input: AppUsedInput, out: &mut Vec<TrackEventRequest>) {
let AppUsedInput { tracking, app } = input;
let event_params = codex_app_metadata(&tracking, app);
out.push(TrackEventRequest::AppUsed(CodexAppUsedEventRequest {
event_type: "codex_app_used",
event_params,
}));
}
fn ingest_plugin_used(&mut self, input: PluginUsedInput, out: &mut Vec<TrackEventRequest>) {
let PluginUsedInput { tracking, plugin } = input;
out.push(TrackEventRequest::PluginUsed(CodexPluginUsedEventRequest {
event_type: "codex_plugin_used",
event_params: codex_plugin_used_metadata(&tracking, plugin),
}));
}
fn ingest_plugin_state_changed(
&mut self,
input: PluginStateChangedInput,
out: &mut Vec<TrackEventRequest>,
) {
let PluginStateChangedInput { plugin, state } = input;
let event = CodexPluginEventRequest {
event_type: plugin_state_event_type(state),
event_params: codex_plugin_metadata(plugin),
};
out.push(match state {
PluginState::Installed => TrackEventRequest::PluginInstalled(event),
PluginState::Uninstalled => TrackEventRequest::PluginUninstalled(event),
PluginState::Enabled => TrackEventRequest::PluginEnabled(event),
PluginState::Disabled => TrackEventRequest::PluginDisabled(event),
});
}
fn ingest_response(
&mut self,
connection_id: u64,
response: ClientResponse,
out: &mut Vec<TrackEventRequest>,
) {
let (thread, model, initialization_mode) = match response {
ClientResponse::ThreadStart { response, .. } => (
response.thread,
response.model,
ThreadInitializationMode::New,
),
ClientResponse::ThreadResume { response, .. } => (
response.thread,
response.model,
ThreadInitializationMode::Resumed,
),
ClientResponse::ThreadFork { response, .. } => (
response.thread,
response.model,
ThreadInitializationMode::Forked,
),
_ => return,
};
let thread_source: SessionSource = thread.source.into();
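// Publishing requires a cached `initialize` for this connection; without
// one, the thread lifecycle response is dropped.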
let Some(connection_state) = self.connections.get(&connection_id) else {
return;
};
out.push(TrackEventRequest::ThreadInitialized(
ThreadInitializedEvent {
event_type: "codex_thread_initialized",
event_params: ThreadInitializedEventParams {
thread_id: thread.id,
app_server_client: connection_state.app_server_client.clone(),
runtime: connection_state.runtime.clone(),
model,
ephemeral: thread.ephemeral,
thread_source: thread_source_name(&thread_source),
initialization_mode,
subagent_source: None,
parent_thread_id: None,
created_at: u64::try_from(thread.created_at).unwrap_or_default(),
},
},
));
}
}
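// Skill IDs are a SHA-1 digest of `{prefix}_{path}_{skill_name}`, giving
// telemetry a stable, opaque identifier rather than a raw filesystem path.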
pub(crate) fn skill_id_for_local_skill(
repo_url: Option<&str>,
repo_root: Option<&Path>,
skill_path: &Path,
skill_name: &str,
) -> String {
let path = normalize_path_for_skill_id(repo_url, repo_root, skill_path);
let prefix = if let Some(url) = repo_url {
format!("repo_{url}")
} else {
"personal".to_string()
};
let raw_id = format!("{prefix}_{path}_{skill_name}");
let mut hasher = sha1::Sha1::new();
sha1::Digest::update(&mut hasher, raw_id.as_bytes());
format!("{:x}", sha1::Digest::finalize(hasher))
}
/// Returns a normalized path for skill ID construction.
///
/// - Repo-scoped skills use a path relative to the repo root.
/// - User/admin/system skills use an absolute path.
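///
/// Illustration (hypothetical paths; assumes `canonicalize` leaves them
/// unchanged): with `repo_root = /work/repo`, a repo skill at
/// `/work/repo/skills/doc/SKILL.md` normalizes to `skills/doc/SKILL.md`,
/// while the same path with no repo context stays absolute. Backslashes
/// are rewritten to `/` in both cases.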
pub(crate) fn normalize_path_for_skill_id(
repo_url: Option<&str>,
repo_root: Option<&Path>,
skill_path: &Path,
) -> String {
let resolved_path =
std::fs::canonicalize(skill_path).unwrap_or_else(|_| skill_path.to_path_buf());
match (repo_url, repo_root) {
(Some(_), Some(root)) => {
let resolved_root = std::fs::canonicalize(root).unwrap_or_else(|_| root.to_path_buf());
resolved_path
.strip_prefix(&resolved_root)
.unwrap_or(resolved_path.as_path())
.to_string_lossy()
.replace('\\', "/")
}
_ => resolved_path.to_string_lossy().replace('\\', "/"),
}
}

@@ -8,9 +8,6 @@ license.workspace = true
name = "codex_ansi_escape"
path = "src/lib.rs"
[lints]
workspace = true
[dependencies]
ansi-to-tui = { workspace = true }
ratatui = { workspace = true, features = [

@@ -18,14 +18,11 @@ codex-arg0 = { workspace = true }
codex-core = { workspace = true }
codex-feedback = { workspace = true }
codex-protocol = { workspace = true }
futures = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true, features = ["sync", "time", "rt"] }
tokio-tungstenite = { workspace = true }
toml = { workspace = true }
tracing = { workspace = true }
url = { workspace = true }
[dev-dependencies]
pretty_assertions = { workspace = true }

File diff suppressed because it is too large.

@@ -1,982 +0,0 @@
/*
This module implements the websocket-backed app-server client transport.
It owns the remote connection lifecycle, including the initialize/initialized
handshake, JSON-RPC request/response routing, server-request resolution, and
notification streaming. The rest of the crate uses the same `AppServerEvent`
surface for both in-process and remote transports, so callers such as the TUI
can switch between them without changing their higher-level session logic.
*/
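//
// A minimal usage sketch (hypothetical endpoint and token; error handling
// elided), showing the connect-then-drain shape higher layers rely on:
//
//   let mut client = RemoteAppServerClient::connect(RemoteAppServerConnectArgs {
//       websocket_url: "wss://example.com/app-server".to_string(),
//       auth_token: Some("token".to_string()),
//       client_name: "codex-tui".to_string(),
//       client_version: "1.0.0".to_string(),
//       experimental_api: false,
//       opt_out_notification_methods: Vec::new(),
//       channel_capacity: 64,
//   })
//   .await?;
//   while let Some(event) = client.next_event().await {
//       // dispatch AppServerEvent::{ServerNotification, ServerRequest, ...}
//   }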
use std::collections::HashMap;
use std::collections::VecDeque;
use std::io::Error as IoError;
use std::io::ErrorKind;
use std::io::Result as IoResult;
use std::time::Duration;
use crate::AppServerEvent;
use crate::RequestResult;
use crate::SHUTDOWN_TIMEOUT;
use crate::TypedRequestError;
use crate::request_method_name;
use crate::server_notification_requires_delivery;
use codex_app_server_protocol::ClientInfo;
use codex_app_server_protocol::ClientNotification;
use codex_app_server_protocol::ClientRequest;
use codex_app_server_protocol::InitializeCapabilities;
use codex_app_server_protocol::InitializeParams;
use codex_app_server_protocol::JSONRPCError;
use codex_app_server_protocol::JSONRPCErrorError;
use codex_app_server_protocol::JSONRPCMessage;
use codex_app_server_protocol::JSONRPCNotification;
use codex_app_server_protocol::JSONRPCRequest;
use codex_app_server_protocol::JSONRPCResponse;
use codex_app_server_protocol::RequestId;
use codex_app_server_protocol::Result as JsonRpcResult;
use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::ServerRequest;
use futures::SinkExt;
use futures::StreamExt;
use serde::de::DeserializeOwned;
use tokio::net::TcpStream;
use tokio::sync::mpsc;
use tokio::sync::oneshot;
use tokio::time::timeout;
use tokio_tungstenite::MaybeTlsStream;
use tokio_tungstenite::WebSocketStream;
use tokio_tungstenite::connect_async;
use tokio_tungstenite::tungstenite::Message;
use tokio_tungstenite::tungstenite::client::IntoClientRequest;
use tokio_tungstenite::tungstenite::http::HeaderValue;
use tokio_tungstenite::tungstenite::http::header::AUTHORIZATION;
use tracing::warn;
use url::Url;
const CONNECT_TIMEOUT: Duration = Duration::from_secs(10);
const INITIALIZE_TIMEOUT: Duration = Duration::from_secs(10);
#[derive(Debug, Clone)]
pub struct RemoteAppServerConnectArgs {
pub websocket_url: String,
pub auth_token: Option<String>,
pub client_name: String,
pub client_version: String,
pub experimental_api: bool,
pub opt_out_notification_methods: Vec<String>,
pub channel_capacity: usize,
}
impl RemoteAppServerConnectArgs {
fn initialize_params(&self) -> InitializeParams {
let capabilities = InitializeCapabilities {
experimental_api: self.experimental_api,
opt_out_notification_methods: if self.opt_out_notification_methods.is_empty() {
None
} else {
Some(self.opt_out_notification_methods.clone())
},
};
InitializeParams {
client_info: ClientInfo {
name: self.client_name.clone(),
title: None,
version: self.client_version.clone(),
},
capabilities: Some(capabilities),
}
}
}
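// Bearer tokens are only attached over `wss://` or loopback `ws://`, so a
// plaintext token never crosses a non-local network hop.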
pub(crate) fn websocket_url_supports_auth_token(url: &Url) -> bool {
match (url.scheme(), url.host()) {
("wss", Some(_)) => true,
("ws", Some(url::Host::Domain(domain))) => domain.eq_ignore_ascii_case("localhost"),
("ws", Some(url::Host::Ipv4(addr))) => addr.is_loopback(),
("ws", Some(url::Host::Ipv6(addr))) => addr.is_loopback(),
_ => false,
}
}
enum RemoteClientCommand {
Request {
request: Box<ClientRequest>,
response_tx: oneshot::Sender<IoResult<RequestResult>>,
},
Notify {
notification: ClientNotification,
response_tx: oneshot::Sender<IoResult<()>>,
},
ResolveServerRequest {
request_id: RequestId,
result: JsonRpcResult,
response_tx: oneshot::Sender<IoResult<()>>,
},
RejectServerRequest {
request_id: RequestId,
error: JSONRPCErrorError,
response_tx: oneshot::Sender<IoResult<()>>,
},
Shutdown {
response_tx: oneshot::Sender<IoResult<()>>,
},
}
pub struct RemoteAppServerClient {
command_tx: mpsc::Sender<RemoteClientCommand>,
event_rx: mpsc::Receiver<AppServerEvent>,
pending_events: VecDeque<AppServerEvent>,
worker_handle: tokio::task::JoinHandle<()>,
}
#[derive(Clone)]
pub struct RemoteAppServerRequestHandle {
command_tx: mpsc::Sender<RemoteClientCommand>,
}
impl RemoteAppServerClient {
pub async fn connect(args: RemoteAppServerConnectArgs) -> IoResult<Self> {
let channel_capacity = args.channel_capacity.max(1);
let websocket_url = args.websocket_url.clone();
let url = Url::parse(&websocket_url).map_err(|err| {
IoError::new(
ErrorKind::InvalidInput,
format!("invalid websocket URL `{websocket_url}`: {err}"),
)
})?;
if args.auth_token.is_some() && !websocket_url_supports_auth_token(&url) {
return Err(IoError::new(
ErrorKind::InvalidInput,
format!(
"remote auth tokens require `wss://` or loopback `ws://` URLs; got `{websocket_url}`"
),
));
}
let mut request = url.as_str().into_client_request().map_err(|err| {
IoError::new(
ErrorKind::InvalidInput,
format!("invalid websocket URL `{websocket_url}`: {err}"),
)
})?;
if let Some(auth_token) = args.auth_token.as_deref() {
let header_value =
HeaderValue::from_str(&format!("Bearer {auth_token}")).map_err(|err| {
IoError::new(
ErrorKind::InvalidInput,
format!("invalid remote authorization header value: {err}"),
)
})?;
request.headers_mut().insert(AUTHORIZATION, header_value);
}
let stream = timeout(CONNECT_TIMEOUT, connect_async(request))
.await
.map_err(|_| {
IoError::new(
ErrorKind::TimedOut,
format!("timed out connecting to remote app server at `{websocket_url}`"),
)
})?
.map(|(stream, _response)| stream)
.map_err(|err| {
IoError::other(format!(
"failed to connect to remote app server at `{websocket_url}`: {err}"
))
})?;
let mut stream = stream;
let pending_events = initialize_remote_connection(
&mut stream,
&websocket_url,
args.initialize_params(),
INITIALIZE_TIMEOUT,
)
.await?;
let (command_tx, mut command_rx) = mpsc::channel::<RemoteClientCommand>(channel_capacity);
let (event_tx, event_rx) = mpsc::channel::<AppServerEvent>(channel_capacity);
let worker_handle = tokio::spawn(async move {
let mut pending_requests =
HashMap::<RequestId, oneshot::Sender<IoResult<RequestResult>>>::new();
let mut skipped_events = 0usize;
loop {
tokio::select! {
command = command_rx.recv() => {
let Some(command) = command else {
let _ = stream.close(None).await;
break;
};
match command {
RemoteClientCommand::Request { request, response_tx } => {
let request_id = request_id_from_client_request(&request);
if pending_requests.contains_key(&request_id) {
let _ = response_tx.send(Err(IoError::new(
ErrorKind::InvalidInput,
format!("duplicate remote app-server request id `{request_id}`"),
)));
continue;
}
pending_requests.insert(request_id.clone(), response_tx);
if let Err(err) = write_jsonrpc_message(
&mut stream,
JSONRPCMessage::Request(jsonrpc_request_from_client_request(*request)),
&websocket_url,
)
.await
{
let err_message = err.to_string();
if let Some(response_tx) = pending_requests.remove(&request_id) {
let _ = response_tx.send(Err(err));
}
let _ = deliver_event(
&event_tx,
&mut skipped_events,
AppServerEvent::Disconnected {
message: format!(
"remote app server at `{websocket_url}` write failed: {err_message}"
),
},
&mut stream,
)
.await;
break;
}
}
RemoteClientCommand::Notify { notification, response_tx } => {
let result = write_jsonrpc_message(
&mut stream,
JSONRPCMessage::Notification(
jsonrpc_notification_from_client_notification(notification),
),
&websocket_url,
)
.await;
let _ = response_tx.send(result);
}
RemoteClientCommand::ResolveServerRequest {
request_id,
result,
response_tx,
} => {
let result = write_jsonrpc_message(
&mut stream,
JSONRPCMessage::Response(JSONRPCResponse {
id: request_id,
result,
}),
&websocket_url,
)
.await;
let _ = response_tx.send(result);
}
RemoteClientCommand::RejectServerRequest {
request_id,
error,
response_tx,
} => {
let result = write_jsonrpc_message(
&mut stream,
JSONRPCMessage::Error(JSONRPCError {
error,
id: request_id,
}),
&websocket_url,
)
.await;
let _ = response_tx.send(result);
}
RemoteClientCommand::Shutdown { response_tx } => {
let close_result = stream.close(None).await.map_err(|err| {
IoError::other(format!(
"failed to close websocket app server `{websocket_url}`: {err}"
))
});
let _ = response_tx.send(close_result);
break;
}
}
}
message = stream.next() => {
match message {
Some(Ok(Message::Text(text))) => {
match serde_json::from_str::<JSONRPCMessage>(&text) {
Ok(JSONRPCMessage::Response(response)) => {
if let Some(response_tx) = pending_requests.remove(&response.id) {
let _ = response_tx.send(Ok(Ok(response.result)));
}
}
Ok(JSONRPCMessage::Error(error)) => {
if let Some(response_tx) = pending_requests.remove(&error.id) {
let _ = response_tx.send(Ok(Err(error.error)));
}
}
Ok(JSONRPCMessage::Notification(notification)) => {
if let Some(event) =
app_server_event_from_notification(notification)
&& let Err(err) = deliver_event(
&event_tx,
&mut skipped_events,
event,
&mut stream,
)
.await
{
warn!(%err, "failed to deliver remote app-server event");
break;
}
}
Ok(JSONRPCMessage::Request(request)) => {
let request_id = request.id.clone();
let method = request.method.clone();
match ServerRequest::try_from(request) {
Ok(request) => {
if let Err(err) = deliver_event(
&event_tx,
&mut skipped_events,
AppServerEvent::ServerRequest(request),
&mut stream,
)
.await
{
warn!(%err, "failed to deliver remote app-server server request");
break;
}
}
Err(err) => {
warn!(%err, method, "rejecting unknown remote app-server request");
if let Err(reject_err) = write_jsonrpc_message(
&mut stream,
JSONRPCMessage::Error(JSONRPCError {
error: JSONRPCErrorError {
code: -32601,
message: format!(
"unsupported remote app-server request `{method}`"
),
data: None,
},
id: request_id,
}),
&websocket_url,
)
.await
{
let err_message = reject_err.to_string();
let _ = deliver_event(
&event_tx,
&mut skipped_events,
AppServerEvent::Disconnected {
message: format!(
"remote app server at `{websocket_url}` write failed: {err_message}"
),
},
&mut stream,
)
.await;
break;
}
}
}
}
Err(err) => {
let _ = deliver_event(
&event_tx,
&mut skipped_events,
AppServerEvent::Disconnected {
message: format!(
"remote app server at `{websocket_url}` sent invalid JSON-RPC: {err}"
),
},
&mut stream,
)
.await;
break;
}
}
}
Some(Ok(Message::Close(frame))) => {
let reason = frame
.as_ref()
.map(|frame| frame.reason.to_string())
.filter(|reason| !reason.is_empty())
.unwrap_or_else(|| "connection closed".to_string());
let _ = deliver_event(
&event_tx,
&mut skipped_events,
AppServerEvent::Disconnected {
message: format!(
"remote app server at `{websocket_url}` disconnected: {reason}"
),
},
&mut stream,
)
.await;
break;
}
Some(Ok(Message::Binary(_)))
| Some(Ok(Message::Ping(_)))
| Some(Ok(Message::Pong(_)))
| Some(Ok(Message::Frame(_))) => {}
Some(Err(err)) => {
let _ = deliver_event(
&event_tx,
&mut skipped_events,
AppServerEvent::Disconnected {
message: format!(
"remote app server at `{websocket_url}` transport failed: {err}"
),
},
&mut stream,
)
.await;
break;
}
None => {
let _ = deliver_event(
&event_tx,
&mut skipped_events,
AppServerEvent::Disconnected {
message: format!(
"remote app server at `{websocket_url}` closed the connection"
),
},
&mut stream,
)
.await;
break;
}
}
}
}
}
let err = IoError::new(
ErrorKind::BrokenPipe,
"remote app-server worker channel is closed",
);
for (_, response_tx) in pending_requests {
let _ = response_tx.send(Err(IoError::new(err.kind(), err.to_string())));
}
});
Ok(Self {
command_tx,
event_rx,
pending_events: pending_events.into(),
worker_handle,
})
}
pub fn request_handle(&self) -> RemoteAppServerRequestHandle {
RemoteAppServerRequestHandle {
command_tx: self.command_tx.clone(),
}
}
pub async fn request(&self, request: ClientRequest) -> IoResult<RequestResult> {
let (response_tx, response_rx) = oneshot::channel();
self.command_tx
.send(RemoteClientCommand::Request {
request: Box::new(request),
response_tx,
})
.await
.map_err(|_| {
IoError::new(
ErrorKind::BrokenPipe,
"remote app-server worker channel is closed",
)
})?;
response_rx.await.map_err(|_| {
IoError::new(
ErrorKind::BrokenPipe,
"remote app-server request channel is closed",
)
})?
}
pub async fn request_typed<T>(&self, request: ClientRequest) -> Result<T, TypedRequestError>
where
T: DeserializeOwned,
{
let method = request_method_name(&request);
let response =
self.request(request)
.await
.map_err(|source| TypedRequestError::Transport {
method: method.clone(),
source,
})?;
let result = response.map_err(|source| TypedRequestError::Server {
method: method.clone(),
source,
})?;
serde_json::from_value(result)
.map_err(|source| TypedRequestError::Deserialize { method, source })
}
pub async fn notify(&self, notification: ClientNotification) -> IoResult<()> {
let (response_tx, response_rx) = oneshot::channel();
self.command_tx
.send(RemoteClientCommand::Notify {
notification,
response_tx,
})
.await
.map_err(|_| {
IoError::new(
ErrorKind::BrokenPipe,
"remote app-server worker channel is closed",
)
})?;
response_rx.await.map_err(|_| {
IoError::new(
ErrorKind::BrokenPipe,
"remote app-server notify channel is closed",
)
})?
}
pub async fn resolve_server_request(
&self,
request_id: RequestId,
result: JsonRpcResult,
) -> IoResult<()> {
let (response_tx, response_rx) = oneshot::channel();
self.command_tx
.send(RemoteClientCommand::ResolveServerRequest {
request_id,
result,
response_tx,
})
.await
.map_err(|_| {
IoError::new(
ErrorKind::BrokenPipe,
"remote app-server worker channel is closed",
)
})?;
response_rx.await.map_err(|_| {
IoError::new(
ErrorKind::BrokenPipe,
"remote app-server resolve channel is closed",
)
})?
}
pub async fn reject_server_request(
&self,
request_id: RequestId,
error: JSONRPCErrorError,
) -> IoResult<()> {
let (response_tx, response_rx) = oneshot::channel();
self.command_tx
.send(RemoteClientCommand::RejectServerRequest {
request_id,
error,
response_tx,
})
.await
.map_err(|_| {
IoError::new(
ErrorKind::BrokenPipe,
"remote app-server worker channel is closed",
)
})?;
response_rx.await.map_err(|_| {
IoError::new(
ErrorKind::BrokenPipe,
"remote app-server reject channel is closed",
)
})?
}
pub async fn next_event(&mut self) -> Option<AppServerEvent> {
if let Some(event) = self.pending_events.pop_front() {
return Some(event);
}
self.event_rx.recv().await
}
pub async fn shutdown(self) -> IoResult<()> {
let Self {
command_tx,
event_rx,
pending_events: _pending_events,
worker_handle,
} = self;
let mut worker_handle = worker_handle;
drop(event_rx);
let (response_tx, response_rx) = oneshot::channel();
if command_tx
.send(RemoteClientCommand::Shutdown { response_tx })
.await
.is_ok()
&& let Ok(command_result) = timeout(SHUTDOWN_TIMEOUT, response_rx).await
{
command_result.map_err(|_| {
IoError::new(
ErrorKind::BrokenPipe,
"remote app-server shutdown channel is closed",
)
})??;
}
if let Err(_elapsed) = timeout(SHUTDOWN_TIMEOUT, &mut worker_handle).await {
worker_handle.abort();
let _ = worker_handle.await;
}
Ok(())
}
}
impl RemoteAppServerRequestHandle {
pub async fn request(&self, request: ClientRequest) -> IoResult<RequestResult> {
let (response_tx, response_rx) = oneshot::channel();
self.command_tx
.send(RemoteClientCommand::Request {
request: Box::new(request),
response_tx,
})
.await
.map_err(|_| {
IoError::new(
ErrorKind::BrokenPipe,
"remote app-server worker channel is closed",
)
})?;
response_rx.await.map_err(|_| {
IoError::new(
ErrorKind::BrokenPipe,
"remote app-server request channel is closed",
)
})?
}
pub async fn request_typed<T>(&self, request: ClientRequest) -> Result<T, TypedRequestError>
where
T: DeserializeOwned,
{
let method = request_method_name(&request);
let response =
self.request(request)
.await
.map_err(|source| TypedRequestError::Transport {
method: method.clone(),
source,
})?;
let result = response.map_err(|source| TypedRequestError::Server {
method: method.clone(),
source,
})?;
serde_json::from_value(result)
.map_err(|source| TypedRequestError::Deserialize { method, source })
}
}
async fn initialize_remote_connection(
stream: &mut WebSocketStream<MaybeTlsStream<TcpStream>>,
websocket_url: &str,
params: InitializeParams,
initialize_timeout: Duration,
) -> IoResult<Vec<AppServerEvent>> {
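// Handshake: send `initialize`, buffer any notifications or server
// requests that arrive before the matching response, then send the
// `initialized` notification. Buffered events are returned to the caller
// so nothing observed during the handshake is lost.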
let initialize_request_id = RequestId::String("initialize".to_string());
let mut pending_events = Vec::new();
write_jsonrpc_message(
stream,
JSONRPCMessage::Request(jsonrpc_request_from_client_request(
ClientRequest::Initialize {
request_id: initialize_request_id.clone(),
params,
},
)),
websocket_url,
)
.await?;
timeout(initialize_timeout, async {
loop {
match stream.next().await {
Some(Ok(Message::Text(text))) => {
let message = serde_json::from_str::<JSONRPCMessage>(&text).map_err(|err| {
IoError::other(format!(
"remote app server at `{websocket_url}` sent invalid initialize response: {err}"
))
})?;
match message {
JSONRPCMessage::Response(response) if response.id == initialize_request_id => {
break Ok(());
}
JSONRPCMessage::Error(error) if error.id == initialize_request_id => {
break Err(IoError::other(format!(
"remote app server at `{websocket_url}` rejected initialize: {}",
error.error.message
)));
}
JSONRPCMessage::Notification(notification) => {
if let Some(event) = app_server_event_from_notification(notification) {
pending_events.push(event);
}
}
JSONRPCMessage::Request(request) => {
let request_id = request.id.clone();
let method = request.method.clone();
match ServerRequest::try_from(request) {
Ok(request) => {
pending_events.push(AppServerEvent::ServerRequest(request));
}
Err(err) => {
warn!(%err, method, "rejecting unknown remote app-server request during initialize");
write_jsonrpc_message(
stream,
JSONRPCMessage::Error(JSONRPCError {
error: JSONRPCErrorError {
code: -32601,
message: format!(
"unsupported remote app-server request `{method}`"
),
data: None,
},
id: request_id,
}),
websocket_url,
)
.await?;
}
}
}
JSONRPCMessage::Response(_) | JSONRPCMessage::Error(_) => {}
}
}
Some(Ok(Message::Binary(_)))
| Some(Ok(Message::Ping(_)))
| Some(Ok(Message::Pong(_)))
| Some(Ok(Message::Frame(_))) => {}
Some(Ok(Message::Close(frame))) => {
let reason = frame
.as_ref()
.map(|frame| frame.reason.to_string())
.filter(|reason| !reason.is_empty())
.unwrap_or_else(|| "connection closed during initialize".to_string());
break Err(IoError::new(
ErrorKind::ConnectionAborted,
format!(
"remote app server at `{websocket_url}` closed during initialize: {reason}"
),
));
}
Some(Err(err)) => {
break Err(IoError::other(format!(
"remote app server at `{websocket_url}` transport failed during initialize: {err}"
)));
}
None => {
break Err(IoError::new(
ErrorKind::UnexpectedEof,
format!("remote app server at `{websocket_url}` closed during initialize"),
));
}
}
}
})
.await
.map_err(|_| {
IoError::new(
ErrorKind::TimedOut,
format!("timed out waiting for initialize response from `{websocket_url}`"),
)
})??;
write_jsonrpc_message(
stream,
JSONRPCMessage::Notification(jsonrpc_notification_from_client_notification(
ClientNotification::Initialized,
)),
websocket_url,
)
.await?;
Ok(pending_events)
}
fn app_server_event_from_notification(notification: JSONRPCNotification) -> Option<AppServerEvent> {
match ServerNotification::try_from(notification) {
Ok(notification) => Some(AppServerEvent::ServerNotification(notification)),
Err(_) => None,
}
}
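// Delivery policy: events that must reach the consumer (per
// `event_requires_delivery`) block on `send`; droppable events use
// `try_send`, with drops counted and surfaced later as a single
// `Lagged { skipped }` marker. A dropped `ServerRequest` is rejected over
// the wire so the remote side is not left waiting on a response.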
async fn deliver_event(
event_tx: &mpsc::Sender<AppServerEvent>,
skipped_events: &mut usize,
event: AppServerEvent,
stream: &mut WebSocketStream<MaybeTlsStream<TcpStream>>,
) -> IoResult<()> {
if *skipped_events > 0 {
if event_requires_delivery(&event) {
if event_tx
.send(AppServerEvent::Lagged {
skipped: *skipped_events,
})
.await
.is_err()
{
return Err(IoError::new(
ErrorKind::BrokenPipe,
"remote app-server event consumer channel is closed",
));
}
*skipped_events = 0;
} else {
match event_tx.try_send(AppServerEvent::Lagged {
skipped: *skipped_events,
}) {
Ok(()) => *skipped_events = 0,
Err(mpsc::error::TrySendError::Full(_)) => {
*skipped_events = (*skipped_events).saturating_add(1);
reject_if_server_request_dropped(stream, &event).await?;
return Ok(());
}
Err(mpsc::error::TrySendError::Closed(_)) => {
return Err(IoError::new(
ErrorKind::BrokenPipe,
"remote app-server event consumer channel is closed",
));
}
}
}
}
if event_requires_delivery(&event) {
event_tx.send(event).await.map_err(|_| {
IoError::new(
ErrorKind::BrokenPipe,
"remote app-server event consumer channel is closed",
)
})?;
return Ok(());
}
match event_tx.try_send(event) {
Ok(()) => Ok(()),
Err(mpsc::error::TrySendError::Full(event)) => {
*skipped_events = (*skipped_events).saturating_add(1);
reject_if_server_request_dropped(stream, &event).await
}
Err(mpsc::error::TrySendError::Closed(_)) => Err(IoError::new(
ErrorKind::BrokenPipe,
"remote app-server event consumer channel is closed",
)),
}
}
async fn reject_if_server_request_dropped(
stream: &mut WebSocketStream<MaybeTlsStream<TcpStream>>,
event: &AppServerEvent,
) -> IoResult<()> {
let AppServerEvent::ServerRequest(request) = event else {
return Ok(());
};
write_jsonrpc_message(
stream,
JSONRPCMessage::Error(JSONRPCError {
error: JSONRPCErrorError {
code: -32001,
message: "remote app-server event queue is full".to_string(),
data: None,
},
id: request.id().clone(),
}),
"<remote-app-server>",
)
.await
}
fn event_requires_delivery(event: &AppServerEvent) -> bool {
match event {
AppServerEvent::ServerNotification(notification) => {
server_notification_requires_delivery(notification)
}
AppServerEvent::Disconnected { .. } => true,
AppServerEvent::Lagged { .. } | AppServerEvent::ServerRequest(_) => false,
}
}
fn request_id_from_client_request(request: &ClientRequest) -> RequestId {
jsonrpc_request_from_client_request(request.clone()).id
}
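/// Convert a typed client request into a raw JSON-RPC request by
/// round-tripping through `serde_json::Value`: both types share the same wire
/// shape, so a failure here is a programming error and panics. The
/// notification conversion below relies on the same invariant.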
fn jsonrpc_request_from_client_request(request: ClientRequest) -> JSONRPCRequest {
let value = match serde_json::to_value(request) {
Ok(value) => value,
Err(err) => panic!("client request should serialize: {err}"),
};
match serde_json::from_value(value) {
Ok(request) => request,
Err(err) => panic!("client request should encode as JSON-RPC request: {err}"),
}
}
fn jsonrpc_notification_from_client_notification(
notification: ClientNotification,
) -> JSONRPCNotification {
let value = match serde_json::to_value(notification) {
Ok(value) => value,
Err(err) => panic!("client notification should serialize: {err}"),
};
match serde_json::from_value(value) {
Ok(notification) => notification,
Err(err) => panic!("client notification should encode as JSON-RPC notification: {err}"),
}
}
async fn write_jsonrpc_message(
stream: &mut WebSocketStream<MaybeTlsStream<TcpStream>>,
message: JSONRPCMessage,
websocket_url: &str,
) -> IoResult<()> {
let payload = serde_json::to_string(&message).map_err(IoError::other)?;
stream
.send(Message::Text(payload.into()))
.await
.map_err(|err| {
IoError::other(format!(
"failed to write websocket message to `{websocket_url}`: {err}"
))
})
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn event_requires_delivery_marks_transcript_and_disconnect_events() {
assert!(event_requires_delivery(
&AppServerEvent::ServerNotification(ServerNotification::AgentMessageDelta(
codex_app_server_protocol::AgentMessageDeltaNotification {
thread_id: "thread".to_string(),
turn_id: "turn".to_string(),
item_id: "item".to_string(),
delta: "hello".to_string(),
},
))
));
assert!(event_requires_delivery(
&AppServerEvent::ServerNotification(ServerNotification::ItemCompleted(
codex_app_server_protocol::ItemCompletedNotification {
thread_id: "thread".to_string(),
turn_id: "turn".to_string(),
item: codex_app_server_protocol::ThreadItem::Plan {
id: "item".to_string(),
text: "step".to_string(),
},
}
))
));
assert!(event_requires_delivery(&AppServerEvent::Disconnected {
message: "closed".to_string(),
}));
assert!(!event_requires_delivery(&AppServerEvent::Lagged {
skipped: 1
}));
}
}

View File

@@ -14,9 +14,8 @@ workspace = true
[dependencies]
anyhow = { workspace = true }
clap = { workspace = true, features = ["derive"] }
codex-experimental-api-macros = { workspace = true }
codex-git-utils = { workspace = true }
codex-protocol = { workspace = true }
codex-experimental-api-macros = { workspace = true }
codex-utils-absolute-path = { workspace = true }
schemars = { workspace = true }
serde = { workspace = true, features = ["derive"] }

File diff suppressed because it is too large.

View File

@@ -28,6 +28,29 @@
},
"type": "object"
},
"AdditionalMacOsPermissions": {
"properties": {
"accessibility": {
"type": "boolean"
},
"automations": {
"$ref": "#/definitions/MacOsAutomationPermission"
},
"calendar": {
"type": "boolean"
},
"preferences": {
"$ref": "#/definitions/MacOsPreferencesPermission"
}
},
"required": [
"accessibility",
"automations",
"calendar",
"preferences"
],
"type": "object"
},
"AdditionalNetworkPermissions": {
"properties": {
"enabled": {
@@ -51,6 +74,16 @@
}
]
},
"macos": {
"anyOf": [
{
"$ref": "#/definitions/AdditionalMacOsPermissions"
},
{
"type": "null"
}
]
},
"network": {
"anyOf": [
{
@@ -253,6 +286,52 @@
}
]
},
"CommandExecutionRequestApprovalSkillMetadata": {
"properties": {
"pathToSkillsMd": {
"type": "string"
}
},
"required": [
"pathToSkillsMd"
],
"type": "object"
},
"MacOsAutomationPermission": {
"oneOf": [
{
"enum": [
"none",
"all"
],
"type": "string"
},
{
"additionalProperties": false,
"properties": {
"bundle_ids": {
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"bundle_ids"
],
"title": "BundleIdsMacOsAutomationPermission",
"type": "object"
}
]
},
"MacOsPreferencesPermission": {
"enum": [
"none",
"read_only",
"read_write"
],
"type": "string"
},
"NetworkApprovalContext": {
"properties": {
"host": {

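Several schemas in this release gain the macOS permission surface shown above. As a minimal sketch (assuming a `serde_json` dependency; the bundle id and values are illustrative), a conforming `AdditionalMacOsPermissions` payload could look like this:

```rust
use serde_json::json;

fn main() {
    // Hypothetical payload matching the AdditionalMacOsPermissions schema:
    // all four keys are required; `automations` accepts the strings
    // "none"/"all" or a scoped { "bundle_ids": [...] } object, and
    // `preferences` is one of "none" | "read_only" | "read_write".
    let macos = json!({
        "accessibility": false,
        "automations": { "bundle_ids": ["com.apple.Terminal"] },
        "calendar": false,
        "preferences": "read_only"
    });
    println!("{macos}");
}
```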
File diff suppressed because it is too large.

View File

@@ -1,13 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"FuzzyFileSearchMatchType": {
"enum": [
"file",
"directory"
],
"type": "string"
},
"FuzzyFileSearchResult": {
"description": "Superset of [`codex_file_search::FileMatch`]",
"properties": {
@@ -25,9 +18,6 @@
"null"
]
},
"match_type": {
"$ref": "#/definitions/FuzzyFileSearchMatchType"
},
"path": {
"type": "string"
},
@@ -42,7 +32,6 @@
},
"required": [
"file_name",
"match_type",
"path",
"root",
"score"

View File

@@ -1,13 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"FuzzyFileSearchMatchType": {
"enum": [
"file",
"directory"
],
"type": "string"
},
"FuzzyFileSearchResult": {
"description": "Superset of [`codex_file_search::FileMatch`]",
"properties": {
@@ -25,9 +18,6 @@
"null"
]
},
"match_type": {
"$ref": "#/definitions/FuzzyFileSearchMatchType"
},
"path": {
"type": "string"
},
@@ -42,7 +32,6 @@
},
"required": [
"file_name",
"match_type",
"path",
"root",
"score"

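Both copies of the fuzzy-file-search schema above drop `FuzzyFileSearchMatchType` and the `match_type` member, leaving `file_name`, `path`, `root`, and `score` as the required fields. A sketch of a result under the trimmed shape (field values are illustrative):

```rust
use serde_json::json;

fn main() {
    // Hypothetical FuzzyFileSearchResult after the match_type removal; only
    // the schema's remaining required fields are shown here.
    let result = json!({
        "file_name": "main.rs",
        "path": "src/main.rs",
        "root": "/workspace/project",
        "score": 87
    });
    println!("{result}");
}
```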
View File

@@ -28,6 +28,29 @@
},
"type": "object"
},
"AdditionalMacOsPermissions": {
"properties": {
"accessibility": {
"type": "boolean"
},
"automations": {
"$ref": "#/definitions/MacOsAutomationPermission"
},
"calendar": {
"type": "boolean"
},
"preferences": {
"$ref": "#/definitions/MacOsPreferencesPermission"
}
},
"required": [
"accessibility",
"automations",
"calendar",
"preferences"
],
"type": "object"
},
"AdditionalNetworkPermissions": {
"properties": {
"enabled": {
@@ -39,8 +62,7 @@
},
"type": "object"
},
"RequestPermissionProfile": {
"additionalProperties": false,
"AdditionalPermissionProfile": {
"properties": {
"fileSystem": {
"anyOf": [
@@ -52,6 +74,16 @@
}
]
},
"macos": {
"anyOf": [
{
"$ref": "#/definitions/AdditionalMacOsPermissions"
},
{
"type": "null"
}
]
},
"network": {
"anyOf": [
{
@@ -64,6 +96,41 @@
}
},
"type": "object"
},
"MacOsAutomationPermission": {
"oneOf": [
{
"enum": [
"none",
"all"
],
"type": "string"
},
{
"additionalProperties": false,
"properties": {
"bundle_ids": {
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"bundle_ids"
],
"title": "BundleIdsMacOsAutomationPermission",
"type": "object"
}
]
},
"MacOsPreferencesPermission": {
"enum": [
"none",
"read_only",
"read_write"
],
"type": "string"
}
},
"properties": {
@@ -71,7 +138,7 @@
"type": "string"
},
"permissions": {
"$ref": "#/definitions/RequestPermissionProfile"
"$ref": "#/definitions/AdditionalPermissionProfile"
},
"reason": {
"type": [

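This hunk renames `RequestPermissionProfile` to `AdditionalPermissionProfile` and adds the optional `macos` section alongside `fileSystem` and `network`. A sketch of a request payload under the new shape (field values are illustrative; since no section is required, omitted ones are simply left out):

```rust
use serde_json::json;

fn main() {
    // Hypothetical request_permissions payload fragment using the renamed
    // AdditionalPermissionProfile: every section is optional, so a turn can
    // ask for any subset of extra permissions.
    let params = json!({
        "permissions": {
            "macos": {
                "accessibility": true,
                "automations": "none",
                "calendar": false,
                "preferences": "none"
            },
            "network": { "enabled": true }
        },
        "reason": "need accessibility APIs to inspect the frontmost window"
    });
    println!("{params}");
}
```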
View File

@@ -39,6 +39,43 @@
},
"type": "object"
},
"GrantedMacOsPermissions": {
"properties": {
"accessibility": {
"type": [
"boolean",
"null"
]
},
"automations": {
"anyOf": [
{
"$ref": "#/definitions/MacOsAutomationPermission"
},
{
"type": "null"
}
]
},
"calendar": {
"type": [
"boolean",
"null"
]
},
"preferences": {
"anyOf": [
{
"$ref": "#/definitions/MacOsPreferencesPermission"
},
{
"type": "null"
}
]
}
},
"type": "object"
},
"GrantedPermissionProfile": {
"properties": {
"fileSystem": {
@@ -51,6 +88,16 @@
}
]
},
"macos": {
"anyOf": [
{
"$ref": "#/definitions/GrantedMacOsPermissions"
},
{
"type": "null"
}
]
},
"network": {
"anyOf": [
{
@@ -64,10 +111,38 @@
},
"type": "object"
},
"PermissionGrantScope": {
"MacOsAutomationPermission": {
"oneOf": [
{
"enum": [
"none",
"all"
],
"type": "string"
},
{
"additionalProperties": false,
"properties": {
"bundle_ids": {
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"bundle_ids"
],
"title": "BundleIdsMacOsAutomationPermission",
"type": "object"
}
]
},
"MacOsPreferencesPermission": {
"enum": [
"turn",
"session"
"none",
"read_only",
"read_write"
],
"type": "string"
}
@@ -75,14 +150,6 @@
"properties": {
"permissions": {
"$ref": "#/definitions/GrantedPermissionProfile"
},
"scope": {
"allOf": [
{
"$ref": "#/definitions/PermissionGrantScope"
}
],
"default": "turn"
}
},
"required": [

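On the response side, grants gain a nullable per-capability `GrantedMacOsPermissions` block, while the old `scope` field (with its `turn`/`session` enum) drops out of the payload. A sketch of a grant response under the new shape (values illustrative):

```rust
use serde_json::json;

fn main() {
    // Hypothetical grant response: nullable fields let the approver answer
    // only part of what was requested; unanswered capabilities stay null.
    let response = json!({
        "permissions": {
            "macos": {
                "accessibility": true,
                "automations": null,
                "calendar": null,
                "preferences": "read_only"
            }
        }
    });
    println!("{response}");
}
```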
View File

@@ -1,10 +1,6 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"AbsolutePathBuf": {
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
},
"AccountLoginCompletedNotification": {
"properties": {
"error": {
@@ -87,9 +83,6 @@
],
"type": "object"
},
"AgentPath": {
"type": "string"
},
"AppBranding": {
"description": "EXPERIMENTAL - app metadata returned by app-list APIs.",
"properties": {
@@ -518,28 +511,6 @@
],
"title": "ResponseTooManyFailedAttemptsCodexErrorInfo",
"type": "object"
},
{
"additionalProperties": false,
"description": "Returned when `turn/start` or `turn/steer` is submitted while the current active turn cannot accept same-turn steering, for example `/review` or manual `/compact`.",
"properties": {
"activeTurnNotSteerable": {
"properties": {
"turnKind": {
"$ref": "#/definitions/NonSteerableTurnKind"
}
},
"required": [
"turnKind"
],
"type": "object"
}
},
"required": [
"activeTurnNotSteerable"
],
"title": "ActiveTurnNotSteerableCodexErrorInfo",
"type": "object"
}
]
},
@@ -564,7 +535,6 @@
"enum": [
"pendingInit",
"running",
"interrupted",
"completed",
"errored",
"shutdown",
@@ -774,15 +744,6 @@
],
"type": "object"
},
"CommandExecutionSource": {
"enum": [
"agent",
"userShell",
"unifiedExecStartup",
"unifiedExecInteraction"
],
"type": "string"
},
"CommandExecutionStatus": {
"enum": [
"inProgress",
@@ -1002,34 +963,6 @@
],
"type": "object"
},
"FsChangedNotification": {
"description": "Filesystem watch notification emitted for `fs/watch` subscribers.",
"properties": {
"changedPaths": {
"description": "File or directory paths associated with this event.",
"items": {
"$ref": "#/definitions/AbsolutePathBuf"
},
"type": "array"
},
"watchId": {
"description": "Watch identifier returned by `fs/watch`.",
"type": "string"
}
},
"required": [
"changedPaths",
"watchId"
],
"type": "object"
},
"FuzzyFileSearchMatchType": {
"enum": [
"file",
"directory"
],
"type": "string"
},
"FuzzyFileSearchResult": {
"description": "Superset of [`codex_file_search::FileMatch`]",
"properties": {
@@ -1047,9 +980,6 @@
"null"
]
},
"match_type": {
"$ref": "#/definitions/FuzzyFileSearchMatchType"
},
"path": {
"type": "string"
},
@@ -1064,7 +994,6 @@
},
"required": [
"file_name",
"match_type",
"path",
"root",
"score"
@@ -1127,257 +1056,6 @@
},
"type": "object"
},
"GuardianApprovalReview": {
"description": "[UNSTABLE] Temporary guardian approval review payload used by `item/autoApprovalReview/*` notifications. This shape is expected to change soon.",
"properties": {
"rationale": {
"type": [
"string",
"null"
]
},
"riskLevel": {
"anyOf": [
{
"$ref": "#/definitions/GuardianRiskLevel"
},
{
"type": "null"
}
]
},
"riskScore": {
"format": "uint8",
"minimum": 0.0,
"type": [
"integer",
"null"
]
},
"status": {
"$ref": "#/definitions/GuardianApprovalReviewStatus"
}
},
"required": [
"status"
],
"type": "object"
},
"GuardianApprovalReviewStatus": {
"description": "[UNSTABLE] Lifecycle state for a guardian approval review.",
"enum": [
"inProgress",
"approved",
"denied",
"aborted"
],
"type": "string"
},
"GuardianRiskLevel": {
"description": "[UNSTABLE] Risk level assigned by guardian approval review.",
"enum": [
"low",
"medium",
"high"
],
"type": "string"
},
"HookCompletedNotification": {
"properties": {
"run": {
"$ref": "#/definitions/HookRunSummary"
},
"threadId": {
"type": "string"
},
"turnId": {
"type": [
"string",
"null"
]
}
},
"required": [
"run",
"threadId"
],
"type": "object"
},
"HookEventName": {
"enum": [
"preToolUse",
"postToolUse",
"sessionStart",
"userPromptSubmit",
"stop"
],
"type": "string"
},
"HookExecutionMode": {
"enum": [
"sync",
"async"
],
"type": "string"
},
"HookHandlerType": {
"enum": [
"command",
"prompt",
"agent"
],
"type": "string"
},
"HookOutputEntry": {
"properties": {
"kind": {
"$ref": "#/definitions/HookOutputEntryKind"
},
"text": {
"type": "string"
}
},
"required": [
"kind",
"text"
],
"type": "object"
},
"HookOutputEntryKind": {
"enum": [
"warning",
"stop",
"feedback",
"context",
"error"
],
"type": "string"
},
"HookPromptFragment": {
"properties": {
"hookRunId": {
"type": "string"
},
"text": {
"type": "string"
}
},
"required": [
"hookRunId",
"text"
],
"type": "object"
},
"HookRunStatus": {
"enum": [
"running",
"completed",
"failed",
"blocked",
"stopped"
],
"type": "string"
},
"HookRunSummary": {
"properties": {
"completedAt": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"displayOrder": {
"format": "int64",
"type": "integer"
},
"durationMs": {
"format": "int64",
"type": [
"integer",
"null"
]
},
"entries": {
"items": {
"$ref": "#/definitions/HookOutputEntry"
},
"type": "array"
},
"eventName": {
"$ref": "#/definitions/HookEventName"
},
"executionMode": {
"$ref": "#/definitions/HookExecutionMode"
},
"handlerType": {
"$ref": "#/definitions/HookHandlerType"
},
"id": {
"type": "string"
},
"scope": {
"$ref": "#/definitions/HookScope"
},
"sourcePath": {
"type": "string"
},
"startedAt": {
"format": "int64",
"type": "integer"
},
"status": {
"$ref": "#/definitions/HookRunStatus"
},
"statusMessage": {
"type": [
"string",
"null"
]
}
},
"required": [
"displayOrder",
"entries",
"eventName",
"executionMode",
"handlerType",
"id",
"scope",
"sourcePath",
"startedAt",
"status"
],
"type": "object"
},
"HookScope": {
"enum": [
"thread",
"turn"
],
"type": "string"
},
"HookStartedNotification": {
"properties": {
"run": {
"$ref": "#/definitions/HookRunSummary"
},
"threadId": {
"type": "string"
},
"turnId": {
"type": [
"string",
"null"
]
}
},
"required": [
"run",
"threadId"
],
"type": "object"
},
"ItemCompletedNotification": {
"properties": {
"item": {
@@ -1397,56 +1075,6 @@
],
"type": "object"
},
"ItemGuardianApprovalReviewCompletedNotification": {
"description": "[UNSTABLE] Temporary notification payload for guardian automatic approval review. This shape is expected to change soon.\n\nTODO(ccunningham): Attach guardian review state to the reviewed tool item's lifecycle instead of sending separate standalone review notifications so the app-server API can persist and replay review state via `thread/read`.",
"properties": {
"action": true,
"review": {
"$ref": "#/definitions/GuardianApprovalReview"
},
"targetItemId": {
"type": "string"
},
"threadId": {
"type": "string"
},
"turnId": {
"type": "string"
}
},
"required": [
"review",
"targetItemId",
"threadId",
"turnId"
],
"type": "object"
},
"ItemGuardianApprovalReviewStartedNotification": {
"description": "[UNSTABLE] Temporary notification payload for guardian automatic approval review. This shape is expected to change soon.\n\nTODO(ccunningham): Attach guardian review state to the reviewed tool item's lifecycle instead of sending separate standalone review notifications so the app-server API can persist and replay review state via `thread/read`.",
"properties": {
"action": true,
"review": {
"$ref": "#/definitions/GuardianApprovalReview"
},
"targetItemId": {
"type": "string"
},
"threadId": {
"type": "string"
},
"turnId": {
"type": "string"
}
},
"required": [
"review",
"targetItemId",
"threadId",
"turnId"
],
"type": "object"
},
"ItemStartedNotification": {
"properties": {
"item": {
@@ -1487,36 +1115,6 @@
],
"type": "object"
},
"McpServerStartupState": {
"enum": [
"starting",
"ready",
"failed",
"cancelled"
],
"type": "string"
},
"McpServerStatusUpdatedNotification": {
"properties": {
"error": {
"type": [
"string",
"null"
]
},
"name": {
"type": "string"
},
"status": {
"$ref": "#/definitions/McpServerStartupState"
}
},
"required": [
"name",
"status"
],
"type": "object"
},
"McpToolCallError": {
"properties": {
"message": {
@@ -1572,54 +1170,6 @@
],
"type": "string"
},
"MemoryCitation": {
"properties": {
"entries": {
"items": {
"$ref": "#/definitions/MemoryCitationEntry"
},
"type": "array"
},
"threadIds": {
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"entries",
"threadIds"
],
"type": "object"
},
"MemoryCitationEntry": {
"properties": {
"lineEnd": {
"format": "uint32",
"minimum": 0.0,
"type": "integer"
},
"lineStart": {
"format": "uint32",
"minimum": 0.0,
"type": "integer"
},
"note": {
"type": "string"
},
"path": {
"type": "string"
}
},
"required": [
"lineEnd",
"lineStart",
"note",
"path"
],
"type": "object"
},
"MessagePhase": {
"description": "Classifies an assistant message as interim commentary or final answer text.\n\nProviders do not emit this consistently, so callers must treat `None` as \"phase unknown\" and keep compatibility behavior for legacy models.",
"oneOf": [
@@ -1672,13 +1222,6 @@
],
"type": "object"
},
"NonSteerableTurnKind": {
"enum": [
"review",
"compact"
],
"type": "string"
},
"PatchApplyStatus": {
"enum": [
"inProgress",
@@ -1777,9 +1320,7 @@
"plus",
"pro",
"team",
"self_serve_business_usage_based",
"business",
"enterprise_cbp_usage_based",
"enterprise",
"edu",
"unknown"
@@ -1869,25 +1410,6 @@
],
"type": "object"
},
"RealtimeConversationVersion": {
"enum": [
"v1",
"v2"
],
"type": "string"
},
"ReasoningEffort": {
"description": "See https://platform.openai.com/docs/guides/reasoning?api-mode=responses#get-started-with-reasoning",
"enum": [
"none",
"minimal",
"low",
"medium",
"high",
"xhigh"
],
"type": "string"
},
"ReasoningSummaryPartAddedNotification": {
"properties": {
"itemId": {
@@ -2006,19 +1528,6 @@
],
"type": "string"
},
{
"additionalProperties": false,
"properties": {
"custom": {
"type": "string"
}
},
"required": [
"custom"
],
"title": "CustomSessionSource",
"type": "object"
},
{
"additionalProperties": false,
"properties": {
@@ -2060,17 +1569,6 @@
"null"
]
},
"agent_path": {
"anyOf": [
{
"$ref": "#/definitions/AgentPath"
},
{
"type": "null"
}
],
"default": null
},
"agent_role": {
"default": null,
"type": [
@@ -2376,47 +1874,9 @@
},
{
"properties": {
"fragments": {
"items": {
"$ref": "#/definitions/HookPromptFragment"
},
"type": "array"
},
"id": {
"type": "string"
},
"type": {
"enum": [
"hookPrompt"
],
"title": "HookPromptThreadItemType",
"type": "string"
}
},
"required": [
"fragments",
"id",
"type"
],
"title": "HookPromptThreadItem",
"type": "object"
},
{
"properties": {
"id": {
"type": "string"
},
"memoryCitation": {
"anyOf": [
{
"$ref": "#/definitions/MemoryCitation"
},
{
"type": "null"
}
],
"default": null
},
"phase": {
"anyOf": [
{
@@ -2556,14 +2016,6 @@
"null"
]
},
"source": {
"allOf": [
{
"$ref": "#/definitions/CommandExecutionSource"
}
],
"default": "agent"
},
"status": {
"$ref": "#/definitions/CommandExecutionStatus"
},
@@ -2745,13 +2197,6 @@
"description": "Unique identifier for this collab tool call.",
"type": "string"
},
"model": {
"description": "Model requested for the spawned agent, when applicable.",
"type": [
"string",
"null"
]
},
"prompt": {
"description": "Prompt text sent as part of the collab tool call, when available.",
"type": [
@@ -2759,17 +2204,6 @@
"null"
]
},
"reasoningEffort": {
"anyOf": [
{
"$ref": "#/definitions/ReasoningEffort"
},
{
"type": "null"
}
],
"description": "Reasoning effort requested for the spawned agent, when applicable."
},
"receiverThreadIds": {
"description": "Thread ID of the receiving agent, when applicable. In case of spawn operation, this corresponds to the newly spawned agent.",
"items": {
@@ -2889,12 +2323,6 @@
"null"
]
},
"savedPath": {
"type": [
"string",
"null"
]
},
"status": {
"type": "string"
},
@@ -3008,12 +2436,6 @@
"data": {
"type": "string"
},
"itemId": {
"type": [
"string",
"null"
]
},
"numChannels": {
"format": "uint16",
"minimum": 0.0,
@@ -3115,33 +2537,9 @@
},
"threadId": {
"type": "string"
},
"version": {
"$ref": "#/definitions/RealtimeConversationVersion"
}
},
"required": [
"threadId",
"version"
],
"type": "object"
},
"ThreadRealtimeTranscriptUpdatedNotification": {
"description": "EXPERIMENTAL - flat transcript delta emitted whenever realtime transcript text changes.",
"properties": {
"role": {
"type": "string"
},
"text": {
"type": "string"
},
"threadId": {
"type": "string"
}
},
"required": [
"role",
"text",
"threadId"
],
"type": "object"
@@ -3980,26 +3378,6 @@
"title": "Turn/startedNotification",
"type": "object"
},
{
"properties": {
"method": {
"enum": [
"hook/started"
],
"title": "Hook/startedNotificationMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/HookStartedNotification"
}
},
"required": [
"method",
"params"
],
"title": "Hook/startedNotification",
"type": "object"
},
{
"properties": {
"method": {
@@ -4020,26 +3398,6 @@
"title": "Turn/completedNotification",
"type": "object"
},
{
"properties": {
"method": {
"enum": [
"hook/completed"
],
"title": "Hook/completedNotificationMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/HookCompletedNotification"
}
},
"required": [
"method",
"params"
],
"title": "Hook/completedNotification",
"type": "object"
},
{
"properties": {
"method": {
@@ -4100,46 +3458,6 @@
"title": "Item/startedNotification",
"type": "object"
},
{
"properties": {
"method": {
"enum": [
"item/autoApprovalReview/started"
],
"title": "Item/autoApprovalReview/startedNotificationMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/ItemGuardianApprovalReviewStartedNotification"
}
},
"required": [
"method",
"params"
],
"title": "Item/autoApprovalReview/startedNotification",
"type": "object"
},
{
"properties": {
"method": {
"enum": [
"item/autoApprovalReview/completed"
],
"title": "Item/autoApprovalReview/completedNotificationMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/ItemGuardianApprovalReviewCompletedNotification"
}
},
"required": [
"method",
"params"
],
"title": "Item/autoApprovalReview/completedNotification",
"type": "object"
},
{
"properties": {
"method": {
@@ -4342,26 +3660,6 @@
"title": "McpServer/oauthLogin/completedNotification",
"type": "object"
},
{
"properties": {
"method": {
"enum": [
"mcpServer/startupStatus/updated"
],
"title": "McpServer/startupStatus/updatedNotificationMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/McpServerStatusUpdatedNotification"
}
},
"required": [
"method",
"params"
],
"title": "McpServer/startupStatus/updatedNotification",
"type": "object"
},
{
"properties": {
"method": {
@@ -4422,26 +3720,6 @@
"title": "App/list/updatedNotification",
"type": "object"
},
{
"properties": {
"method": {
"enum": [
"fs/changed"
],
"title": "Fs/changedNotificationMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/FsChangedNotification"
}
},
"required": [
"method",
"params"
],
"title": "Fs/changedNotification",
"type": "object"
},
{
"properties": {
"method": {
@@ -4663,26 +3941,6 @@
"title": "Thread/realtime/itemAddedNotification",
"type": "object"
},
{
"properties": {
"method": {
"enum": [
"thread/realtime/transcriptUpdated"
],
"title": "Thread/realtime/transcriptUpdatedNotificationMethod",
"type": "string"
},
"params": {
"$ref": "#/definitions/ThreadRealtimeTranscriptUpdatedNotification"
}
},
"required": [
"method",
"params"
],
"title": "Thread/realtime/transcriptUpdatedNotification",
"type": "object"
},
{
"properties": {
"method": {

View File

@@ -28,6 +28,29 @@
},
"type": "object"
},
"AdditionalMacOsPermissions": {
"properties": {
"accessibility": {
"type": "boolean"
},
"automations": {
"$ref": "#/definitions/MacOsAutomationPermission"
},
"calendar": {
"type": "boolean"
},
"preferences": {
"$ref": "#/definitions/MacOsPreferencesPermission"
}
},
"required": [
"accessibility",
"automations",
"calendar",
"preferences"
],
"type": "object"
},
"AdditionalNetworkPermissions": {
"properties": {
"enabled": {
@@ -51,6 +74,16 @@
}
]
},
"macos": {
"anyOf": [
{
"$ref": "#/definitions/AdditionalMacOsPermissions"
},
{
"type": "null"
}
]
},
"network": {
"anyOf": [
{
@@ -407,6 +440,17 @@
],
"type": "object"
},
"CommandExecutionRequestApprovalSkillMetadata": {
"properties": {
"pathToSkillsMd": {
"type": "string"
}
},
"required": [
"pathToSkillsMd"
],
"type": "object"
},
"DynamicToolCallParams": {
"properties": {
"arguments": true,
@@ -582,6 +626,41 @@
],
"type": "object"
},
"MacOsAutomationPermission": {
"oneOf": [
{
"enum": [
"none",
"all"
],
"type": "string"
},
{
"additionalProperties": false,
"properties": {
"bundle_ids": {
"items": {
"type": "string"
},
"type": "array"
}
},
"required": [
"bundle_ids"
],
"title": "BundleIdsMacOsAutomationPermission",
"type": "object"
}
]
},
"MacOsPreferencesPermission": {
"enum": [
"none",
"read_only",
"read_write"
],
"type": "string"
},
"McpElicitationArrayType": {
"enum": [
"array"
@@ -1350,7 +1429,7 @@
"type": "string"
},
"permissions": {
"$ref": "#/definitions/RequestPermissionProfile"
"$ref": "#/definitions/AdditionalPermissionProfile"
},
"reason": {
"type": [
@@ -1384,32 +1463,6 @@
}
]
},
"RequestPermissionProfile": {
"additionalProperties": false,
"properties": {
"fileSystem": {
"anyOf": [
{
"$ref": "#/definitions/AdditionalFileSystemPermissions"
},
{
"type": "null"
}
]
},
"network": {
"anyOf": [
{
"$ref": "#/definitions/AdditionalNetworkPermissions"
},
{
"type": "null"
}
]
}
},
"type": "object"
},
"ThreadId": {
"type": "string"
},

View File

@@ -31,7 +31,7 @@
"type": "boolean"
},
"optOutNotificationMethods": {
"description": "Exact notification method names that should be suppressed for this connection (for example `thread/started`).",
"description": "Exact notification method names that should be suppressed for this connection (for example `codex/event/session_configured`).",
"items": {
"type": "string"
},

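The change above only swaps the example method name in the `optOutNotificationMethods` description. For context, a sketch of how a client might populate the field (hypothetical params fragment):

```rust
use serde_json::json;

fn main() {
    // Hypothetical initialize params fragment: method names are matched
    // exactly, so suppressing a notification means listing its full string.
    let params = json!({
        "optOutNotificationMethods": ["codex/event/session_configured"]
    });
    println!("{params}");
}
```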
View File

@@ -1,36 +1,11 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"AbsolutePathBuf": {
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
}
},
"properties": {
"codexHome": {
"allOf": [
{
"$ref": "#/definitions/AbsolutePathBuf"
}
],
"description": "Absolute path to the server's $CODEX_HOME directory."
},
"platformFamily": {
"description": "Platform family for the running app-server target, for example `\"unix\"` or `\"windows\"`.",
"type": "string"
},
"platformOs": {
"description": "Operating system for the running app-server target, for example `\"macos\"`, `\"linux\"`, or `\"windows\"`.",
"type": "string"
},
"userAgent": {
"type": "string"
}
},
"required": [
"codexHome",
"platformFamily",
"platformOs",
"userAgent"
],
"title": "InitializeResponse",

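`InitializeResponse` loses `codexHome`, `platformFamily`, and `platformOs`, leaving `userAgent` as its only required field. A sketch of a conforming response (the value is illustrative):

```rust
use serde_json::json;

fn main() {
    // The trimmed InitializeResponse: userAgent is the only required field.
    let response = json!({ "userAgent": "codex-app-server/0.113.0" }); // illustrative value
    println!("{response}");
}
```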
View File

@@ -29,9 +29,7 @@
"plus",
"pro",
"team",
"self_serve_business_usage_based",
"business",
"enterprise_cbp_usage_based",
"enterprise",
"edu",
"unknown"

View File

@@ -34,9 +34,7 @@
"plus",
"pro",
"team",
"self_serve_business_usage_based",
"business",
"enterprise_cbp_usage_based",
"enterprise",
"edu",
"unknown"

View File

@@ -96,14 +96,6 @@
"AppToolsConfig": {
"type": "object"
},
"ApprovalsReviewer": {
"description": "Configures who approval requests are routed to for review. Examples include sandbox escapes, blocked network access, MCP approval prompts, and ARC escalations. Defaults to `user`. `guardian_subagent` uses a carefully prompted subagent to gather relevant context and apply a risk-based decision framework before approving or denying the request.",
"enum": [
"user",
"guardian_subagent"
],
"type": "string"
},
"AppsConfig": {
"properties": {
"_default": {
@@ -151,24 +143,16 @@
{
"additionalProperties": false,
"properties": {
"granular": {
"reject": {
"properties": {
"mcp_elicitations": {
"type": "boolean"
},
"request_permissions": {
"default": false,
"type": "boolean"
},
"rules": {
"type": "boolean"
},
"sandbox_approval": {
"type": "boolean"
},
"skill_approval": {
"default": false,
"type": "boolean"
}
},
"required": [
@@ -180,9 +164,9 @@
}
},
"required": [
"granular"
"reject"
],
"title": "GranularAskForApproval",
"title": "RejectAskForApproval",
"type": "object"
}
]
@@ -210,17 +194,6 @@
}
]
},
"approvals_reviewer": {
"anyOf": [
{
"$ref": "#/definitions/ApprovalsReviewer"
},
{
"type": "null"
}
],
"description": "[UNSTABLE] Optional default for where approval requests are routed for review."
},
"compact_prompt": {
"type": [
"string",
@@ -597,17 +570,6 @@
}
]
},
"approvals_reviewer": {
"anyOf": [
{
"$ref": "#/definitions/ApprovalsReviewer"
},
{
"type": "null"
}
],
"description": "[UNSTABLE] Optional profile-level override for where approval requests are routed for review. If omitted, the enclosing config default is used."
},
"chatgpt_base_url": {
"type": [
"string",

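The `granular` approval wrapper above becomes `reject`, and the `request_permissions` and `skill_approval` toggles drop out, so the remaining booleans describe which approval categories are auto-rejected rather than individually routed. A sketch of a value under the new `RejectAskForApproval` shape (assuming the surviving keys are `mcp_elicitations`, `rules`, and `sandbox_approval`, which is how this unmarked diff reads):

```rust
use serde_json::json;

fn main() {
    // Hypothetical RejectAskForApproval value: each true entry is a category
    // of approval request that is rejected outright instead of surfaced.
    let ask_for_approval = json!({
        "reject": {
            "mcp_elicitations": false,
            "rules": false,
            "sandbox_approval": true
        }
    });
    println!("{ask_for_approval}");
}
```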
View File

@@ -15,24 +15,16 @@
{
"additionalProperties": false,
"properties": {
"granular": {
"reject": {
"properties": {
"mcp_elicitations": {
"type": "boolean"
},
"request_permissions": {
"default": false,
"type": "boolean"
},
"rules": {
"type": "boolean"
},
"sandbox_approval": {
"type": "boolean"
},
"skill_approval": {
"default": false,
"type": "boolean"
}
},
"required": [
@@ -44,9 +36,9 @@
}
},
"required": [
"granular"
"reject"
],
"title": "GranularAskForApproval",
"title": "RejectAskForApproval",
"type": "object"
}
]
@@ -102,13 +94,6 @@
},
"type": "object"
},
"NetworkDomainPermission": {
"enum": [
"allow",
"deny"
],
"type": "string"
},
"NetworkRequirements": {
"properties": {
"allowLocalBinding": {
@@ -118,7 +103,6 @@
]
},
"allowUnixSockets": {
"description": "Legacy compatibility view derived from `unix_sockets`.",
"items": {
"type": "string"
},
@@ -134,7 +118,6 @@
]
},
"allowedDomains": {
"description": "Legacy compatibility view derived from `domains`.",
"items": {
"type": "string"
},
@@ -156,7 +139,6 @@
]
},
"deniedDomains": {
"description": "Legacy compatibility view derived from `domains`.",
"items": {
"type": "string"
},
@@ -165,16 +147,6 @@
"null"
]
},
"domains": {
"additionalProperties": {
"$ref": "#/definitions/NetworkDomainPermission"
},
"description": "Canonical network permission map for `experimental_network`.",
"type": [
"object",
"null"
]
},
"enabled": {
"type": [
"boolean",
@@ -189,13 +161,6 @@
"null"
]
},
"managedAllowedDomainsOnly": {
"description": "When true, only managed allowlist entries are respected while managed network enforcement is active.",
"type": [
"boolean",
"null"
]
},
"socksPort": {
"format": "uint16",
"minimum": 0.0,
@@ -203,27 +168,10 @@
"integer",
"null"
]
},
"unixSockets": {
"additionalProperties": {
"$ref": "#/definitions/NetworkUnixSocketPermission"
},
"description": "Canonical unix socket permission map for `experimental_network`.",
"type": [
"object",
"null"
]
}
},
"type": "object"
},
"NetworkUnixSocketPermission": {
"enum": [
"allow",
"none"
],
"type": "string"
},
"ResidencyRequirement": {
"enum": [
"us"

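This hunk removes the canonical `domains`/`unixSockets` permission maps (and `managedAllowedDomainsOnly`) from `NetworkRequirements`, reverting to the flat allow/deny lists. A sketch of a payload under the reverted shape (domains illustrative):

```rust
use serde_json::json;

fn main() {
    // Hypothetical NetworkRequirements payload after the rollback: plain
    // allow/deny lists instead of the per-domain permission map.
    let network = json!({
        "enabled": true,
        "allowedDomains": ["api.openai.com"],
        "deniedDomains": ["tracking.example.com"]
    });
    println!("{network}");
}
```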
View File

@@ -112,38 +112,9 @@
],
"title": "ResponseTooManyFailedAttemptsCodexErrorInfo",
"type": "object"
},
{
"additionalProperties": false,
"description": "Returned when `turn/start` or `turn/steer` is submitted while the current active turn cannot accept same-turn steering, for example `/review` or manual `/compact`.",
"properties": {
"activeTurnNotSteerable": {
"properties": {
"turnKind": {
"$ref": "#/definitions/NonSteerableTurnKind"
}
},
"required": [
"turnKind"
],
"type": "object"
}
},
"required": [
"activeTurnNotSteerable"
],
"title": "ActiveTurnNotSteerableCodexErrorInfo",
"type": "object"
}
]
},
"NonSteerableTurnKind": {
"enum": [
"review",
"compact"
],
"type": "string"
},
"TurnError": {
"properties": {
"additionalDetails": {

View File

@@ -1,17 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"enablement": {
"additionalProperties": {
"type": "boolean"
},
"description": "Process-wide runtime feature enablement keyed by canonical feature name.\n\nOnly named features are updated. Omitted features are left unchanged. Send an empty map for a no-op.",
"type": "object"
}
},
"required": [
"enablement"
],
"title": "ExperimentalFeatureEnablementSetParams",
"type": "object"
}

View File

@@ -1,17 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"properties": {
"enablement": {
"additionalProperties": {
"type": "boolean"
},
"description": "Feature enablement entries updated by this request.",
"type": "object"
}
},
"required": [
"enablement"
],
"title": "ExperimentalFeatureEnablementSetResponse",
"type": "object"
}

View File

@@ -1,29 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"AbsolutePathBuf": {
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
}
},
"description": "Filesystem watch notification emitted for `fs/watch` subscribers.",
"properties": {
"changedPaths": {
"description": "File or directory paths associated with this event.",
"items": {
"$ref": "#/definitions/AbsolutePathBuf"
},
"type": "array"
},
"watchId": {
"description": "Watch identifier returned by `fs/watch`.",
"type": "string"
}
},
"required": [
"changedPaths",
"watchId"
],
"title": "FsChangedNotification",
"type": "object"
}

View File

@@ -1,38 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"AbsolutePathBuf": {
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
}
},
"description": "Copy a file or directory tree on the host filesystem.",
"properties": {
"destinationPath": {
"allOf": [
{
"$ref": "#/definitions/AbsolutePathBuf"
}
],
"description": "Absolute destination path."
},
"recursive": {
"description": "Required for directory copies; ignored for file copies.",
"type": "boolean"
},
"sourcePath": {
"allOf": [
{
"$ref": "#/definitions/AbsolutePathBuf"
}
],
"description": "Absolute source path."
}
},
"required": [
"destinationPath",
"sourcePath"
],
"title": "FsCopyParams",
"type": "object"
}

View File

@@ -1,6 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Successful response for `fs/copy`.",
"title": "FsCopyResponse",
"type": "object"
}

View File

@@ -1,32 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"AbsolutePathBuf": {
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
}
},
"description": "Create a directory on the host filesystem.",
"properties": {
"path": {
"allOf": [
{
"$ref": "#/definitions/AbsolutePathBuf"
}
],
"description": "Absolute directory path to create."
},
"recursive": {
"description": "Whether parent directories should also be created. Defaults to `true`.",
"type": [
"boolean",
"null"
]
}
},
"required": [
"path"
],
"title": "FsCreateDirectoryParams",
"type": "object"
}

View File

@@ -1,6 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Successful response for `fs/createDirectory`.",
"title": "FsCreateDirectoryResponse",
"type": "object"
}

View File

@@ -1,25 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"AbsolutePathBuf": {
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
}
},
"description": "Request metadata for an absolute path.",
"properties": {
"path": {
"allOf": [
{
"$ref": "#/definitions/AbsolutePathBuf"
}
],
"description": "Absolute path to inspect."
}
},
"required": [
"path"
],
"title": "FsGetMetadataParams",
"type": "object"
}

View File

@@ -1,32 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Metadata returned by `fs/getMetadata`.",
"properties": {
"createdAtMs": {
"description": "File creation time in Unix milliseconds when available, otherwise `0`.",
"format": "int64",
"type": "integer"
},
"isDirectory": {
"description": "Whether the path currently resolves to a directory.",
"type": "boolean"
},
"isFile": {
"description": "Whether the path currently resolves to a regular file.",
"type": "boolean"
},
"modifiedAtMs": {
"description": "File modification time in Unix milliseconds when available, otherwise `0`.",
"format": "int64",
"type": "integer"
}
},
"required": [
"createdAtMs",
"isDirectory",
"isFile",
"modifiedAtMs"
],
"title": "FsGetMetadataResponse",
"type": "object"
}

View File

@@ -1,25 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"AbsolutePathBuf": {
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
}
},
"description": "List direct child names for a directory.",
"properties": {
"path": {
"allOf": [
{
"$ref": "#/definitions/AbsolutePathBuf"
}
],
"description": "Absolute directory path to read."
}
},
"required": [
"path"
],
"title": "FsReadDirectoryParams",
"type": "object"
}

View File

@@ -1,43 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"FsReadDirectoryEntry": {
"description": "A directory entry returned by `fs/readDirectory`.",
"properties": {
"fileName": {
"description": "Direct child entry name only, not an absolute or relative path.",
"type": "string"
},
"isDirectory": {
"description": "Whether this entry resolves to a directory.",
"type": "boolean"
},
"isFile": {
"description": "Whether this entry resolves to a regular file.",
"type": "boolean"
}
},
"required": [
"fileName",
"isDirectory",
"isFile"
],
"type": "object"
}
},
"description": "Directory entries returned by `fs/readDirectory`.",
"properties": {
"entries": {
"description": "Direct child entries in the requested directory.",
"items": {
"$ref": "#/definitions/FsReadDirectoryEntry"
},
"type": "array"
}
},
"required": [
"entries"
],
"title": "FsReadDirectoryResponse",
"type": "object"
}

View File

@@ -1,25 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"AbsolutePathBuf": {
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
}
},
"description": "Read a file from the host filesystem.",
"properties": {
"path": {
"allOf": [
{
"$ref": "#/definitions/AbsolutePathBuf"
}
],
"description": "Absolute path to read."
}
},
"required": [
"path"
],
"title": "FsReadFileParams",
"type": "object"
}

View File

@@ -1,15 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Base64-encoded file contents returned by `fs/readFile`.",
"properties": {
"dataBase64": {
"description": "File contents encoded as base64.",
"type": "string"
}
},
"required": [
"dataBase64"
],
"title": "FsReadFileResponse",
"type": "object"
}

View File

@@ -1,39 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"AbsolutePathBuf": {
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
}
},
"description": "Remove a file or directory tree from the host filesystem.",
"properties": {
"force": {
"description": "Whether missing paths should be ignored. Defaults to `true`.",
"type": [
"boolean",
"null"
]
},
"path": {
"allOf": [
{
"$ref": "#/definitions/AbsolutePathBuf"
}
],
"description": "Absolute path to remove."
},
"recursive": {
"description": "Whether directory removal should recurse. Defaults to `true`.",
"type": [
"boolean",
"null"
]
}
},
"required": [
"path"
],
"title": "FsRemoveParams",
"type": "object"
}

View File

@@ -1,6 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Successful response for `fs/remove`.",
"title": "FsRemoveResponse",
"type": "object"
}

View File

@@ -1,15 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Stop filesystem watch notifications for a prior `fs/watch`.",
"properties": {
"watchId": {
"description": "Watch identifier returned by `fs/watch`.",
"type": "string"
}
},
"required": [
"watchId"
],
"title": "FsUnwatchParams",
"type": "object"
}

View File

@@ -1,6 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"description": "Successful response for `fs/unwatch`.",
"title": "FsUnwatchResponse",
"type": "object"
}

View File

@@ -1,25 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"definitions": {
"AbsolutePathBuf": {
"description": "A path that is guaranteed to be absolute and normalized (though it is not guaranteed to be canonicalized or exist on the filesystem).\n\nIMPORTANT: When deserializing an `AbsolutePathBuf`, a base path must be set using [AbsolutePathBufGuard::new]. If no base path is set, the deserialization will fail unless the path being deserialized is already absolute.",
"type": "string"
}
},
"description": "Start filesystem watch notifications for an absolute path.",
"properties": {
"path": {
"allOf": [
{
"$ref": "#/definitions/AbsolutePathBuf"
}
],
"description": "Absolute file or directory path to watch."
}
},
"required": [
"path"
],
"title": "FsWatchParams",
"type": "object"
}

Some files were not shown because too many files have changed in this diff.