Compare commits


29 Commits

Author SHA1 Message Date
Ahmed Ibrahim
f3e16de572 Update approval mode run expectations
Co-authored-by: Codex <noreply@openai.com>
2026-05-10 13:06:56 +03:00
Ahmed Ibrahim
70053fbe42 Preserve approval settings by default
Co-authored-by: Codex <noreply@openai.com>
2026-05-10 13:05:01 +03:00
Ahmed Ibrahim
3b0b5a58e1 Reduce approval mode test mocking
Replace fake sync and async client approval tests with direct serialization checks using generated TurnStartParams, while keeping existing run-path coverage for the default behavior.

Co-authored-by: Codex <noreply@openai.com>
2026-05-10 12:45:11 +03:00
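The "direct serialization checks" testing style this commit moves to can be sketched without the generated models. `TurnStartParams` below is a hypothetical stand-in for the generated class, and the snake_case-to-camelCase aliasing mirrors what the SDK README in this compare describes for its Pydantic wire models; the helper names are illustrative.

```python
from dataclasses import asdict, dataclass


def to_camel(name: str) -> str:
    """snake_case -> camelCase, e.g. approval_policy -> approvalPolicy."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)


@dataclass
class TurnStartParams:  # hypothetical stand-in for the generated model
    thread_id: str
    approval_policy: str

    def to_wire(self) -> dict[str, str]:
        # Serialize snake_case fields back to the camelCase wire format.
        return {to_camel(key): value for key, value in asdict(self).items()}


# A serialization check asserts on the wire payload directly instead of
# driving a mocked sync/async client through a fake turn.
params = TurnStartParams(thread_id="thr_123", approval_policy="never")
wire = params.to_wire()
```

The test then compares `wire` against the expected camelCase dict, with no client or transport mocking involved.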
Ahmed Ibrahim
934eda61c3 Make approval mode mapping exhaustive
Use an explicit match over ApprovalMode values while keeping a separate runtime validation error for non-enum inputs.

Co-authored-by: Codex <noreply@openai.com>
2026-05-10 12:36:52 +03:00
Ahmed Ibrahim
bfd11aa1fc Focus Python SDK approval mode
Default high-level thread and turn starts to auto-review, keep deny_all as the explicit opt-out, and remove the generated AskForApproval alias customization.

Co-authored-by: Codex <noreply@openai.com>
2026-05-10 12:06:14 +03:00
Ahmed Ibrahim
800aa1d6ba Add approval mode contract tests
Cover the exact public ApprovalMode values and ensure unsupported modes fail before sync or async high-level APIs can issue client requests.

Co-authored-by: Codex <noreply@openai.com>
2026-05-10 11:58:13 +03:00
Ahmed Ibrahim
ffe6e44a03 Add high-level Python SDK approval mode
Expose approval_mode with deny_all and auto_review options on the high-level Python SDK, and map those choices to generated app-server approval params internally.

Update examples, docs, notebooks, and public API tests to use the new mode instead of raw generated approval fields.

Co-authored-by: Codex <noreply@openai.com>
2026-05-10 11:44:34 +03:00
Ahmed Ibrahim
7edbdc555c Add approval callback TODO
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 13:49:35 +03:00
Ahmed Ibrahim
d80a43263f Default Python SDK approval policy to never
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 12:03:09 +03:00
Ahmed Ibrahim
78c0d5ca3d Rename Python SDK package to openai-codex
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 11:38:22 +03:00
Ahmed Ibrahim
9306e60848 Define Python SDK public type surface
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 11:34:46 +03:00
Ahmed Ibrahim
8d7a5c27c1 Keep Python SDK type exports
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:39:52 +03:00
Ahmed Ibrahim
692c08faf9 Narrow Python SDK root exports
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:35:44 +03:00
Ahmed Ibrahim
8b8e868140 Document Python SDK CI job
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:24:29 +03:00
Ahmed Ibrahim
2654cc299e Run Python SDK tests in CI
Add a separate Python SDK runner that installs the pinned musl runtime wheel in an Alpine Python container and runs the SDK pytest suite in parallel with existing SDK checks.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:24:17 +03:00
Ahmed Ibrahim
242ca6d8fd Document pinned schema generation helpers
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:24:00 +03:00
Ahmed Ibrahim
b7635f4d77 Generate Python SDK types from pinned runtime
Make the SDK artifact generator fetch schema from the pinned runtime package, regenerate the checked-in Python types from that schema, and assert generated artifacts stay up to date.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:23:41 +03:00
Ahmed Ibrahim
c24694bdb0 Document Python runtime pinning helpers
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:23:11 +03:00
Ahmed Ibrahim
6e10973c78 Pin Python SDK runtime dependency
Make the Python SDK declare its published runtime package dependency directly and resolve the runtime version from that pin instead of inferring it from the SDK package version.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:23:11 +03:00
Ahmed Ibrahim
becbd2a127 Document SDK turn routing helpers
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:23:06 +03:00
Ahmed Ibrahim
11e31d7d38 Fix Python runtime wheel release args
Build the stage-runtime command as a single non-empty Bash array and append Linux resource binaries conditionally so macOS runners do not expand an empty optional array under set -u.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:03 +03:00
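The `set -u` pitfall this commit works around: on the Bash 3.2 that macOS runners ship, expanding an empty array with `"${arr[@]}"` aborts as an unbound-variable error (Bash 4.4+ relaxed this). A sketch of the conditional-append pattern, with an illustrative flag and path:

```shell
set -euo pipefail

# Build the stage-runtime command as a single non-empty array.
cmd=(python scripts/update_sdk_artifacts.py stage-runtime)

# Collect optional Linux-only resource binaries separately...
extra=()
if [ "$(uname -s)" = "Linux" ]; then
  extra+=(--bwrap /path/to/bwrap)  # illustrative flag and path
fi

# ...and append only when non-empty. Checking the length is safe on
# old Bash; expanding an empty "${extra[@]}" under `set -u` is not.
if [ "${#extra[@]}" -gt 0 ]; then
  cmd+=("${extra[@]}")
fi

echo "${cmd[*]}"
```

This keeps one code path for both runner platforms without ever expanding an empty optional array.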
Ahmed Ibrahim
1d0023776f Build Python runtime wheels in virtualenvs
Avoid installing build into runner-managed Python environments when release jobs build runtime wheels.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:03 +03:00
Ahmed Ibrahim
9b54951688 Make Python runtime publish non-blocking
Allow the Rust release workflow to finish even if the new Python runtime PyPI publish job needs follow-up.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:02 +03:00
Ahmed Ibrahim
3a3e1b477c Pin PyPI publish action to release tag commit
Use the v1.13.0 commit for the PyPI publish action so the pinned action reference has a clear release version.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:02 +03:00
Ahmed Ibrahim
356c6797b8 Use PyPI environment for runtime publishing
Set the Python runtime publish job environment to match the PyPI trusted publisher configuration.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:02 +03:00
Ahmed Ibrahim
bd14ac4758 Bundle Linux bwrap in Python runtime wheels
Pass the release bwrap binary into Linux runtime wheel staging so PyPI installs preserve sandbox fallback behavior.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:02 +03:00
Ahmed Ibrahim
d764740e6f Explain Windows runtime wheel helper packaging
Document why the release workflow includes sandbox helper executables in Windows Python runtime wheels.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:02 +03:00
Ahmed Ibrahim
29e1c96f72 Publish Python runtime wheels on release
Build platform-specific openai-codex-cli-bin wheels from signed release binaries and publish them to PyPI using trusted publishing.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:02 +03:00
Ahmed Ibrahim
ebe75bb683 Route Python SDK turn notifications by ID (#21778)
## Why

The Python SDK previously protected the stdio transport with a single
active turn-consumer guard. That avoided competing reads from stdout,
but it also meant one `Codex`/`AsyncCodex` client could not stream
multiple active turns at the same time. Notifications could also arrive
before the caller received a `TurnHandle` and registered for streaming,
so the SDK needed an explicit routing layer instead of letting
individual API calls read directly from the shared transport.

## What Changed

- Added a private `MessageRouter` that owns per-request response queues,
per-turn notification queues, pending turn-notification replay, and
global notification delivery behind a single stdout reader thread.
- Generated typed notification routing metadata so turn IDs come from
known payload shapes instead of router-side attribute guessing, with
explicit fallback handling for unknown notification payloads.
- Updated sync and async turn streaming so `TurnHandle.stream()`/`run()`
and `stream_text()` consume only notifications for their own turn ID,
while `AsyncAppServerClient` no longer serializes all transport calls
behind one async lock.
- Cleared pending turn-notification buffers when unregistered turns
complete so never-consumed turn handles do not leave stale queues
behind.
- Removed the internal stream-until helper now that turn completion
waiting can register directly with routed turn notifications.
- Updated Python SDK docs and focused tests for concurrent transport
calls, interleaved turn routing, buffered early notifications, unknown
notification routing, async delegation, and routed turn completion
behavior.
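The routing behavior in the bullets above — per-turn queues, buffering of notifications that arrive before a handle registers, replay on registration, and buffer cleanup for never-consumed turns — can be sketched roughly like this. This is a simplified single-threaded model: the real router sits behind a stdout reader thread and also owns per-request response queues, and the method names beyond `MessageRouter` are illustrative.

```python
from collections import defaultdict, deque


class MessageRouter:
    """Route notifications to per-turn queues keyed by turn ID."""

    def __init__(self) -> None:
        self._registered: dict[str, deque] = {}
        # Buffer notifications that arrive before the caller receives a
        # TurnHandle and registers for streaming.
        self._pending: defaultdict[str, deque] = defaultdict(deque)
        # Fallback delivery for payloads with no recognized turn ID.
        self.global_notifications: deque = deque()

    def register_turn(self, turn_id: str) -> deque:
        # Replay any notifications buffered before registration.
        queue = self._pending.pop(turn_id, deque())
        self._registered[turn_id] = queue
        return queue

    def dispatch(self, notification: dict) -> None:
        turn_id = notification.get("turnId")
        if turn_id is None:
            self.global_notifications.append(notification)
        elif turn_id in self._registered:
            self._registered[turn_id].append(notification)
        else:
            self._pending[turn_id].append(notification)

    def turn_completed(self, turn_id: str) -> None:
        # Clear buffers so never-consumed turn handles do not leave
        # stale queues behind.
        self._pending.pop(turn_id, None)
        self._registered.pop(turn_id, None)
```

With this shape, `TurnHandle.stream()` drains only its own turn's queue, so two concurrent turns on one client never steal each other's events.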

## Validation

- `uv run --extra dev ruff format scripts/update_sdk_artifacts.py
src/codex_app_server/_message_router.py src/codex_app_server/client.py
src/codex_app_server/generated/notification_registry.py
tests/test_client_rpc_methods.py
tests/test_public_api_runtime_behavior.py
tests/test_async_client_behavior.py`
- `uv run --extra dev ruff check scripts/update_sdk_artifacts.py
src/codex_app_server/_message_router.py src/codex_app_server/client.py
src/codex_app_server/generated/notification_registry.py
tests/test_client_rpc_methods.py
tests/test_public_api_runtime_behavior.py
tests/test_async_client_behavior.py`
- `uv run --extra dev pytest tests/test_client_rpc_methods.py
tests/test_public_api_runtime_behavior.py
tests/test_async_client_behavior.py`
- `git diff --check`

---------

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 04:16:23 +00:00
65 changed files with 2635 additions and 983 deletions

View File

@@ -6,6 +6,39 @@ on:
pull_request: {}
jobs:
python-sdk:
runs-on:
group: codex-runners
labels: codex-linux-x64
timeout-minutes: 10
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Test Python SDK
shell: bash
run: |
set -euo pipefail
# Run inside Alpine so dependency resolution exercises the pinned
# runtime wheel on the same Linux wheel family that CI installs.
docker run --rm \
--user "$(id -u):$(id -g)" \
-e HOME=/tmp/codex-python-sdk-home \
-e UV_LINK_MODE=copy \
-v "${GITHUB_WORKSPACE}:${GITHUB_WORKSPACE}" \
-w "${GITHUB_WORKSPACE}/sdk/python" \
python:3.12-alpine \
sh -euxc '
python -m venv /tmp/uv
/tmp/uv/bin/python -m pip install uv==0.11.3
/tmp/uv/bin/uv sync --extra dev --frozen
/tmp/uv/bin/uv run --extra dev pytest
'
sdks:
runs-on:
group: codex-runners

View File

@@ -114,7 +114,7 @@ members = [
resolver = "2"
[workspace.package]
version = "0.131.0-alpha.4"
version = "0.0.0"
# Track the edition for all workspace crates in one place. Individual
# crates can still override this value, but keeping it here means new
# crates created with `cargo new -w ...` automatically inherit the 2024

View File

@@ -1,6 +1,6 @@
# Codex CLI Runtime for Python SDK
Platform-specific runtime package consumed by the published `openai-codex-app-server-sdk`.
Platform-specific runtime package consumed by the published `openai-codex`.
This package is staged during release so the SDK can pin an exact Codex CLI
version without checking platform binaries into the repo.

View File

@@ -1,8 +1,12 @@
# Codex App Server Python SDK (Experimental)
# OpenAI Codex Python SDK (Experimental)
Experimental Python SDK for `codex app-server` JSON-RPC v2 over stdio, with a small default surface optimized for real scripts and apps.
The generated wire-model layer is currently sourced from the bundled v2 schema and exposed as Pydantic models with snake_case Python fields that serialize back to the app-server's camelCase wire format.
The generated wire-model layer is sourced from the pinned `openai-codex-cli-bin`
runtime package and exposed as Pydantic models with snake_case Python fields
that serialize back to the app-server's camelCase wire format.
The package root exports the ergonomic client API; public app-server value and
event types live in `openai_codex.types`.
## Install
@@ -21,7 +25,7 @@ automatically.
## Quickstart
```python
from codex_app_server import Codex
from openai_codex import Codex
with Codex() as codex:
thread = codex.thread_start(model="gpt-5")
@@ -68,10 +72,11 @@ notebook bootstrap the pinned runtime package automatically.
```bash
cd sdk/python
uv sync
python scripts/update_sdk_artifacts.py generate-types
python scripts/update_sdk_artifacts.py \
stage-sdk \
/tmp/codex-python-release/openai-codex-app-server-sdk \
/tmp/codex-python-release/openai-codex \
--codex-version <codex-release-tag-or-pep440-version>
python scripts/update_sdk_artifacts.py \
stage-runtime \
@@ -89,13 +94,13 @@ matrix is `macosx_11_0_arm64`, `macosx_10_9_x86_64`,
This supports the CI release flow:
- run `generate-types` before packaging
- stage `openai-codex-app-server-sdk` once with an exact `openai-codex-cli-bin==...` dependency
- stage `openai-codex` once with an exact `openai-codex-cli-bin==...` dependency
- stage `openai-codex-cli-bin` on each supported platform runner with the same pinned runtime version
- build and publish `openai-codex-cli-bin` as platform wheels only through PyPI trusted publishing; do not publish an sdist
## Compatibility and versioning
- Package: `openai-codex-app-server-sdk`
- Package: `openai-codex`
- Runtime package: `openai-codex-cli-bin`
- Python: `>=3.10`
- Target protocol: Codex `app-server` JSON-RPC v2
@@ -107,4 +112,4 @@ This supports the CI release flow:
- Use context managers (`with Codex() as codex:`) to ensure shutdown.
- Prefer `thread.run("...")` for the common case. Use `thread.turn(...)` when
you need streaming, steering, or interrupt control.
- For transient overload, use `codex_app_server.retry.retry_on_overload`.
- For transient overload, use `retry_on_overload` from the package root.

View File

@@ -18,7 +18,7 @@ import zipfile
from pathlib import Path
PACKAGE_NAME = "openai-codex-cli-bin"
SDK_PACKAGE_NAME = "openai-codex-app-server-sdk"
SDK_PACKAGE_NAME = "openai-codex"
REPO_SLUG = "openai/codex"
@@ -27,16 +27,22 @@ class RuntimeSetupError(RuntimeError):
def pinned_runtime_version() -> str:
source_version = _source_tree_project_version()
if source_version is not None:
return _normalized_package_version(source_version)
"""Return the exact runtime version pinned by the SDK package dependency."""
source_pin = _source_tree_runtime_dependency_version()
if source_pin is not None:
return _normalized_package_version(source_pin)
try:
return _normalized_package_version(importlib.metadata.version(SDK_PACKAGE_NAME))
installed_pin = _installed_sdk_runtime_dependency_version()
except importlib.metadata.PackageNotFoundError as exc:
raise RuntimeSetupError(
f"Unable to resolve {SDK_PACKAGE_NAME} version for runtime pinning."
f"Unable to resolve {SDK_PACKAGE_NAME} metadata for runtime pinning."
) from exc
if installed_pin is None:
raise RuntimeSetupError(
f"Unable to resolve {PACKAGE_NAME} dependency pin from {SDK_PACKAGE_NAME}."
)
return _normalized_package_version(installed_pin)
def ensure_runtime_package_installed(
@@ -399,20 +405,33 @@ def _release_tag(version: str) -> str:
return f"rust-v{_codex_release_version(version)}"
def _source_tree_project_version() -> str | None:
def _source_tree_runtime_dependency_version() -> str | None:
"""Read the runtime dependency pin when the SDK is running from a checkout."""
pyproject_path = Path(__file__).resolve().parent / "pyproject.toml"
if not pyproject_path.exists():
return None
match = re.search(
r'(?m)^version = "([^"]+)"$',
pyproject_path.read_text(encoding="utf-8"),
)
match = re.search(_runtime_dependency_pin_pattern(), pyproject_path.read_text())
if match is None:
return None
return match.group(1)
def _installed_sdk_runtime_dependency_version() -> str | None:
"""Read the runtime dependency pin from installed package metadata."""
requirements = importlib.metadata.requires(SDK_PACKAGE_NAME) or []
for requirement in requirements:
match = re.search(_runtime_dependency_pin_pattern(), requirement)
if match is not None:
return match.group(1)
return None
def _runtime_dependency_pin_pattern() -> str:
"""Match the exact runtime dependency pin in TOML and wheel metadata."""
return rf'{re.escape(PACKAGE_NAME)}\s*==\s*"?([^",;\s]+)"?'
__all__ = [
"PACKAGE_NAME",
"SDK_PACKAGE_NAME",

View File

@@ -1,21 +1,22 @@
# Codex App Server SDK — API Reference
# OpenAI Codex SDK — API Reference
Public surface of `codex_app_server` for app-server v2.
Public surface of `openai_codex` for app-server v2.
This SDK surface is experimental. The current implementation intentionally allows only one active turn consumer (`Thread.run()`, `TurnHandle.stream()`, or `TurnHandle.run()`) per client instance at a time.
This SDK surface is experimental. Turn streams are routed by turn ID so one client can consume multiple active turns concurrently.
Thread and turn starts expose `approval_mode`. `ApprovalMode.auto_review` is the default; use `ApprovalMode.deny_all` to deny escalated permissions.
## Package Entry
```python
from codex_app_server import (
from openai_codex import (
Codex,
AsyncCodex,
ApprovalMode,
RunResult,
Thread,
AsyncThread,
TurnHandle,
AsyncTurnHandle,
InitializeResponse,
Input,
InputItem,
TextInput,
@@ -23,14 +24,18 @@ from codex_app_server import (
LocalImageInput,
SkillInput,
MentionInput,
)
from openai_codex.types import (
InitializeResponse,
ThreadItem,
ThreadTokenUsage,
TurnStatus,
)
from codex_app_server.generated.v2_all import ThreadItem, ThreadTokenUsage
```
- Version: `codex_app_server.__version__`
- Version: `openai_codex.__version__`
- Requires Python >= 3.10
- Canonical generated app-server models live in `codex_app_server.generated.v2_all`
- Public app-server value and event types live in `openai_codex.types`
## Codex (sync)
@@ -42,10 +47,10 @@ Properties/methods:
- `metadata -> InitializeResponse`
- `close() -> None`
- `thread_start(*, approval_policy=None, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, personality=None, sandbox=None) -> Thread`
- `thread_start(*, approval_mode=ApprovalMode.auto_review, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, personality=None, sandbox=None) -> Thread`
- `thread_list(*, archived=None, cursor=None, cwd=None, limit=None, model_providers=None, sort_key=None, source_kinds=None) -> ThreadListResponse`
- `thread_resume(thread_id: str, *, approval_policy=None, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, personality=None, sandbox=None) -> Thread`
- `thread_fork(thread_id: str, *, approval_policy=None, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, sandbox=None) -> Thread`
- `thread_resume(thread_id: str, *, approval_mode=ApprovalMode.auto_review, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, personality=None, sandbox=None) -> Thread`
- `thread_fork(thread_id: str, *, approval_mode=ApprovalMode.auto_review, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, sandbox=None) -> Thread`
- `thread_archive(thread_id: str) -> ThreadArchiveResponse`
- `thread_unarchive(thread_id: str) -> Thread`
- `models(*, include_hidden: bool = False) -> ModelListResponse`
@@ -77,10 +82,10 @@ Properties/methods:
- `metadata -> InitializeResponse`
- `close() -> Awaitable[None]`
- `thread_start(*, approval_policy=None, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, personality=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_start(*, approval_mode=ApprovalMode.auto_review, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, personality=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_list(*, archived=None, cursor=None, cwd=None, limit=None, model_providers=None, sort_key=None, source_kinds=None) -> Awaitable[ThreadListResponse]`
- `thread_resume(thread_id: str, *, approval_policy=None, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, personality=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_fork(thread_id: str, *, approval_policy=None, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_resume(thread_id: str, *, approval_mode=ApprovalMode.auto_review, base_instructions=None, config=None, cwd=None, developer_instructions=None, model=None, model_provider=None, personality=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_fork(thread_id: str, *, approval_mode=ApprovalMode.auto_review, base_instructions=None, config=None, cwd=None, developer_instructions=None, ephemeral=None, model=None, model_provider=None, sandbox=None) -> Awaitable[AsyncThread]`
- `thread_archive(thread_id: str) -> Awaitable[ThreadArchiveResponse]`
- `thread_unarchive(thread_id: str) -> Awaitable[AsyncThread]`
- `models(*, include_hidden: bool = False) -> Awaitable[ModelListResponse]`
@@ -98,16 +103,16 @@ async with AsyncCodex() as codex:
### Thread
- `run(input: str | Input, *, approval_policy=None, approvals_reviewer=None, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, service_tier=None, summary=None) -> RunResult`
- `turn(input: Input, *, approval_policy=None, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, summary=None) -> TurnHandle`
- `run(input: str | Input, *, approval_mode=ApprovalMode.auto_review, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, service_tier=None, summary=None) -> RunResult`
- `turn(input: Input, *, approval_mode=ApprovalMode.auto_review, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, summary=None) -> TurnHandle`
- `read(*, include_turns: bool = False) -> ThreadReadResponse`
- `set_name(name: str) -> ThreadSetNameResponse`
- `compact() -> ThreadCompactStartResponse`
### AsyncThread
- `run(input: str | Input, *, approval_policy=None, approvals_reviewer=None, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, service_tier=None, summary=None) -> Awaitable[RunResult]`
- `turn(input: Input, *, approval_policy=None, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, summary=None) -> Awaitable[AsyncTurnHandle]`
- `run(input: str | Input, *, approval_mode=ApprovalMode.auto_review, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, service_tier=None, summary=None) -> Awaitable[RunResult]`
- `turn(input: Input, *, approval_mode=ApprovalMode.auto_review, cwd=None, effort=None, model=None, output_schema=None, personality=None, sandbox_policy=None, summary=None) -> Awaitable[AsyncTurnHandle]`
- `read(*, include_turns: bool = False) -> Awaitable[ThreadReadResponse]`
- `set_name(name: str) -> Awaitable[ThreadSetNameResponse]`
- `compact() -> Awaitable[ThreadCompactStartResponse]`
@@ -124,7 +129,7 @@ object with:
phase-less assistant message item.
Use `turn(...)` when you need low-level turn control (`stream()`, `steer()`,
`interrupt()`) or the canonical generated `Turn` from `TurnHandle.run()`.
`interrupt()`) or the public `Turn` model from `TurnHandle.run()`.
## TurnHandle / AsyncTurnHandle
@@ -133,24 +138,24 @@ Use `turn(...)` when you need low-level turn control (`stream()`, `steer()`,
- `steer(input: Input) -> TurnSteerResponse`
- `interrupt() -> TurnInterruptResponse`
- `stream() -> Iterator[Notification]`
- `run() -> codex_app_server.generated.v2_all.Turn`
- `run() -> openai_codex.types.Turn`
Behavior notes:
- `stream()` and `run()` are exclusive per client instance in the current experimental build
- starting a second turn consumer on the same `Codex` instance raises `RuntimeError`
- `stream()` and `run()` consume only notifications for their own turn ID
- one `Codex` instance can stream multiple active turns concurrently
### AsyncTurnHandle
- `steer(input: Input) -> Awaitable[TurnSteerResponse]`
- `interrupt() -> Awaitable[TurnInterruptResponse]`
- `stream() -> AsyncIterator[Notification]`
- `run() -> Awaitable[codex_app_server.generated.v2_all.Turn]`
- `run() -> Awaitable[openai_codex.types.Turn]`
Behavior notes:
- `stream()` and `run()` are exclusive per client instance in the current experimental build
- starting a second turn consumer on the same `AsyncCodex` instance raises `RuntimeError`
- `stream()` and `run()` consume only notifications for their own turn ID
- one `AsyncCodex` instance can stream multiple active turns concurrently
## Inputs
@@ -165,16 +170,14 @@ InputItem = TextInput | ImageInput | LocalImageInput | SkillInput | MentionInput
Input = list[InputItem] | InputItem
```
## Generated Models
## Public Types
The SDK wrappers return and accept canonical generated app-server models wherever possible:
The SDK wrappers return and accept public app-server models wherever possible:
```python
from codex_app_server.generated.v2_all import (
AskForApproval,
from openai_codex.types import (
ThreadReadResponse,
Turn,
TurnStartParams,
TurnStatus,
)
```
@@ -182,7 +185,7 @@ from codex_app_server.generated.v2_all import (
## Retry + errors
```python
from codex_app_server import (
from openai_codex import (
retry_on_overload,
JsonRpcError,
MethodNotFoundError,
@@ -198,7 +201,7 @@ from codex_app_server import (
## Example
```python
from codex_app_server import Codex
from openai_codex import Codex
with Codex() as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

View File

@@ -8,7 +8,7 @@
## `run()` vs `stream()`
- `TurnHandle.run()` / `AsyncTurnHandle.run()` is the easiest path. It consumes events until completion and returns the canonical generated app-server `Turn` model.
- `TurnHandle.run()` / `AsyncTurnHandle.run()` is the easiest path. It consumes events until completion and returns the public app-server `Turn` model from `openai_codex.types`.
- `TurnHandle.stream()` / `AsyncTurnHandle.stream()` yields raw notifications (`Notification`) so you can react event-by-event.
Choose `run()` for most apps. Choose `stream()` for progress UIs, custom timeout logic, or custom parsing.
@@ -68,7 +68,7 @@ cd sdk/python
python scripts/update_sdk_artifacts.py generate-types
python scripts/update_sdk_artifacts.py \
stage-sdk \
/tmp/codex-python-release/openai-codex-app-server-sdk \
/tmp/codex-python-release/openai-codex \
--codex-version <codex-release-tag-or-pep440-version>
python scripts/update_sdk_artifacts.py \
stage-runtime \
@@ -99,5 +99,5 @@ Do not blindly retry all errors. For `InvalidParamsError` or `MethodNotFoundErro
- Starting a new thread for every prompt when you wanted continuity.
- Forgetting to `close()` (or not using context managers).
- Assuming `run()` returns extra SDK-only fields instead of the generated `Turn` model.
- Assuming `run()` returns extra SDK-only fields instead of the public `Turn` model.
- Mixing SDK input classes with raw dicts incorrectly.

View File

@@ -24,7 +24,7 @@ Requirements:
## 2) Run your first turn (sync)
```python
from codex_app_server import Codex
from openai_codex import Codex
with Codex() as codex:
server = codex.metadata.serverInfo
@@ -45,12 +45,12 @@ What happened:
- `thread.run("...")` started a turn, consumed events until completion, and returned the final assistant response plus collected items and usage.
- `result.final_response` is `None` when no final-answer or phase-less assistant message item completes for the turn.
- use `thread.turn(...)` when you need a `TurnHandle` for streaming, steering, interrupting, or turn IDs/status
- one client can have only one active turn consumer (`thread.run(...)`, `TurnHandle.stream()`, or `TurnHandle.run()`) at a time in the current experimental build
- one client can consume multiple active turns concurrently; turn streams are routed by turn ID
## 3) Continue the same thread (multi-turn)
```python
from codex_app_server import Codex
from openai_codex import Codex
with Codex() as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
@@ -69,7 +69,7 @@ initializes lazily, and context entry makes startup/shutdown explicit.
```python
import asyncio
from codex_app_server import AsyncCodex
from openai_codex import AsyncCodex
async def main() -> None:
@@ -85,7 +85,7 @@ asyncio.run(main())
## 5) Resume an existing thread
```python
from codex_app_server import Codex
from openai_codex import Codex
THREAD_ID = "thr_123" # replace with a real id
@@ -95,12 +95,13 @@ with Codex() as codex:
print(result.final_response)
```
## 6) Generated models
## 6) Public app-server types
The convenience wrappers live at the package root, but the canonical app-server models live under:
The convenience wrappers live at the package root. Public app-server value and
event types live under:
```python
from codex_app_server.generated.v2_all import Turn, TurnStatus, ThreadReadResponse
from openai_codex.types import ThreadReadResponse, Turn, TurnStatus
```
## 7) Next stops

View File

@@ -15,7 +15,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex
from openai_codex import AsyncCodex
async def main() -> None:

View File

@@ -13,7 +13,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex
from openai_codex import Codex
with Codex(config=runtime_config()) as codex:
print("Server:", server_label(codex.metadata))

View File

@@ -16,7 +16,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, TextInput
from openai_codex import AsyncCodex, TextInput
async def main() -> None:

View File

@@ -14,7 +14,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex(config=runtime_config()) as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

View File

@@ -16,7 +16,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, TextInput
from openai_codex import AsyncCodex, TextInput
async def main() -> None:

View File

@@ -14,7 +14,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex(config=runtime_config()) as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

View File

@@ -11,7 +11,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex
from openai_codex import AsyncCodex
async def main() -> None:

View File

@@ -9,7 +9,7 @@ from _bootstrap import ensure_local_sdk_src, runtime_config, server_label
ensure_local_sdk_src()
from codex_app_server import Codex
from openai_codex import Codex
with Codex(config=runtime_config()) as codex:
print("server:", server_label(codex.metadata))

View File

@@ -11,7 +11,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, TextInput
from openai_codex import AsyncCodex, TextInput
async def main() -> None:

View File

@@ -9,7 +9,7 @@ from _bootstrap import assistant_text_from_turn, ensure_local_sdk_src, find_turn
ensure_local_sdk_src()
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex(config=runtime_config()) as codex:
# Create an initial thread and turn so we have a real thread to resume.

View File

@@ -11,7 +11,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, TextInput
from openai_codex import AsyncCodex, TextInput
async def main() -> None:

View File

@@ -9,7 +9,7 @@ from _bootstrap import ensure_local_sdk_src, runtime_config
ensure_local_sdk_src()
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex(config=runtime_config()) as codex:

View File

@@ -16,7 +16,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, ImageInput, TextInput
from openai_codex import AsyncCodex, ImageInput, TextInput
REMOTE_IMAGE_URL = "https://raw.githubusercontent.com/github/explore/main/topics/python/python.png"

View File

@@ -14,7 +14,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex, ImageInput, TextInput
from openai_codex import Codex, ImageInput, TextInput
REMOTE_IMAGE_URL = "https://raw.githubusercontent.com/github/explore/main/topics/python/python.png"

View File

@@ -17,7 +17,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, LocalImageInput, TextInput
from openai_codex import AsyncCodex, LocalImageInput, TextInput
async def main() -> None:

View File

@@ -15,7 +15,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex, LocalImageInput, TextInput
from openai_codex import Codex, LocalImageInput, TextInput
with temporary_sample_image_path() as image_path:
with Codex(config=runtime_config()) as codex:

View File

@@ -15,7 +15,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex(config=runtime_config()) as codex:
print("Server:", server_label(codex.metadata))

View File

@@ -19,14 +19,14 @@ import random
from collections.abc import Awaitable, Callable
from typing import TypeVar
from codex_app_server import (
from openai_codex import (
AsyncCodex,
JsonRpcError,
ServerBusyError,
TextInput,
TurnStatus,
is_retryable_error,
)
from openai_codex.types import TurnStatus
ResultT = TypeVar("ResultT")

View File

@@ -14,14 +14,14 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import (
from openai_codex import (
Codex,
JsonRpcError,
ServerBusyError,
TextInput,
TurnStatus,
retry_on_overload,
)
from openai_codex.types import TurnStatus
with Codex(config=runtime_config()) as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
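The overload examples above lean on the SDK's `retry_on_overload` and `is_retryable_error` helpers. A minimal self-contained sketch of that retry pattern — jittered exponential backoff with a retryability predicate — is below; the names and signature here are illustrative, not the SDK's actual API:

```python
import random
import time


def retry_with_backoff(fn, *, attempts=4, base_delay=0.05, is_retryable=lambda exc: False):
    """Call fn, retrying retryable failures with jittered exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:
            # Give up on non-retryable errors, or once attempts are exhausted.
            if not is_retryable(exc) or attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2**attempt) * (1 + random.random()))


calls = {"n": 0}


def flaky():
    # Fails twice (simulating a busy server), then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("server busy")
    return "ok"


result = retry_with_backoff(
    flaky, base_delay=0.001, is_retryable=lambda e: isinstance(e, TimeoutError)
)
print(result)  # ok
```

The jitter term keeps many concurrent clients from retrying in lockstep after a shared overload event.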

View File

@@ -11,9 +11,11 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import (
from openai_codex import (
AsyncCodex,
TextInput,
)
from openai_codex.types import (
ThreadTokenUsageUpdatedNotification,
TurnCompletedNotification,
)

View File

@@ -9,9 +9,11 @@ from _bootstrap import ensure_local_sdk_src, runtime_config
ensure_local_sdk_src()
from codex_app_server import (
from openai_codex import (
Codex,
TextInput,
)
from openai_codex.types import (
ThreadTokenUsageUpdatedNotification,
TurnCompletedNotification,
)

View File

@@ -17,12 +17,13 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import (
AskForApproval,
from openai_codex import (
AsyncCodex,
TextInput,
)
from openai_codex.types import (
Personality,
ReasoningSummary,
TextInput,
)
OUTPUT_SCHEMA = {
@@ -44,7 +45,6 @@ PROMPT = (
"Analyze a safe rollout plan for enabling a feature flag in production. "
"Return JSON matching the requested schema."
)
APPROVAL_POLICY = AskForApproval.model_validate("never")
async def main() -> None:
@@ -53,7 +53,6 @@ async def main() -> None:
turn = await thread.turn(
TextInput(PROMPT),
approval_policy=APPROVAL_POLICY,
output_schema=OUTPUT_SCHEMA,
personality=Personality.pragmatic,
summary=SUMMARY,

View File

@@ -15,12 +15,13 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import (
AskForApproval,
from openai_codex import (
Codex,
TextInput,
)
from openai_codex.types import (
Personality,
ReasoningSummary,
TextInput,
)
OUTPUT_SCHEMA = {
@@ -42,14 +43,12 @@ PROMPT = (
"Analyze a safe rollout plan for enabling a feature flag in production. "
"Return JSON matching the requested schema."
)
APPROVAL_POLICY = AskForApproval.model_validate("never")
with Codex(config=runtime_config()) as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
turn = thread.turn(
TextInput(PROMPT),
approval_policy=APPROVAL_POLICY,
output_schema=OUTPUT_SCHEMA,
personality=Personality.pragmatic,
summary=SUMMARY,

View File

@@ -11,14 +11,15 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import (
AskForApproval,
from openai_codex import (
AsyncCodex,
TextInput,
)
from openai_codex.types import (
Personality,
ReasoningEffort,
ReasoningSummary,
SandboxPolicy,
TextInput,
)
REASONING_RANK = {
@@ -73,7 +74,6 @@ SANDBOX_POLICY = SandboxPolicy.model_validate(
"access": {"type": "fullAccess"},
}
)
APPROVAL_POLICY = AskForApproval.model_validate("never")
async def main() -> None:
@@ -104,7 +104,6 @@ async def main() -> None:
second_turn = await thread.turn(
TextInput("Return JSON for a safe feature-flag rollout plan."),
approval_policy=APPROVAL_POLICY,
cwd=str(Path.cwd()),
effort=selected_effort,
model=selected_model.model,

View File

@@ -9,14 +9,15 @@ from _bootstrap import assistant_text_from_turn, ensure_local_sdk_src, find_turn
ensure_local_sdk_src()
from codex_app_server import (
AskForApproval,
from openai_codex import (
Codex,
TextInput,
)
from openai_codex.types import (
Personality,
ReasoningEffort,
ReasoningSummary,
SandboxPolicy,
TextInput,
)
REASONING_RANK = {
@@ -71,7 +72,6 @@ SANDBOX_POLICY = SandboxPolicy.model_validate(
"access": {"type": "fullAccess"},
}
)
APPROVAL_POLICY = AskForApproval.model_validate("never")
with Codex(config=runtime_config()) as codex:
@@ -100,7 +100,6 @@ with Codex(config=runtime_config()) as codex:
second = thread.turn(
TextInput("Return JSON for a safe feature-flag rollout plan."),
approval_policy=APPROVAL_POLICY,
cwd=str(Path.cwd()),
effort=selected_effort,
model=selected_model.model,

View File

@@ -15,7 +15,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, TextInput
from openai_codex import AsyncCodex, TextInput
async def main() -> None:

View File

@@ -13,7 +13,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex(config=runtime_config()) as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

View File

@@ -5,7 +5,8 @@ Each example folder contains runnable versions:
- `sync.py` (public sync surface: `Codex`)
- `async.py` (public async surface: `AsyncCodex`)
All examples intentionally use only public SDK exports from `codex_app_server`.
All examples intentionally use only public SDK exports from `openai_codex`
and `openai_codex.types`.
## Prerequisites
@@ -28,7 +29,7 @@ will download the matching GitHub release artifact, stage a temporary local
`openai-codex-cli-bin` package, install it into your active interpreter, and clean up
the temporary files afterward.
The pinned runtime version comes from the SDK package version.
The pinned runtime version comes from the SDK package dependency.
## Run examples

View File

@@ -35,7 +35,7 @@ def ensure_local_sdk_src() -> Path:
"""Add sdk/python/src to sys.path so examples run without installing the package."""
sdk_python_dir = _SDK_PYTHON_DIR
src_dir = sdk_python_dir / "src"
package_dir = src_dir / "codex_app_server"
package_dir = src_dir / "openai_codex"
if not package_dir.exists():
raise RuntimeError(f"Could not locate local SDK package at {package_dir}")
@@ -49,7 +49,7 @@ def ensure_local_sdk_src() -> Path:
def runtime_config():
"""Return an example-friendly AppServerConfig for repo-source SDK usage."""
from codex_app_server import AppServerConfig
from openai_codex import AppServerConfig
ensure_runtime_package_installed(sys.executable, _SDK_PYTHON_DIR)
return AppServerConfig()

View File

@@ -6,7 +6,7 @@
"source": [
"# Codex Python SDK Walkthrough\n",
"\n",
"Public SDK surface only (`codex_app_server` root exports)."
"Public SDK surface only (`openai_codex` root exports)."
]
},
{
@@ -32,7 +32,7 @@
"\n",
"\n",
"def _is_sdk_python_dir(path: Path) -> bool:\n",
" return (path / 'pyproject.toml').exists() and (path / 'src' / 'codex_app_server').exists()\n",
" return (path / 'pyproject.toml').exists() and (path / 'src' / 'openai_codex').exists()\n",
"\n",
"\n",
"def _iter_home_fallback_candidates(home: Path):\n",
@@ -114,7 +114,7 @@
"\n",
"# Force fresh imports after SDK upgrades in the same notebook kernel.\n",
"for module_name in list(sys.modules):\n",
" if module_name == 'codex_app_server' or module_name.startswith('codex_app_server.'):\n",
" if module_name == 'openai_codex' or module_name.startswith('openai_codex.'):\n",
" sys.modules.pop(module_name, None)\n",
"\n",
"print('Kernel:', sys.executable)\n",
@@ -130,7 +130,7 @@
"source": [
"# Cell 2: imports (public only)\n",
"from _bootstrap import assistant_text_from_turn, find_turn_by_id, server_label\n",
"from codex_app_server import (\n",
"from openai_codex import (\n",
" AsyncCodex,\n",
" Codex,\n",
" ImageInput,\n",
@@ -245,8 +245,7 @@
"source": [
"# Cell 5b: one turn with most optional turn params\n",
"from pathlib import Path\n",
"from codex_app_server import (\n",
" AskForApproval,\n",
"from openai_codex import (\n",
" Personality,\n",
" ReasoningEffort,\n",
" ReasoningSummary,\n",
@@ -270,7 +269,6 @@
" thread = codex.thread_start(model='gpt-5.4', config={'model_reasoning_effort': 'high'})\n",
" turn = thread.turn(\n",
" TextInput('Propose a safe production feature-flag rollout. Return JSON matching the schema.'),\n",
" approval_policy=AskForApproval.model_validate('never'),\n",
" cwd=str(Path.cwd()),\n",
" effort=ReasoningEffort.medium,\n",
" model='gpt-5.4',\n",
@@ -295,8 +293,7 @@
"source": [
"# Cell 5c: choose highest model + highest supported reasoning, then run turns\n",
"from pathlib import Path\n",
"from codex_app_server import (\n",
" AskForApproval,\n",
"from openai_codex import (\n",
" Personality,\n",
" ReasoningEffort,\n",
" ReasoningSummary,\n",
@@ -361,7 +358,6 @@
"\n",
" second = thread.turn(\n",
" TextInput('Return JSON for a safe feature-flag rollout plan.'),\n",
" approval_policy=AskForApproval.model_validate('never'),\n",
" cwd=str(Path.cwd()),\n",
" effort=selected_effort,\n",
" model=selected_model.model,\n",

View File

@@ -3,13 +3,13 @@ requires = ["hatchling>=1.24.0"]
build-backend = "hatchling.build"
[project]
name = "openai-codex-app-server-sdk"
version = "0.116.0a1"
name = "openai-codex"
version = "0.131.0a4"
description = "Python SDK for Codex app-server v2"
readme = "README.md"
requires-python = ">=3.10"
license = { text = "Apache-2.0" }
authors = [{ name = "OpenClaw Assistant" }]
authors = [{ name = "OpenAI" }]
keywords = ["codex", "json-rpc", "sdk", "llm", "app-server"]
classifiers = [
"Development Status :: 4 - Beta",
@@ -22,7 +22,7 @@ classifiers = [
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
]
dependencies = ["pydantic>=2.12"]
dependencies = ["pydantic>=2.12", "openai-codex-cli-bin==0.131.0a4"]
[project.urls]
Homepage = "https://github.com/openai/codex"
@@ -42,14 +42,14 @@ exclude = [
]
[tool.hatch.build.targets.wheel]
packages = ["src/codex_app_server"]
packages = ["src/openai_codex"]
include = [
"src/codex_app_server/py.typed",
"src/openai_codex/py.typed",
]
[tool.hatch.build.targets.sdist]
include = [
"src/codex_app_server/**",
"src/openai_codex/**",
"README.md",
"CHANGELOG.md",
"CONTRIBUTING.md",
@@ -63,8 +63,10 @@ testpaths = ["tests"]
[tool.uv]
exclude-newer = "7 days"
exclude-newer-package = { openai-codex-cli-bin = "2026-05-10T00:00:00Z" }
index-strategy = "first-index"
[tool.uv.pip]
exclude-newer = "7 days"
exclude-newer-package = { openai-codex-cli-bin = "2026-05-10T00:00:00Z" }
index-strategy = "first-index"

View File

@@ -3,6 +3,7 @@ from __future__ import annotations
import argparse
import importlib
import importlib.metadata
import json
import platform
import re
@@ -17,7 +18,7 @@ from dataclasses import dataclass
from pathlib import Path
from typing import Any, Callable, Sequence, get_args, get_origin
SDK_DISTRIBUTION_NAME = "openai-codex-app-server-sdk"
SDK_DISTRIBUTION_NAME = "openai-codex"
RUNTIME_DISTRIBUTION_NAME = "openai-codex-cli-bin"
@@ -33,19 +34,14 @@ def python_runtime_root() -> Path:
return repo_root() / "sdk" / "python-runtime"
def schema_bundle_path() -> Path:
return (
repo_root()
/ "codex-rs"
/ "app-server-protocol"
/ "schema"
/ "json"
/ "codex_app_server_protocol.v2.schemas.json"
)
def sdk_pyproject_path() -> Path:
"""Return the SDK pyproject file that owns package pins and versions."""
return sdk_root() / "pyproject.toml"
def schema_root_dir() -> Path:
return repo_root() / "codex-rs" / "app-server-protocol" / "schema" / "json"
def schema_bundle_path(schema_dir: Path) -> Path:
"""Return the aggregate v2 schema bundle emitted by the runtime binary."""
return schema_dir / "codex_app_server_protocol.v2.schemas.json"
def _is_windows() -> bool:
@@ -61,6 +57,7 @@ def staged_runtime_bin_path(root: Path) -> Path:
def staged_runtime_resource_path(root: Path, resource: Path) -> Path:
"""Stage runtime helper binaries beside the main bundled Codex binary."""
# Runtime wheels include the whole bin/ directory, so helper executables
# should be staged beside the main Codex binary instead of changing the
# package template for each platform.
@@ -78,7 +75,7 @@ def run_python_module(module: str, args: list[str], cwd: Path) -> None:
def current_sdk_version() -> str:
match = re.search(
r'^version = "([^"]+)"$',
(sdk_root() / "pyproject.toml").read_text(),
sdk_pyproject_path().read_text(),
flags=re.MULTILINE,
)
if match is None:
@@ -86,6 +83,59 @@ def current_sdk_version() -> str:
return match.group(1)
def pinned_runtime_version() -> str:
"""Read the exact runtime package pin used for schema generation."""
pyproject_text = sdk_pyproject_path().read_text()
match = re.search(r"(?ms)^dependencies = \[(.*?)\]$", pyproject_text)
if match is None:
raise RuntimeError(
"Could not find dependencies array in sdk/python/pyproject.toml"
)
pins = re.findall(
rf'"{re.escape(RUNTIME_DISTRIBUTION_NAME)}==([^"]+)"',
match.group(1),
)
if len(pins) != 1:
raise RuntimeError(
f"Expected exactly one {RUNTIME_DISTRIBUTION_NAME} dependency pin "
"in sdk/python/pyproject.toml"
)
return normalize_codex_version(pins[0])
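`pinned_runtime_version` boils down to two regex passes over the pyproject text, which can be exercised standalone (the snippet inlines a hypothetical dependencies array in place of reading `sdk/python/pyproject.toml`):

```python
import re

RUNTIME = "openai-codex-cli-bin"

# Hypothetical pyproject fragment standing in for the real file.
PYPROJECT = 'dependencies = ["pydantic>=2.12", "openai-codex-cli-bin==0.131.0a4"]\n'

# Pass 1: isolate the dependencies array (MULTILINE + DOTALL, lazy match).
deps = re.search(r"(?ms)^dependencies = \[(.*?)\]$", PYPROJECT)
assert deps is not None

# Pass 2: extract the exact == pin for the runtime wheel.
pins = re.findall(rf'"{re.escape(RUNTIME)}==([^"]+)"', deps.group(1))
assert len(pins) == 1
print(pins[0])  # 0.131.0a4
```

Requiring exactly one pin (rather than taking the first) makes a duplicated or conflicting dependency entry fail loudly instead of silently picking a version.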
def pinned_runtime_codex_path() -> Path:
"""Return the bundled Codex binary from the installed pinned runtime wheel."""
expected_version = pinned_runtime_version()
try:
installed_version = importlib.metadata.version(RUNTIME_DISTRIBUTION_NAME)
except importlib.metadata.PackageNotFoundError as exc:
raise RuntimeError(
f"Install {RUNTIME_DISTRIBUTION_NAME}=={expected_version} before "
"generating Python SDK types."
) from exc
normalized_installed_version = normalize_codex_version(installed_version)
if normalized_installed_version != expected_version:
raise RuntimeError(
f"Expected {RUNTIME_DISTRIBUTION_NAME}=={expected_version}, "
f"but found {installed_version}."
)
try:
from codex_cli_bin import bundled_codex_path
except ImportError as exc:
raise RuntimeError(
f"Installed {RUNTIME_DISTRIBUTION_NAME} package does not expose "
"bundled_codex_path."
) from exc
codex_path = bundled_codex_path()
if not codex_path.exists():
raise RuntimeError(f"Pinned Codex runtime binary not found at {codex_path}.")
return codex_path
def normalize_codex_version(version: str) -> str:
normalized = version.strip()
if normalized.startswith("rust-v"):
@@ -200,7 +250,7 @@ def _rewrite_sdk_runtime_dependency(pyproject_text: str, runtime_version: str) -
def stage_python_sdk_package(staging_dir: Path, codex_version: str) -> Path:
package_version = normalize_codex_version(codex_version)
_copy_package_tree(sdk_root(), staging_dir)
sdk_bin_dir = staging_dir / "src" / "codex_app_server" / "bin"
sdk_bin_dir = staging_dir / "src" / "openai_codex" / "bin"
if sdk_bin_dir.exists():
shutil.rmtree(sdk_bin_dir)
@@ -487,8 +537,28 @@ def _annotate_schema(value: Any, base: str | None = None) -> None:
_annotate_schema(child, base)
def _normalized_schema_bundle_text() -> str:
schema = json.loads(schema_bundle_path().read_text())
def generate_schema_from_pinned_runtime(schema_dir: Path) -> Path:
"""Generate app-server schemas by invoking the installed pinned runtime binary."""
codex_path = pinned_runtime_codex_path()
if schema_dir.exists():
shutil.rmtree(schema_dir)
schema_dir.mkdir(parents=True)
run(
[
str(codex_path),
"app-server",
"generate-json-schema",
"--out",
str(schema_dir),
],
cwd=sdk_root(),
)
return schema_dir
def _normalized_schema_bundle_text(schema_dir: Path) -> str:
"""Normalize the schema bundle before feeding it to the Python type generator."""
schema = json.loads(schema_bundle_path(schema_dir).read_text())
definitions = schema.get("definitions", {})
if isinstance(definitions, dict):
for definition in definitions.values():
@@ -500,16 +570,17 @@ def _normalized_schema_bundle_text() -> str:
return json.dumps(schema, indent=2, sort_keys=True) + "\n"
def generate_v2_all() -> None:
out_path = sdk_root() / "src" / "codex_app_server" / "generated" / "v2_all.py"
def generate_v2_all(schema_dir: Path) -> None:
"""Regenerate the Pydantic v2 protocol model module from runtime schemas."""
out_path = sdk_root() / "src" / "openai_codex" / "generated" / "v2_all.py"
out_dir = out_path.parent
old_package_dir = out_dir / "v2_all"
if old_package_dir.exists():
shutil.rmtree(old_package_dir)
out_dir.mkdir(parents=True, exist_ok=True)
with tempfile.TemporaryDirectory() as td:
normalized_bundle = Path(td) / schema_bundle_path().name
normalized_bundle.write_text(_normalized_schema_bundle_text())
normalized_bundle = Path(td) / schema_bundle_path(schema_dir).name
normalized_bundle.write_text(_normalized_schema_bundle_text(schema_dir))
run_python_module(
"datamodel_code_generator",
[
@@ -546,13 +617,14 @@ def generate_v2_all() -> None:
_normalize_generated_timestamps(out_path)
def _notification_specs() -> list[tuple[str, str]]:
def _notification_specs(schema_dir: Path) -> list[tuple[str, str]]:
"""Map each server notification method to its generated payload model class."""
server_notifications = json.loads(
(schema_root_dir() / "ServerNotification.json").read_text()
(schema_dir / "ServerNotification.json").read_text()
)
one_of = server_notifications.get("oneOf", [])
generated_source = (
sdk_root() / "src" / "codex_app_server" / "generated" / "v2_all.py"
sdk_root() / "src" / "openai_codex" / "generated" / "v2_all.py"
).read_text()
specs: list[tuple[str, str]] = []
@@ -585,16 +657,61 @@ def _notification_specs() -> list[tuple[str, str]]:
return specs
def generate_notification_registry() -> None:
def _notification_turn_id_specs(
schema_dir: Path,
specs: list[tuple[str, str]],
) -> tuple[list[str], list[str]]:
"""Classify notification payloads by where their turn id is carried."""
server_notifications = json.loads(
(schema_dir / "ServerNotification.json").read_text()
)
definitions = server_notifications.get("definitions", {})
if not isinstance(definitions, dict):
return ([], [])
direct: list[str] = []
nested: list[str] = []
for _, class_name in specs:
definition = definitions.get(class_name)
if not isinstance(definition, dict):
continue
props = definition.get("properties", {})
if not isinstance(props, dict):
continue
if "turnId" in props:
direct.append(class_name)
continue
turn = props.get("turn")
if isinstance(turn, dict) and turn.get("$ref") == "#/definitions/Turn":
nested.append(class_name)
return (sorted(set(direct)), sorted(set(nested)))
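The classification above can be exercised against a hand-written definitions dict shaped like `ServerNotification.json`; the payload names below are examples only, but the property checks mirror the script:

```python
# Stand-ins for notification payload schemas: one carries the id directly as
# "turnId", one nests it under a "turn" object reference, one has neither.
definitions = {
    "ExampleDirectNotification": {"properties": {"turnId": {"type": "string"}}},
    "ExampleNestedNotification": {"properties": {"turn": {"$ref": "#/definitions/Turn"}}},
    "ExampleGlobalNotification": {"properties": {"name": {"type": "string"}}},
}

direct, nested = [], []
for class_name, definition in definitions.items():
    props = definition.get("properties", {})
    if "turnId" in props:
        direct.append(class_name)
    elif isinstance(props.get("turn"), dict) and props["turn"].get("$ref") == "#/definitions/Turn":
        nested.append(class_name)

print(sorted(direct))  # ['ExampleDirectNotification']
print(sorted(nested))  # ['ExampleNestedNotification']
```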
def _type_tuple_source(class_names: list[str]) -> str:
"""Render a generated tuple literal for notification payload classes."""
if not class_names:
return "()"
if len(class_names) == 1:
return f"({class_names[0]},)"
return "(\n" + "".join(f" {class_name},\n" for class_name in class_names) + ")"
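The trailing comma in the one-element branch matters: in Python source, `(X)` is just a parenthesized name, while `(X,)` is a tuple. Run standalone:

```python
def type_tuple_source(class_names):
    # Render a Python tuple literal; a single element needs a trailing comma.
    if not class_names:
        return "()"
    if len(class_names) == 1:
        return f"({class_names[0]},)"
    return "(\n" + "".join(f"    {name},\n" for name in class_names) + ")"


print(type_tuple_source([]))          # ()
print(type_tuple_source(["A"]))       # (A,)
print(type_tuple_source(["A", "B"]))
```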
def generate_notification_registry(schema_dir: Path) -> None:
"""Regenerate notification dispatch metadata from the runtime notification schema."""
out = (
sdk_root()
/ "src"
/ "codex_app_server"
/ "openai_codex"
/ "generated"
/ "notification_registry.py"
)
specs = _notification_specs()
specs = _notification_specs(schema_dir)
class_names = sorted({class_name for _, class_name in specs})
direct_turn_id_types, nested_turn_types = _notification_turn_id_specs(
schema_dir,
specs,
)
lines = [
"# Auto-generated by scripts/update_sdk_artifacts.py",
@@ -616,7 +733,27 @@ def generate_notification_registry() -> None:
)
for method, class_name in specs:
lines.append(f' "{method}": {class_name},')
lines.extend(["}", ""])
lines.extend(
[
"}",
"",
"DIRECT_TURN_ID_NOTIFICATION_TYPES: tuple[type[BaseModel], ...] = "
f"{_type_tuple_source(direct_turn_id_types)}",
"",
"NESTED_TURN_NOTIFICATION_TYPES: tuple[type[BaseModel], ...] = "
f"{_type_tuple_source(nested_turn_types)}",
"",
"",
"def notification_turn_id(payload: BaseModel) -> str | None:",
' """Return the turn id carried by generated notification payload metadata."""',
" if isinstance(payload, DIRECT_TURN_ID_NOTIFICATION_TYPES):",
" return payload.turn_id if isinstance(payload.turn_id, str) else None",
" if isinstance(payload, NESTED_TURN_NOTIFICATION_TYPES):",
" return payload.turn.id",
" return None",
"",
]
)
out.write_text("\n".join(lines))
@@ -695,8 +832,12 @@ def _camel_to_snake(name: str) -> str:
def _load_public_fields(
module_name: str, class_name: str, *, exclude: set[str] | None = None
) -> list[PublicFieldSpec]:
"""Load generated model fields used to render the ergonomic public methods."""
exclude = exclude or set()
module = importlib.import_module(module_name)
if module_name == "openai_codex.generated.v2_all":
module = _load_generated_v2_all_module()
else:
module = importlib.import_module(module_name)
model = getattr(module, class_name)
fields: list[PublicFieldSpec] = []
for name, field in model.model_fields.items():
@@ -718,6 +859,20 @@ def _load_public_fields(
return fields
def _load_generated_v2_all_module() -> types.ModuleType:
"""Import the freshly generated v2_all module without importing package init."""
module_name = "_openai_codex_generated_v2_all_for_artifacts"
sys.modules.pop(module_name, None)
module_path = sdk_root() / "src" / "openai_codex" / "generated" / "v2_all.py"
spec = importlib.util.spec_from_file_location(module_name, module_path)
if spec is None or spec.loader is None:
raise RuntimeError(f"Failed to load generated module from {module_path}")
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
spec.loader.exec_module(module)
return module
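`_load_generated_v2_all_module` uses the standard `importlib.util` recipe for importing a single file directly, bypassing the package `__init__` (which may still reference symbols the regenerated module no longer exports). The same recipe, demonstrated with a throwaway module:

```python
import importlib.util
import sys
import tempfile
import types
from pathlib import Path


def load_module_from_path(module_name: str, module_path: Path) -> types.ModuleType:
    # Build a module object from a file path without importing its parent package.
    sys.modules.pop(module_name, None)
    spec = importlib.util.spec_from_file_location(module_name, module_path)
    if spec is None or spec.loader is None:
        raise RuntimeError(f"Failed to load module from {module_path}")
    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module
    spec.loader.exec_module(module)
    return module


with tempfile.TemporaryDirectory() as td:
    path = Path(td) / "standalone.py"
    path.write_text("ANSWER = 42\n")
    print(load_module_from_path("_demo_standalone", path).ANSWER)  # 42
```

Registering the module in `sys.modules` before `exec_module` is the documented order; it lets the module body resolve self-imports while it executes.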
def _kw_signature_lines(fields: list[PublicFieldSpec]) -> list[str]:
lines: list[str] = []
for field in fields:
@@ -726,6 +881,34 @@ def _kw_signature_lines(fields: list[PublicFieldSpec]) -> list[str]:
return lines
def _approval_mode_start_signature_lines() -> list[str]:
"""Return the approval mode kwarg for new threads."""
return [" approval_mode: ApprovalMode = ApprovalMode.auto_review,"]
def _approval_mode_override_signature_lines() -> list[str]:
"""Return the optional approval mode kwarg for override-style helpers."""
return [" approval_mode: ApprovalMode | None = None,"]
def _approval_mode_assignment_line(
helper_name: str, *, indent: str = " "
) -> str:
"""Return the local mapping from public mode to app-server params."""
return (
f"{indent}approval_policy, approvals_reviewer = "
f"{helper_name}(approval_mode)"
)
def _approval_mode_model_arg_lines(*, indent: str = " ") -> list[str]:
"""Return app-server approval params derived from ApprovalMode."""
return [
f"{indent}approval_policy=approval_policy,",
f"{indent}approvals_reviewer=approvals_reviewer,",
]
def _model_arg_lines(
fields: list[PublicFieldSpec], *, indent: str = " "
) -> list[str]:
@@ -753,9 +936,12 @@ def _render_codex_block(
" def thread_start(",
" self,",
" *,",
*_approval_mode_start_signature_lines(),
*_kw_signature_lines(thread_start_fields),
" ) -> Thread:",
_approval_mode_assignment_line("_approval_mode_settings"),
" params = ThreadStartParams(",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(thread_start_fields),
" )",
" started = self._client.thread_start(params)",
@@ -775,10 +961,13 @@ def _render_codex_block(
" self,",
" thread_id: str,",
" *,",
*_approval_mode_override_signature_lines(),
*_kw_signature_lines(resume_fields),
" ) -> Thread:",
_approval_mode_assignment_line("_approval_mode_override_settings"),
" params = ThreadResumeParams(",
" thread_id=thread_id,",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(resume_fields),
" )",
" resumed = self._client.thread_resume(thread_id, params)",
@@ -788,10 +977,13 @@ def _render_codex_block(
" self,",
" thread_id: str,",
" *,",
*_approval_mode_override_signature_lines(),
*_kw_signature_lines(fork_fields),
" ) -> Thread:",
_approval_mode_assignment_line("_approval_mode_override_settings"),
" params = ThreadForkParams(",
" thread_id=thread_id,",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(fork_fields),
" )",
" forked = self._client.thread_fork(thread_id, params)",
@@ -817,10 +1009,13 @@ def _render_async_codex_block(
" async def thread_start(",
" self,",
" *,",
*_approval_mode_start_signature_lines(),
*_kw_signature_lines(thread_start_fields),
" ) -> AsyncThread:",
" await self._ensure_initialized()",
_approval_mode_assignment_line("_approval_mode_settings"),
" params = ThreadStartParams(",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(thread_start_fields),
" )",
" started = await self._client.thread_start(params)",
@@ -841,11 +1036,14 @@ def _render_async_codex_block(
" self,",
" thread_id: str,",
" *,",
*_approval_mode_override_signature_lines(),
*_kw_signature_lines(resume_fields),
" ) -> AsyncThread:",
" await self._ensure_initialized()",
_approval_mode_assignment_line("_approval_mode_override_settings"),
" params = ThreadResumeParams(",
" thread_id=thread_id,",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(resume_fields),
" )",
" resumed = await self._client.thread_resume(thread_id, params)",
@@ -855,11 +1053,14 @@ def _render_async_codex_block(
" self,",
" thread_id: str,",
" *,",
*_approval_mode_override_signature_lines(),
*_kw_signature_lines(fork_fields),
" ) -> AsyncThread:",
" await self._ensure_initialized()",
_approval_mode_assignment_line("_approval_mode_override_settings"),
" params = ThreadForkParams(",
" thread_id=thread_id,",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(fork_fields),
" )",
" forked = await self._client.thread_fork(thread_id, params)",
@@ -885,12 +1086,15 @@ def _render_thread_block(
" self,",
" input: Input,",
" *,",
*_approval_mode_override_signature_lines(),
*_kw_signature_lines(turn_fields),
" ) -> TurnHandle:",
" wire_input = _to_wire_input(input)",
_approval_mode_assignment_line("_approval_mode_override_settings"),
" params = TurnStartParams(",
" thread_id=self.id,",
" input=wire_input,",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(turn_fields),
" )",
" turn = self._client.turn_start(self.id, wire_input, params=params)",
@@ -907,13 +1111,16 @@ def _render_async_thread_block(
" self,",
" input: Input,",
" *,",
*_approval_mode_override_signature_lines(),
*_kw_signature_lines(turn_fields),
" ) -> AsyncTurnHandle:",
" await self._codex._ensure_initialized()",
" wire_input = _to_wire_input(input)",
_approval_mode_assignment_line("_approval_mode_override_settings"),
" params = TurnStartParams(",
" thread_id=self.id,",
" input=wire_input,",
*_approval_mode_model_arg_lines(),
*_model_arg_lines(turn_fields),
" )",
" turn = await self._codex._client.turn_start(",
@@ -927,8 +1134,9 @@ def _render_async_thread_block(
def generate_public_api_flat_methods() -> None:
"""Regenerate the public convenience methods from generated protocol models."""
src_dir = sdk_root() / "src"
public_api_path = src_dir / "codex_app_server" / "api.py"
public_api_path = src_dir / "openai_codex" / "api.py"
if not public_api_path.exists():
# PR2 can run codegen before the ergonomic public API layer is added.
return
@@ -936,28 +1144,30 @@ def generate_public_api_flat_methods() -> None:
if src_dir_str not in sys.path:
sys.path.insert(0, src_dir_str)
approval_fields = {"approval_policy", "approvals_reviewer"}
thread_start_fields = _load_public_fields(
"codex_app_server.generated.v2_all",
"openai_codex.generated.v2_all",
"ThreadStartParams",
exclude=approval_fields,
)
thread_list_fields = _load_public_fields(
"codex_app_server.generated.v2_all",
"openai_codex.generated.v2_all",
"ThreadListParams",
)
thread_resume_fields = _load_public_fields(
"codex_app_server.generated.v2_all",
"openai_codex.generated.v2_all",
"ThreadResumeParams",
exclude={"thread_id"},
exclude={"thread_id", *approval_fields},
)
thread_fork_fields = _load_public_fields(
"codex_app_server.generated.v2_all",
"openai_codex.generated.v2_all",
"ThreadForkParams",
exclude={"thread_id"},
exclude={"thread_id", *approval_fields},
)
turn_start_fields = _load_public_fields(
"codex_app_server.generated.v2_all",
"openai_codex.generated.v2_all",
"TurnStartParams",
exclude={"thread_id", "input"},
exclude={"thread_id", "input", *approval_fields},
)
source = public_api_path.read_text()
@@ -992,13 +1202,22 @@ def generate_public_api_flat_methods() -> None:
_render_async_thread_block(turn_start_fields),
)
public_api_path.write_text(source)
run_python_module("ruff", ["format", str(public_api_path)], cwd=sdk_root())
def generate_types_from_schema_dir(schema_dir: Path) -> None:
"""Regenerate every SDK artifact derived from an existing schema directory."""
# v2_all is the authoritative generated surface.
generate_v2_all(schema_dir)
generate_notification_registry(schema_dir)
generate_public_api_flat_methods()
def generate_types() -> None:
# v2_all is the authoritative generated surface.
generate_v2_all()
generate_notification_registry()
generate_public_api_flat_methods()
"""Generate schemas from the pinned runtime and then refresh SDK artifacts."""
with tempfile.TemporaryDirectory(prefix="codex-python-schema-") as td:
schema_dir = generate_schema_from_pinned_runtime(Path(td) / "schema")
generate_types_from_schema_dir(schema_dir)
def build_parser() -> argparse.ArgumentParser:

View File

@@ -1,5 +1,4 @@
from .async_client import AsyncAppServerClient
from .client import AppServerClient, AppServerConfig
from .client import AppServerConfig
from .errors import (
AppServerError,
AppServerRpcError,
@@ -14,30 +13,8 @@ from .errors import (
TransportClosedError,
is_retryable_error,
)
from .generated.v2_all import (
AskForApproval,
Personality,
PlanType,
ReasoningEffort,
ReasoningSummary,
SandboxMode,
SandboxPolicy,
ServiceTier,
ThreadItem,
ThreadForkParams,
ThreadListParams,
ThreadResumeParams,
ThreadSortKey,
ThreadSourceKind,
ThreadStartParams,
ThreadTokenUsageUpdatedNotification,
TurnCompletedNotification,
TurnStartParams,
TurnStatus,
TurnSteerParams,
)
from .models import InitializeResponse
from .api import (
ApprovalMode,
AsyncCodex,
AsyncThread,
AsyncTurnHandle,
@@ -58,16 +35,14 @@ from ._version import __version__
__all__ = [
"__version__",
"AppServerClient",
"AsyncAppServerClient",
"AppServerConfig",
"Codex",
"AsyncCodex",
"ApprovalMode",
"Thread",
"AsyncThread",
"TurnHandle",
"AsyncTurnHandle",
"InitializeResponse",
"RunResult",
"Input",
"InputItem",
@@ -76,26 +51,6 @@ __all__ = [
"LocalImageInput",
"SkillInput",
"MentionInput",
"ThreadItem",
"ThreadTokenUsageUpdatedNotification",
"TurnCompletedNotification",
"AskForApproval",
"Personality",
"PlanType",
"ReasoningEffort",
"ReasoningSummary",
"SandboxMode",
"SandboxPolicy",
"ServiceTier",
"ThreadStartParams",
"ThreadResumeParams",
"ThreadListParams",
"ThreadSortKey",
"ThreadSourceKind",
"ThreadForkParams",
"TurnStatus",
"TurnStartParams",
"TurnSteerParams",
"retry_on_overload",
"AppServerError",
"TransportClosedError",

View File

@@ -0,0 +1,160 @@
from __future__ import annotations
import queue
import threading
from collections import deque
from .errors import AppServerError, map_jsonrpc_error
from .generated.notification_registry import notification_turn_id
from .models import JsonValue, Notification, UnknownNotification
ResponseQueueItem = JsonValue | BaseException
NotificationQueueItem = Notification | BaseException
class MessageRouter:
"""Route reader-thread messages to the SDK operation waiting for them.
The app-server stdio transport is a single ordered stream, so only the
reader thread should consume stdout. This router keeps the rest of the SDK
from competing for that stream by giving each in-flight JSON-RPC request
and active turn stream its own queue.
"""
def __init__(self) -> None:
"""Create empty response, turn, and global notification queues."""
self._lock = threading.Lock()
self._response_waiters: dict[str, queue.Queue[ResponseQueueItem]] = {}
self._turn_notifications: dict[str, queue.Queue[NotificationQueueItem]] = {}
self._pending_turn_notifications: dict[str, deque[Notification]] = {}
self._global_notifications: queue.Queue[NotificationQueueItem] = queue.Queue()
def create_response_waiter(self, request_id: str) -> queue.Queue[ResponseQueueItem]:
"""Register a one-shot queue for a JSON-RPC response id."""
waiter: queue.Queue[ResponseQueueItem] = queue.Queue(maxsize=1)
with self._lock:
self._response_waiters[request_id] = waiter
return waiter
def discard_response_waiter(self, request_id: str) -> None:
"""Remove a response waiter when the request could not be written."""
with self._lock:
self._response_waiters.pop(request_id, None)
def next_global_notification(self) -> Notification:
"""Block until the next notification that is not scoped to a turn."""
item = self._global_notifications.get()
if isinstance(item, BaseException):
raise item
return item
def register_turn(self, turn_id: str) -> None:
"""Register a queue for a turn stream and replay early events."""
turn_queue: queue.Queue[NotificationQueueItem] = queue.Queue()
with self._lock:
if turn_id in self._turn_notifications:
return
# A turn can emit events immediately after turn/start, before the
# caller receives the TurnHandle and starts streaming.
pending = self._pending_turn_notifications.pop(turn_id, deque())
self._turn_notifications[turn_id] = turn_queue
for notification in pending:
turn_queue.put(notification)
def unregister_turn(self, turn_id: str) -> None:
"""Stop routing future turn events to the stream queue."""
with self._lock:
self._turn_notifications.pop(turn_id, None)
def next_turn_notification(self, turn_id: str) -> Notification:
"""Block until the next notification for a registered turn."""
with self._lock:
turn_queue = self._turn_notifications.get(turn_id)
if turn_queue is None:
raise RuntimeError(f"turn {turn_id!r} is not registered for streaming")
item = turn_queue.get()
if isinstance(item, BaseException):
raise item
return item
def route_response(self, msg: dict[str, JsonValue]) -> None:
"""Deliver a JSON-RPC response or error to its request waiter."""
request_id = msg.get("id")
with self._lock:
waiter = self._response_waiters.pop(str(request_id), None)
if waiter is None:
return
if "error" in msg:
err = msg["error"]
if isinstance(err, dict):
waiter.put(
map_jsonrpc_error(
int(err.get("code", -32000)),
str(err.get("message", "unknown")),
err.get("data"),
)
)
else:
waiter.put(AppServerError("Malformed JSON-RPC error response"))
return
waiter.put(msg.get("result"))
def route_notification(self, notification: Notification) -> None:
"""Deliver a notification to a turn queue or the global queue."""
turn_id = self._notification_turn_id(notification)
if turn_id is None:
self._global_notifications.put(notification)
return
with self._lock:
turn_queue = self._turn_notifications.get(turn_id)
if turn_queue is None:
if notification.method == "turn/completed":
self._pending_turn_notifications.pop(turn_id, None)
return
self._pending_turn_notifications.setdefault(turn_id, deque()).append(
notification
)
return
turn_queue.put(notification)
def fail_all(self, exc: BaseException) -> None:
"""Wake every blocked waiter when the reader thread exits."""
with self._lock:
response_waiters = list(self._response_waiters.values())
self._response_waiters.clear()
turn_queues = list(self._turn_notifications.values())
self._pending_turn_notifications.clear()
# Put the same transport failure into every queue so no SDK call blocks
# forever waiting for a response that cannot arrive.
for waiter in response_waiters:
waiter.put(exc)
for turn_queue in turn_queues:
turn_queue.put(exc)
self._global_notifications.put(exc)
def _notification_turn_id(self, notification: Notification) -> str | None:
"""Extract routing ids from known generated payloads or raw unknown payloads."""
payload = notification.payload
if isinstance(payload, UnknownNotification):
raw_turn_id = payload.params.get("turnId")
if isinstance(raw_turn_id, str):
return raw_turn_id
raw_turn = payload.params.get("turn")
if isinstance(raw_turn, dict):
raw_nested_turn_id = raw_turn.get("id")
if isinstance(raw_nested_turn_id, str):
return raw_nested_turn_id
return None
return notification_turn_id(payload)
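The buffer-and-replay behavior described in the `MessageRouter` docstring can be illustrated with a minimal, self-contained sketch. The `MiniRouter` name and plain event dicts below are illustrative stand-ins, not the SDK's actual classes:

```python
import queue
import threading
from collections import deque


class MiniRouter:
    """Simplified sketch of the queue-per-turn routing pattern."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._turn_queues: dict[str, queue.Queue] = {}
        self._pending: dict[str, deque] = {}

    def register_turn(self, turn_id: str) -> None:
        with self._lock:
            if turn_id in self._turn_queues:
                return
            q: queue.Queue = queue.Queue()
            # Replay events that arrived before the consumer registered.
            for event in self._pending.pop(turn_id, deque()):
                q.put(event)
            self._turn_queues[turn_id] = q

    def route(self, turn_id: str, event: dict) -> None:
        with self._lock:
            q = self._turn_queues.get(turn_id)
            if q is None:
                # No consumer yet: buffer until register_turn replays it.
                self._pending.setdefault(turn_id, deque()).append(event)
                return
        q.put(event)

    def next_event(self, turn_id: str) -> dict:
        with self._lock:
            q = self._turn_queues[turn_id]
        return q.get()


router = MiniRouter()
router.route("t1", {"method": "item/started"})  # arrives before registration
router.register_turn("t1")                      # replays the buffered event
first = router.next_event("t1")                 # -> the early event
```

Note that `put` and `get` happen outside the lock: the lock only guards the queue maps, so a blocked consumer never holds it.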


@@ -5,7 +5,7 @@ from importlib.metadata import PackageNotFoundError
from importlib.metadata import version as distribution_version
from pathlib import Path
DISTRIBUTION_NAME = "openai-codex-app-server-sdk"
DISTRIBUTION_NAME = "openai-codex"
UNKNOWN_VERSION = "0+unknown"


@@ -2,20 +2,21 @@ from __future__ import annotations
import asyncio
from dataclasses import dataclass
from typing import AsyncIterator, Iterator
from enum import Enum
from typing import AsyncIterator, Iterator, NoReturn
from .async_client import AsyncAppServerClient
from .client import AppServerClient, AppServerConfig
from .generated.v2_all import (
ApprovalsReviewer,
AskForApproval,
AskForApprovalValue,
ModelListResponse,
Personality,
ReasoningEffort,
ReasoningSummary,
SandboxMode,
SandboxPolicy,
ServiceTier,
SortDirection,
ThreadArchiveResponse,
ThreadCompactStartResponse,
@@ -27,6 +28,7 @@ from .generated.v2_all import (
ThreadResumeParams,
ThreadSetNameResponse,
ThreadSortKey,
ThreadSource,
ThreadSourceKind,
ThreadStartSource,
ThreadStartParams,
@@ -38,14 +40,14 @@ from .generated.v2_all import (
)
from .models import InitializeResponse, JsonObject, Notification, ServerInfo
from ._inputs import (
ImageInput,
ImageInput as ImageInput,
Input,
InputItem,
LocalImageInput,
MentionInput,
InputItem as InputItem,
LocalImageInput as LocalImageInput,
MentionInput as MentionInput,
RunInput,
SkillInput,
TextInput,
SkillInput as SkillInput,
TextInput as TextInput,
_normalize_run_input,
_to_wire_input,
)
@@ -69,6 +71,47 @@ def _split_user_agent(user_agent: str) -> tuple[str | None, str | None]:
return raw, None
class ApprovalMode(str, Enum):
"""High-level approval behavior for escalated permission requests."""
deny_all = "deny_all"
auto_review = "auto_review"
def _approval_mode_settings(
approval_mode: ApprovalMode,
) -> tuple[AskForApproval, ApprovalsReviewer | None]:
"""Map the public approval mode to generated app-server start params."""
if not isinstance(approval_mode, ApprovalMode):
supported = ", ".join(mode.value for mode in ApprovalMode)
raise ValueError(f"approval_mode must be one of: {supported}")
match approval_mode:
case ApprovalMode.auto_review:
return (
AskForApproval(root=AskForApprovalValue.on_request),
ApprovalsReviewer.auto_review,
)
case ApprovalMode.deny_all:
return AskForApproval(root=AskForApprovalValue.never), None
case _:
return _assert_never_approval_mode(approval_mode)
def _assert_never_approval_mode(approval_mode: NoReturn) -> NoReturn:
"""Make approval mode mapping exhaustive for static type checkers."""
raise AssertionError(f"Unhandled approval mode: {approval_mode!r}")
def _approval_mode_override_settings(
approval_mode: ApprovalMode | None,
) -> tuple[AskForApproval | None, ApprovalsReviewer | None]:
"""Map an optional public approval mode to app-server override params."""
if approval_mode is None:
return None, None
return _approval_mode_settings(approval_mode)
class Codex:
"""Minimal typed SDK surface for app-server v2."""
@@ -140,8 +183,7 @@ class Codex:
def thread_start(
self,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode = ApprovalMode.auto_review,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -152,9 +194,11 @@ class Codex:
personality: Personality | None = None,
sandbox: SandboxMode | None = None,
service_name: str | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
session_start_source: ThreadStartSource | None = None,
thread_source: ThreadSource | None = None,
) -> Thread:
approval_policy, approvals_reviewer = _approval_mode_settings(approval_mode)
params = ThreadStartParams(
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
@@ -170,6 +214,7 @@ class Codex:
service_name=service_name,
service_tier=service_tier,
session_start_source=session_start_source,
thread_source=thread_source,
)
started = self._client.thread_start(params)
return Thread(self._client, started.thread.id)
@@ -206,8 +251,7 @@ class Codex:
self,
thread_id: str,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode | None = None,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -216,8 +260,11 @@ class Codex:
model_provider: str | None = None,
personality: Personality | None = None,
sandbox: SandboxMode | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
) -> Thread:
approval_policy, approvals_reviewer = _approval_mode_override_settings(
approval_mode
)
params = ThreadResumeParams(
thread_id=thread_id,
approval_policy=approval_policy,
@@ -239,8 +286,7 @@ class Codex:
self,
thread_id: str,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode | None = None,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -249,8 +295,12 @@ class Codex:
model: str | None = None,
model_provider: str | None = None,
sandbox: SandboxMode | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
thread_source: ThreadSource | None = None,
) -> Thread:
approval_policy, approvals_reviewer = _approval_mode_override_settings(
approval_mode
)
params = ThreadForkParams(
thread_id=thread_id,
approval_policy=approval_policy,
@@ -264,6 +314,7 @@ class Codex:
model_provider=model_provider,
sandbox=sandbox,
service_tier=service_tier,
thread_source=thread_source,
)
forked = self._client.thread_fork(thread_id, params)
return Thread(self._client, forked.thread.id)
@@ -274,6 +325,7 @@ class Codex:
def thread_unarchive(self, thread_id: str) -> Thread:
unarchived = self._client.thread_unarchive(thread_id)
return Thread(self._client, unarchived.thread.id)
# END GENERATED: Codex.flat_methods
def models(self, *, include_hidden: bool = False) -> ModelListResponse:
@@ -336,8 +388,7 @@ class AsyncCodex:
async def thread_start(
self,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode = ApprovalMode.auto_review,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -348,10 +399,12 @@ class AsyncCodex:
personality: Personality | None = None,
sandbox: SandboxMode | None = None,
service_name: str | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
session_start_source: ThreadStartSource | None = None,
thread_source: ThreadSource | None = None,
) -> AsyncThread:
await self._ensure_initialized()
approval_policy, approvals_reviewer = _approval_mode_settings(approval_mode)
params = ThreadStartParams(
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
@@ -367,6 +420,7 @@ class AsyncCodex:
service_name=service_name,
service_tier=service_tier,
session_start_source=session_start_source,
thread_source=thread_source,
)
started = await self._client.thread_start(params)
return AsyncThread(self, started.thread.id)
@@ -404,8 +458,7 @@ class AsyncCodex:
self,
thread_id: str,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode | None = None,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -414,9 +467,12 @@ class AsyncCodex:
model_provider: str | None = None,
personality: Personality | None = None,
sandbox: SandboxMode | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
) -> AsyncThread:
await self._ensure_initialized()
approval_policy, approvals_reviewer = _approval_mode_override_settings(
approval_mode
)
params = ThreadResumeParams(
thread_id=thread_id,
approval_policy=approval_policy,
@@ -438,8 +494,7 @@ class AsyncCodex:
self,
thread_id: str,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode | None = None,
base_instructions: str | None = None,
config: JsonObject | None = None,
cwd: str | None = None,
@@ -448,9 +503,13 @@ class AsyncCodex:
model: str | None = None,
model_provider: str | None = None,
sandbox: SandboxMode | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
thread_source: ThreadSource | None = None,
) -> AsyncThread:
await self._ensure_initialized()
approval_policy, approvals_reviewer = _approval_mode_override_settings(
approval_mode
)
params = ThreadForkParams(
thread_id=thread_id,
approval_policy=approval_policy,
@@ -464,6 +523,7 @@ class AsyncCodex:
model_provider=model_provider,
sandbox=sandbox,
service_tier=service_tier,
thread_source=thread_source,
)
forked = await self._client.thread_fork(thread_id, params)
return AsyncThread(self, forked.thread.id)
@@ -476,6 +536,7 @@ class AsyncCodex:
await self._ensure_initialized()
unarchived = await self._client.thread_unarchive(thread_id)
return AsyncThread(self, unarchived.thread.id)
# END GENERATED: AsyncCodex.flat_methods
async def models(self, *, include_hidden: bool = False) -> ModelListResponse:
@@ -492,21 +553,19 @@ class Thread:
self,
input: RunInput,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode | None = None,
cwd: str | None = None,
effort: ReasoningEffort | None = None,
model: str | None = None,
output_schema: JsonObject | None = None,
personality: Personality | None = None,
sandbox_policy: SandboxPolicy | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
summary: ReasoningSummary | None = None,
) -> RunResult:
turn = self.turn(
_normalize_run_input(input),
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
approval_mode=approval_mode,
cwd=cwd,
effort=effort,
model=model,
@@ -527,18 +586,20 @@ class Thread:
self,
input: Input,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode | None = None,
cwd: str | None = None,
effort: ReasoningEffort | None = None,
model: str | None = None,
output_schema: JsonObject | None = None,
personality: Personality | None = None,
sandbox_policy: SandboxPolicy | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
summary: ReasoningSummary | None = None,
) -> TurnHandle:
wire_input = _to_wire_input(input)
approval_policy, approvals_reviewer = _approval_mode_override_settings(
approval_mode
)
params = TurnStartParams(
thread_id=self.id,
input=wire_input,
@@ -555,6 +616,7 @@ class Thread:
)
turn = self._client.turn_start(self.id, wire_input, params=params)
return TurnHandle(self._client, self.id, turn.turn.id)
# END GENERATED: Thread.flat_methods
def read(self, *, include_turns: bool = False) -> ThreadReadResponse:
@@ -576,21 +638,19 @@ class AsyncThread:
self,
input: RunInput,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode | None = None,
cwd: str | None = None,
effort: ReasoningEffort | None = None,
model: str | None = None,
output_schema: JsonObject | None = None,
personality: Personality | None = None,
sandbox_policy: SandboxPolicy | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
summary: ReasoningSummary | None = None,
) -> RunResult:
turn = await self.turn(
_normalize_run_input(input),
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
approval_mode=approval_mode,
cwd=cwd,
effort=effort,
model=model,
@@ -611,19 +671,21 @@ class AsyncThread:
self,
input: Input,
*,
approval_policy: AskForApproval | None = None,
approvals_reviewer: ApprovalsReviewer | None = None,
approval_mode: ApprovalMode | None = None,
cwd: str | None = None,
effort: ReasoningEffort | None = None,
model: str | None = None,
output_schema: JsonObject | None = None,
personality: Personality | None = None,
sandbox_policy: SandboxPolicy | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
summary: ReasoningSummary | None = None,
) -> AsyncTurnHandle:
await self._codex._ensure_initialized()
wire_input = _to_wire_input(input)
approval_policy, approvals_reviewer = _approval_mode_override_settings(
approval_mode
)
params = TurnStartParams(
thread_id=self.id,
input=wire_input,
@@ -644,6 +706,7 @@ class AsyncThread:
params=params,
)
return AsyncTurnHandle(self._codex, self.id, turn.turn.id)
# END GENERATED: AsyncThread.flat_methods
async def read(self, *, include_turns: bool = False) -> ThreadReadResponse:
@@ -674,11 +737,11 @@ class TurnHandle:
return self._client.turn_interrupt(self.thread_id, self.id)
def stream(self) -> Iterator[Notification]:
# TODO: replace this client-wide experimental guard with per-turn event demux.
self._client.acquire_turn_consumer(self.id)
"""Yield only notifications routed to this turn handle."""
self._client.register_turn_notifications(self.id)
try:
while True:
event = self._client.next_notification()
event = self._client.next_turn_notification(self.id)
yield event
if (
event.method == "turn/completed"
@@ -687,7 +750,7 @@ class TurnHandle:
):
break
finally:
self._client.release_turn_consumer(self.id)
self._client.unregister_turn_notifications(self.id)
def run(self) -> AppServerTurn:
completed: TurnCompletedNotification | None = None
@@ -727,12 +790,12 @@ class AsyncTurnHandle:
return await self._codex._client.turn_interrupt(self.thread_id, self.id)
async def stream(self) -> AsyncIterator[Notification]:
"""Yield only notifications routed to this async turn handle."""
await self._codex._ensure_initialized()
# TODO: replace this client-wide experimental guard with per-turn event demux.
self._codex._client.acquire_turn_consumer(self.id)
self._codex._client.register_turn_notifications(self.id)
try:
while True:
event = await self._codex._client.next_notification()
event = await self._codex._client.next_turn_notification(self.id)
yield event
if (
event.method == "turn/completed"
@@ -741,7 +804,7 @@ class AsyncTurnHandle:
):
break
finally:
self._codex._client.release_turn_consumer(self.id)
self._codex._client.unregister_turn_notifications(self.id)
async def run(self) -> AppServerTurn:
completed: TurnCompletedNotification | None = None


@@ -2,7 +2,7 @@ from __future__ import annotations
import asyncio
from collections.abc import Iterator
from typing import AsyncIterator, Callable, Iterable, ParamSpec, TypeVar
from typing import AsyncIterator, Callable, ParamSpec, TypeVar
from pydantic import BaseModel
@@ -40,15 +40,16 @@ class AsyncAppServerClient:
"""Async wrapper around AppServerClient using thread offloading."""
def __init__(self, config: AppServerConfig | None = None) -> None:
"""Create the wrapped sync client that owns the transport process."""
self._sync = AppServerClient(config=config)
# A single stdio transport cannot be read safely from multiple threads.
self._transport_lock = asyncio.Lock()
async def __aenter__(self) -> "AsyncAppServerClient":
"""Start the app-server process when entering an async context."""
await self.start()
return self
async def __aexit__(self, _exc_type, _exc, _tb) -> None:
"""Close the app-server process when leaving an async context."""
await self.close()
async def _call_sync(
@@ -58,32 +59,38 @@ class AsyncAppServerClient:
*args: ParamsT.args,
**kwargs: ParamsT.kwargs,
) -> ReturnT:
async with self._transport_lock:
return await asyncio.to_thread(fn, *args, **kwargs)
"""Run a blocking sync-client operation without blocking the event loop."""
return await asyncio.to_thread(fn, *args, **kwargs)
@staticmethod
def _next_from_iterator(
iterator: Iterator[AgentMessageDeltaNotification],
) -> tuple[bool, AgentMessageDeltaNotification | None]:
"""Convert StopIteration into a value that can cross asyncio.to_thread."""
try:
return True, next(iterator)
except StopIteration:
return False, None
async def start(self) -> None:
"""Start the wrapped sync client in a worker thread."""
await self._call_sync(self._sync.start)
async def close(self) -> None:
"""Close the wrapped sync client in a worker thread."""
await self._call_sync(self._sync.close)
async def initialize(self) -> InitializeResponse:
"""Initialize the app-server session."""
return await self._call_sync(self._sync.initialize)
def acquire_turn_consumer(self, turn_id: str) -> None:
self._sync.acquire_turn_consumer(turn_id)
def register_turn_notifications(self, turn_id: str) -> None:
"""Register a turn notification queue on the wrapped sync client."""
self._sync.register_turn_notifications(turn_id)
def release_turn_consumer(self, turn_id: str) -> None:
self._sync.release_turn_consumer(turn_id)
def unregister_turn_notifications(self, turn_id: str) -> None:
"""Unregister a turn notification queue on the wrapped sync client."""
self._sync.unregister_turn_notifications(turn_id)
async def request(
self,
@@ -92,6 +99,7 @@ class AsyncAppServerClient:
*,
response_model: type[ModelT],
) -> ModelT:
"""Send a typed JSON-RPC request through the wrapped sync client."""
return await self._call_sync(
self._sync.request,
method,
@@ -99,7 +107,10 @@ class AsyncAppServerClient:
response_model=response_model,
)
async def thread_start(self, params: V2ThreadStartParams | JsonObject | None = None) -> ThreadStartResponse:
async def thread_start(
self, params: V2ThreadStartParams | JsonObject | None = None
) -> ThreadStartResponse:
"""Start a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_start, params)
async def thread_resume(
@@ -107,12 +118,19 @@ class AsyncAppServerClient:
thread_id: str,
params: V2ThreadResumeParams | JsonObject | None = None,
) -> ThreadResumeResponse:
"""Resume a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_resume, thread_id, params)
async def thread_list(self, params: V2ThreadListParams | JsonObject | None = None) -> ThreadListResponse:
async def thread_list(
self, params: V2ThreadListParams | JsonObject | None = None
) -> ThreadListResponse:
"""List threads using the wrapped sync client."""
return await self._call_sync(self._sync.thread_list, params)
async def thread_read(self, thread_id: str, include_turns: bool = False) -> ThreadReadResponse:
async def thread_read(
self, thread_id: str, include_turns: bool = False
) -> ThreadReadResponse:
"""Read a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_read, thread_id, include_turns)
async def thread_fork(
@@ -120,18 +138,23 @@ class AsyncAppServerClient:
thread_id: str,
params: V2ThreadForkParams | JsonObject | None = None,
) -> ThreadForkResponse:
"""Fork a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_fork, thread_id, params)
async def thread_archive(self, thread_id: str) -> ThreadArchiveResponse:
"""Archive a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_archive, thread_id)
async def thread_unarchive(self, thread_id: str) -> ThreadUnarchiveResponse:
"""Unarchive a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_unarchive, thread_id)
async def thread_set_name(self, thread_id: str, name: str) -> ThreadSetNameResponse:
"""Rename a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_set_name, thread_id, name)
async def thread_compact(self, thread_id: str) -> ThreadCompactStartResponse:
"""Start thread compaction using the wrapped sync client."""
return await self._call_sync(self._sync.thread_compact, thread_id)
async def turn_start(
@@ -140,9 +163,15 @@ class AsyncAppServerClient:
input_items: list[JsonObject] | JsonObject | str,
params: V2TurnStartParams | JsonObject | None = None,
) -> TurnStartResponse:
return await self._call_sync(self._sync.turn_start, thread_id, input_items, params)
"""Start a turn using the wrapped sync client."""
return await self._call_sync(
self._sync.turn_start, thread_id, input_items, params
)
async def turn_interrupt(self, thread_id: str, turn_id: str) -> TurnInterruptResponse:
async def turn_interrupt(
self, thread_id: str, turn_id: str
) -> TurnInterruptResponse:
"""Interrupt a turn using the wrapped sync client."""
return await self._call_sync(self._sync.turn_interrupt, thread_id, turn_id)
async def turn_steer(
@@ -151,6 +180,7 @@ class AsyncAppServerClient:
expected_turn_id: str,
input_items: list[JsonObject] | JsonObject | str,
) -> TurnSteerResponse:
"""Send steering input to a turn using the wrapped sync client."""
return await self._call_sync(
self._sync.turn_steer,
thread_id,
@@ -159,6 +189,7 @@ class AsyncAppServerClient:
)
async def model_list(self, include_hidden: bool = False) -> ModelListResponse:
"""List models using the wrapped sync client."""
return await self._call_sync(self._sync.model_list, include_hidden)
async def request_with_retry_on_overload(
@@ -171,6 +202,7 @@ class AsyncAppServerClient:
initial_delay_s: float = 0.25,
max_delay_s: float = 2.0,
) -> ModelT:
"""Send a typed request with the sync client's overload retry policy."""
return await self._call_sync(
self._sync.request_with_retry_on_overload,
method,
@@ -182,13 +214,16 @@ class AsyncAppServerClient:
)
async def next_notification(self) -> Notification:
"""Wait for the next global notification without blocking the event loop."""
return await self._call_sync(self._sync.next_notification)
async def wait_for_turn_completed(self, turn_id: str) -> TurnCompletedNotification:
return await self._call_sync(self._sync.wait_for_turn_completed, turn_id)
async def next_turn_notification(self, turn_id: str) -> Notification:
"""Wait for the next notification routed to one turn."""
return await self._call_sync(self._sync.next_turn_notification, turn_id)
async def stream_until_methods(self, methods: Iterable[str] | str) -> list[Notification]:
return await self._call_sync(self._sync.stream_until_methods, methods)
async def wait_for_turn_completed(self, turn_id: str) -> TurnCompletedNotification:
"""Wait for the completion notification routed to one turn."""
return await self._call_sync(self._sync.wait_for_turn_completed, turn_id)
async def stream_text(
self,
@@ -196,13 +231,13 @@ class AsyncAppServerClient:
text: str,
params: V2TurnStartParams | JsonObject | None = None,
) -> AsyncIterator[AgentMessageDeltaNotification]:
async with self._transport_lock:
iterator = self._sync.stream_text(thread_id, text, params)
while True:
has_value, chunk = await asyncio.to_thread(
self._next_from_iterator,
iterator,
)
if not has_value:
break
yield chunk
"""Stream text deltas from one turn without monopolizing the event loop."""
iterator = self._sync.stream_text(thread_id, text, params)
while True:
has_value, chunk = await asyncio.to_thread(
self._next_from_iterator,
iterator,
)
if not has_value:
break
yield chunk


@@ -8,11 +8,11 @@ import uuid
from collections import deque
from dataclasses import dataclass
from pathlib import Path
from typing import Callable, Iterable, Iterator, TypeVar
from typing import Callable, Iterator, TypeVar
from pydantic import BaseModel
from .errors import AppServerError, TransportClosedError, map_jsonrpc_error
from .errors import AppServerError, TransportClosedError
from .generated.notification_registry import NOTIFICATION_MODELS
from .generated.v2_all import (
AgentMessageDeltaNotification,
@@ -43,6 +43,7 @@ from .models import (
Notification,
UnknownNotification,
)
from ._message_router import MessageRouter
from .retry import retry_on_overload
from ._version import __version__ as SDK_VERSION
@@ -75,7 +76,9 @@ def _params_dict(
return dumped
if isinstance(params, dict):
return params
raise TypeError(f"Expected generated params model or dict, got {type(params).__name__}")
raise TypeError(
f"Expected generated params model or dict, got {type(params).__name__}"
)
def _installed_codex_path() -> Path:
@@ -146,11 +149,10 @@ class AppServerClient:
self._approval_handler = approval_handler or self._default_approval_handler
self._proc: subprocess.Popen[str] | None = None
self._lock = threading.Lock()
self._turn_consumer_lock = threading.Lock()
self._active_turn_consumer: str | None = None
self._pending_notifications: deque[Notification] = deque()
self._router = MessageRouter()
self._stderr_lines: deque[str] = deque(maxlen=400)
self._stderr_thread: threading.Thread | None = None
self._reader_thread: threading.Thread | None = None
def __enter__(self) -> "AppServerClient":
self.start()
@@ -189,13 +191,13 @@ class AppServerClient:
)
self._start_stderr_drain_thread()
self._start_reader_thread()
def close(self) -> None:
if self._proc is None:
return
proc = self._proc
self._proc = None
self._active_turn_consumer = None
if proc.stdin:
proc.stdin.close()
@@ -207,6 +209,8 @@ class AppServerClient:
if self._stderr_thread and self._stderr_thread.is_alive():
self._stderr_thread.join(timeout=0.5)
if self._reader_thread and self._reader_thread.is_alive():
self._reader_thread.join(timeout=0.5)
def initialize(self) -> InitializeResponse:
result = self.request(
@@ -239,71 +243,49 @@ class AppServerClient:
return response_model.model_validate(result)
def _request_raw(self, method: str, params: JsonObject | None = None) -> JsonValue:
"""Send a JSON-RPC request and wait for the reader thread to route its response."""
request_id = str(uuid.uuid4())
self._write_message({"id": request_id, "method": method, "params": params or {}})
waiter = self._router.create_response_waiter(request_id)
while True:
msg = self._read_message()
try:
self._write_message(
{"id": request_id, "method": method, "params": params or {}}
)
except BaseException:
self._router.discard_response_waiter(request_id)
raise
if "method" in msg and "id" in msg:
response = self._handle_server_request(msg)
self._write_message({"id": msg["id"], "result": response})
continue
if "method" in msg and "id" not in msg:
self._pending_notifications.append(
self._coerce_notification(msg["method"], msg.get("params"))
)
continue
if msg.get("id") != request_id:
continue
if "error" in msg:
err = msg["error"]
if isinstance(err, dict):
raise map_jsonrpc_error(
int(err.get("code", -32000)),
str(err.get("message", "unknown")),
err.get("data"),
)
raise AppServerError("Malformed JSON-RPC error response")
return msg.get("result")
item = waiter.get()
if isinstance(item, BaseException):
raise item
return item
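The ordering in `_request_raw` matters: the waiter is registered before the request is written, so the reader thread can never route a response for which no queue exists, and the waiter is discarded if the write fails, since no response can ever arrive. A minimal sketch of that ordering under simplified assumptions (the `send_request` helper and module-level `waiters` dict are hypothetical, not the SDK API):

```python
import queue
import threading

waiters: dict[str, queue.Queue] = {}
lock = threading.Lock()


def send_request(request_id: str, write) -> queue.Queue:
    # Register first: once the request bytes hit the wire, the reader
    # thread may deliver the response at any moment.
    waiter: queue.Queue = queue.Queue(maxsize=1)
    with lock:
        waiters[request_id] = waiter
    try:
        write({"id": request_id})
    except BaseException:
        # The request never left, so no response can arrive; drop the
        # waiter instead of leaking it.
        with lock:
            waiters.pop(request_id, None)
        raise
    return waiter
```

Registering after the write would open a race in which a fast response arrives before the waiter exists and is silently dropped.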
def notify(self, method: str, params: JsonObject | None = None) -> None:
"""Send a JSON-RPC notification without waiting for a response."""
self._write_message({"method": method, "params": params or {}})
def next_notification(self) -> Notification:
if self._pending_notifications:
return self._pending_notifications.popleft()
"""Return the next notification that is not scoped to an active turn."""
return self._router.next_global_notification()
while True:
msg = self._read_message()
if "method" in msg and "id" in msg:
response = self._handle_server_request(msg)
self._write_message({"id": msg["id"], "result": response})
continue
if "method" in msg and "id" not in msg:
return self._coerce_notification(msg["method"], msg.get("params"))
def register_turn_notifications(self, turn_id: str) -> None:
"""Start routing notifications for one turn into its dedicated queue."""
self._router.register_turn(turn_id)
def acquire_turn_consumer(self, turn_id: str) -> None:
with self._turn_consumer_lock:
if self._active_turn_consumer is not None:
raise RuntimeError(
"Concurrent turn consumers are not yet supported in the experimental SDK. "
f"Client is already streaming turn {self._active_turn_consumer!r}; "
f"cannot start turn {turn_id!r} until the active consumer finishes."
)
self._active_turn_consumer = turn_id
def unregister_turn_notifications(self, turn_id: str) -> None:
"""Stop routing notifications for one turn into its dedicated queue."""
self._router.unregister_turn(turn_id)
def release_turn_consumer(self, turn_id: str) -> None:
with self._turn_consumer_lock:
if self._active_turn_consumer == turn_id:
self._active_turn_consumer = None
def next_turn_notification(self, turn_id: str) -> Notification:
"""Return the next routed notification for the requested turn id."""
return self._router.next_turn_notification(turn_id)
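The per-turn routing that `register_turn_notifications` and `next_turn_notification` delegate to can be pictured as a demultiplexer with one queue per registered turn plus a global fallback. A simplified stand-in for the SDK's internal router (not its actual implementation):

```python
from collections import deque


class MiniTurnRouter:
    """Illustrative demultiplexer: per-turn queues plus a global fallback queue."""

    def __init__(self) -> None:
        self._turn_queues: dict[str, deque] = {}
        self._global_queue: deque = deque()

    def register_turn(self, turn_id: str) -> None:
        self._turn_queues.setdefault(turn_id, deque())

    def unregister_turn(self, turn_id: str) -> None:
        self._turn_queues.pop(turn_id, None)

    def route(self, turn_id, notification) -> None:
        # Notifications for registered turns go to that turn's queue;
        # everything else lands on the global queue.
        queue = self._turn_queues.get(turn_id) if turn_id else None
        (queue if queue is not None else self._global_queue).append(notification)

    def next_for_turn(self, turn_id: str):
        return self._turn_queues[turn_id].popleft()


router = MiniTurnRouter()
router.register_turn("turn-1")
router.route("turn-1", "delta-a")
router.route("turn-2", "stray")  # unregistered turn falls back to the global queue
assert router.next_for_turn("turn-1") == "delta-a"
assert router._global_queue.popleft() == "stray"
```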
def thread_start(
self, params: V2ThreadStartParams | JsonObject | None = None
) -> ThreadStartResponse:
return self.request(
"thread/start", _params_dict(params), response_model=ThreadStartResponse
)
def thread_resume(
self,
@@ -311,12 +293,20 @@ class AppServerClient:
params: V2ThreadResumeParams | JsonObject | None = None,
) -> ThreadResumeResponse:
payload = {"threadId": thread_id, **_params_dict(params)}
return self.request(
"thread/resume", payload, response_model=ThreadResumeResponse
)
def thread_list(
self, params: V2ThreadListParams | JsonObject | None = None
) -> ThreadListResponse:
return self.request(
"thread/list", _params_dict(params), response_model=ThreadListResponse
)
def thread_read(
self, thread_id: str, include_turns: bool = False
) -> ThreadReadResponse:
return self.request(
"thread/read",
{"threadId": thread_id, "includeTurns": include_turns},
@@ -332,10 +322,18 @@ class AppServerClient:
return self.request("thread/fork", payload, response_model=ThreadForkResponse)
def thread_archive(self, thread_id: str) -> ThreadArchiveResponse:
return self.request(
"thread/archive",
{"threadId": thread_id},
response_model=ThreadArchiveResponse,
)
def thread_unarchive(self, thread_id: str) -> ThreadUnarchiveResponse:
return self.request(
"thread/unarchive",
{"threadId": thread_id},
response_model=ThreadUnarchiveResponse,
)
def thread_set_name(self, thread_id: str, name: str) -> ThreadSetNameResponse:
return self.request(
@@ -357,12 +355,15 @@ class AppServerClient:
input_items: list[JsonObject] | JsonObject | str,
params: V2TurnStartParams | JsonObject | None = None,
) -> TurnStartResponse:
"""Start a turn and register its notification queue as early as possible."""
payload = {
**_params_dict(params),
"threadId": thread_id,
"input": self._normalize_input_items(input_items),
}
started = self.request("turn/start", payload, response_model=TurnStartResponse)
self.register_turn_notifications(started.turn.id)
return started
def turn_interrupt(self, thread_id: str, turn_id: str) -> TurnInterruptResponse:
return self.request(
@@ -412,23 +413,19 @@ class AppServerClient:
)
def wait_for_turn_completed(self, turn_id: str) -> TurnCompletedNotification:
"""Block on the routed turn stream until the matching completion arrives."""
self.register_turn_notifications(turn_id)
try:
while True:
notification = self.next_turn_notification(turn_id)
if (
notification.method == "turn/completed"
and isinstance(notification.payload, TurnCompletedNotification)
and notification.payload.turn.id == turn_id
):
return notification.payload
finally:
self.unregister_turn_notifications(turn_id)
def stream_text(
self,
@@ -436,35 +433,44 @@ class AppServerClient:
text: str,
params: V2TurnStartParams | JsonObject | None = None,
) -> Iterator[AgentMessageDeltaNotification]:
"""Start a text turn and yield only its agent-message delta payloads."""
started = self.turn_start(thread_id, text, params=params)
turn_id = started.turn.id
self.register_turn_notifications(turn_id)
try:
while True:
notification = self.next_turn_notification(turn_id)
if (
notification.method == "item/agentMessage/delta"
and isinstance(notification.payload, AgentMessageDeltaNotification)
and notification.payload.turn_id == turn_id
):
yield notification.payload
continue
if (
notification.method == "turn/completed"
and isinstance(notification.payload, TurnCompletedNotification)
and notification.payload.turn.id == turn_id
):
break
finally:
self.unregister_turn_notifications(turn_id)
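The `stream_text` loop above filters one turn's routed stream down to its agent-message deltas and stops at that turn's completion. The same filtering logic reduced to plain dictionaries (the event shapes are assumptions modeled on the payloads used elsewhere in this diff):

```python
from typing import Iterator


def drain_deltas(events: list[tuple[str, dict]], turn_id: str) -> Iterator[str]:
    """Yield delta text for one turn until that turn's completion event arrives."""
    for method, payload in events:
        if method == "item/agentMessage/delta" and payload.get("turnId") == turn_id:
            yield payload["delta"]
        elif method == "turn/completed" and payload["turn"]["id"] == turn_id:
            break  # stop at this turn's completion; later events are ignored


events = [
    ("item/agentMessage/delta", {"turnId": "turn-1", "delta": "Hel"}),
    ("item/agentMessage/delta", {"turnId": "turn-2", "delta": "x"}),
    ("item/agentMessage/delta", {"turnId": "turn-1", "delta": "lo"}),
    ("turn/completed", {"turn": {"id": "turn-1"}}),
    ("item/agentMessage/delta", {"turnId": "turn-1", "delta": "late"}),
]
assert "".join(drain_deltas(events, "turn-1")) == "Hello"
```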
def _coerce_notification(self, method: str, params: object) -> Notification:
params_dict = params if isinstance(params, dict) else {}
model = NOTIFICATION_MODELS.get(method)
if model is None:
return Notification(
method=method, payload=UnknownNotification(params=params_dict)
)
try:
payload = model.model_validate(params_dict)
except Exception: # noqa: BLE001
return Notification(
method=method, payload=UnknownNotification(params=params_dict)
)
return Notification(method=method, payload=payload)
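`_coerce_notification` never raises on a malformed payload: an unknown method and a validation failure both degrade to an `UnknownNotification` wrapper. The pattern in miniature, with plain callables standing in for the pydantic models:

```python
def coerce(method: str, params: object, models: dict) -> tuple[str, object]:
    """Fallback pattern: unknown method or invalid payload -> raw dict wrapper."""
    params_dict = params if isinstance(params, dict) else {}
    validator = models.get(method)
    if validator is None:
        return method, {"unknown": params_dict}
    try:
        return method, validator(params_dict)
    except Exception:
        # Validation errors are swallowed so a bad event cannot kill the stream.
        return method, {"unknown": params_dict}


models = {"turn/completed": lambda p: {"turn_id": p["turn"]["id"]}}
assert coerce("turn/completed", {"turn": {"id": "t1"}}, models) == (
    "turn/completed",
    {"turn_id": "t1"},
)
assert coerce("turn/completed", {"bad": True}, models) == (
    "turn/completed",
    {"unknown": {"bad": True}},
)
assert coerce("mystery/event", {}, models) == ("mystery/event", {"unknown": {}})
```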
def _normalize_input_items(
@@ -477,7 +483,10 @@ class AppServerClient:
return [input_items]
return input_items
def _default_approval_handler(
self, method: str, params: JsonObject | None
) -> JsonObject:
"""Accept approval requests when the caller did not provide a handler."""
if method == "item/commandExecution/requestApproval":
return {"decision": "accept"}
if method == "item/fileChange/requestApproval":
@@ -498,6 +507,34 @@ class AppServerClient:
self._stderr_thread = threading.Thread(target=_drain, daemon=True)
self._stderr_thread.start()
def _start_reader_thread(self) -> None:
"""Start the sole stdout reader that fans messages into router queues."""
if self._proc is None or self._proc.stdout is None:
return
self._reader_thread = threading.Thread(target=self._reader_loop, daemon=True)
self._reader_thread.start()
def _reader_loop(self) -> None:
"""Continuously classify transport messages into requests, responses, and events."""
try:
while True:
msg = self._read_message()
if "method" in msg and "id" in msg:
response = self._handle_server_request(msg)
self._write_message({"id": msg["id"], "result": response})
continue
if "method" in msg and "id" not in msg:
method = msg["method"]
if isinstance(method, str):
self._router.route_notification(
self._coerce_notification(method, msg.get("params"))
)
continue
self._router.route_response(msg)
except BaseException as exc:
self._router.fail_all(exc)
def _stderr_tail(self, limit: int = 40) -> str:
return "\n".join(list(self._stderr_lines)[-limit:])
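The reader loop's three-way classification of transport messages can be captured in a few lines (a paraphrase of the branches above, not SDK code):

```python
def classify(msg: dict) -> str:
    """Mirror the reader loop's three-way split of stdout messages."""
    if "method" in msg and "id" in msg:
        return "server-request"  # must be answered with {"id": ..., "result": ...}
    if "method" in msg:
        return "notification"    # routed to a per-turn queue or the global queue
    return "response"            # completes a pending client request by id


assert classify({"method": "item/fileChange/requestApproval", "id": 7}) == "server-request"
assert classify({"method": "turn/completed", "params": {}}) == "notification"
assert classify({"id": 7, "result": {}}) == "response"
```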

View File

@@ -35,6 +35,8 @@ from .v2_all import McpToolCallProgressNotification
from .v2_all import ModelReroutedNotification
from .v2_all import ModelVerificationNotification
from .v2_all import PlanDeltaNotification
from .v2_all import ProcessExitedNotification
from .v2_all import ProcessOutputDeltaNotification
from .v2_all import ReasoningSummaryPartAddedNotification
from .v2_all import ReasoningSummaryTextDeltaNotification
from .v2_all import ReasoningTextDeltaNotification
@@ -101,6 +103,8 @@ NOTIFICATION_MODELS: dict[str, type[BaseModel]] = {
"mcpServer/startupStatus/updated": McpServerStatusUpdatedNotification,
"model/rerouted": ModelReroutedNotification,
"model/verification": ModelVerificationNotification,
"process/exited": ProcessExitedNotification,
"process/outputDelta": ProcessOutputDeltaNotification,
"remoteControl/status/changed": RemoteControlStatusChangedNotification,
"serverRequest/resolved": ServerRequestResolvedNotification,
"skills/changed": SkillsChangedNotification,
@@ -130,3 +134,44 @@ NOTIFICATION_MODELS: dict[str, type[BaseModel]] = {
"windows/worldWritableWarning": WindowsWorldWritableWarningNotification,
"windowsSandbox/setupCompleted": WindowsSandboxSetupCompletedNotification,
}
DIRECT_TURN_ID_NOTIFICATION_TYPES: tuple[type[BaseModel], ...] = (
AgentMessageDeltaNotification,
CommandExecutionOutputDeltaNotification,
ContextCompactedNotification,
ErrorNotification,
FileChangeOutputDeltaNotification,
FileChangePatchUpdatedNotification,
HookCompletedNotification,
HookStartedNotification,
ItemCompletedNotification,
ItemGuardianApprovalReviewCompletedNotification,
ItemGuardianApprovalReviewStartedNotification,
ItemStartedNotification,
McpToolCallProgressNotification,
ModelReroutedNotification,
ModelVerificationNotification,
PlanDeltaNotification,
ReasoningSummaryPartAddedNotification,
ReasoningSummaryTextDeltaNotification,
ReasoningTextDeltaNotification,
TerminalInteractionNotification,
ThreadGoalUpdatedNotification,
ThreadTokenUsageUpdatedNotification,
TurnDiffUpdatedNotification,
TurnPlanUpdatedNotification,
)
NESTED_TURN_NOTIFICATION_TYPES: tuple[type[BaseModel], ...] = (
TurnCompletedNotification,
TurnStartedNotification,
)
def notification_turn_id(payload: BaseModel) -> str | None:
"""Return the turn id carried by generated notification payload metadata."""
if isinstance(payload, DIRECT_TURN_ID_NOTIFICATION_TYPES):
return payload.turn_id if isinstance(payload.turn_id, str) else None
if isinstance(payload, NESTED_TURN_NOTIFICATION_TYPES):
return payload.turn.id
return None
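`notification_turn_id` dispatches on payload type: direct payloads carry a `turn_id` field, nested ones carry `turn.id`, and everything else is unscoped. A self-contained sketch with hypothetical dataclasses in place of the generated models:

```python
from dataclasses import dataclass


@dataclass
class TurnRef:
    id: str


@dataclass
class DirectDelta:
    """Hypothetical direct-style payload carrying its own turn_id."""
    turn_id: object


@dataclass
class NestedTurn:
    """Hypothetical nested-style payload carrying turn metadata."""
    turn: TurnRef


def turn_id_of(payload: object):
    """Simplified lookup: direct `turn_id` field, nested `turn.id`, else unscoped."""
    if isinstance(payload, DirectDelta):
        return payload.turn_id if isinstance(payload.turn_id, str) else None
    if isinstance(payload, NestedTurn):
        return payload.turn.id
    return None


assert turn_id_of(DirectDelta(turn_id="turn-1")) == "turn-1"
assert turn_id_of(NestedTurn(turn=TurnRef(id="turn-2"))) == "turn-2"
assert turn_id_of(object()) is None
```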

View File

@@ -0,0 +1,69 @@
"""Public generated app-server model exports for type annotations and matching."""
from __future__ import annotations
from .generated.v2_all import (
ApprovalsReviewer,
AskForApproval,
ModelListResponse,
Personality,
PlanType,
ReasoningEffort,
ReasoningSummary,
SandboxMode,
SandboxPolicy,
SortDirection,
ThreadArchiveResponse,
ThreadCompactStartResponse,
ThreadItem,
ThreadListCwdFilter,
ThreadListResponse,
ThreadReadResponse,
ThreadSetNameResponse,
ThreadSortKey,
ThreadSource,
ThreadSourceKind,
ThreadStartSource,
ThreadTokenUsage,
ThreadTokenUsageUpdatedNotification,
Turn,
TurnCompletedNotification,
TurnInterruptResponse,
TurnStatus,
TurnSteerResponse,
)
from .models import InitializeResponse, JsonObject, Notification
__all__ = [
"ApprovalsReviewer",
"AskForApproval",
"InitializeResponse",
"JsonObject",
"ModelListResponse",
"Notification",
"Personality",
"PlanType",
"ReasoningEffort",
"ReasoningSummary",
"SandboxMode",
"SandboxPolicy",
"SortDirection",
"ThreadArchiveResponse",
"ThreadCompactStartResponse",
"ThreadItem",
"ThreadListCwdFilter",
"ThreadListResponse",
"ThreadReadResponse",
"ThreadSetNameResponse",
"ThreadSortKey",
"ThreadSource",
"ThreadSourceKind",
"ThreadStartSource",
"ThreadTokenUsage",
"ThreadTokenUsageUpdatedNotification",
"Turn",
"TurnCompletedNotification",
"TurnInterruptResponse",
"TurnStatus",
"TurnSteerResponse",
]

View File

@@ -12,5 +12,5 @@ if src_str in sys.path:
sys.path.insert(0, src_str)
for module_name in list(sys.modules):
if module_name == "openai_codex" or module_name.startswith("openai_codex."):
sys.modules.pop(module_name)

View File

@@ -16,6 +16,7 @@ ROOT = Path(__file__).resolve().parents[1]
def _load_update_script_module():
"""Load the maintenance script as a module so tests exercise real helpers."""
script_path = ROOT / "scripts" / "update_sdk_artifacts.py"
spec = importlib.util.spec_from_file_location("update_sdk_artifacts", script_path)
if spec is None or spec.loader is None:
@@ -27,6 +28,7 @@ def _load_update_script_module():
def _load_runtime_setup_module():
"""Load runtime setup without importing the SDK package under test."""
runtime_setup_path = ROOT / "_runtime_setup.py"
spec = importlib.util.spec_from_file_location("_runtime_setup", runtime_setup_path)
if spec is None or spec.loader is None:
@@ -40,11 +42,13 @@ def _load_runtime_setup_module():
def test_generation_has_single_maintenance_entrypoint_script() -> None:
"""Keep artifact workflows routed through one script instead of side entrypoints."""
scripts = sorted(p.name for p in (ROOT / "scripts").glob("*.py"))
assert scripts == ["update_sdk_artifacts.py"]
def test_generate_types_wires_all_generation_steps() -> None:
"""The type generation command should refresh every schema-derived artifact."""
source = (ROOT / "scripts" / "update_sdk_artifacts.py").read_text()
tree = ast.parse(source)
@@ -52,7 +56,8 @@ def test_generate_types_wires_all_generation_steps() -> None:
(
node
for node in tree.body
if isinstance(node, ast.FunctionDef)
and node.name == "generate_types_from_schema_dir"
),
None,
)
@@ -72,19 +77,19 @@ def test_generate_types_wires_all_generation_steps() -> None:
]
def test_schema_normalization_only_flattens_string_literal_oneofs() -> None:
def _load_runtime_schema_bundle(tmp_path: Path) -> dict:
"""Ask the pinned runtime package for a real schema bundle used by tests."""
script = _load_update_script_module()
schema_dir = script.generate_schema_from_pinned_runtime(tmp_path / "schema")
return json.loads(script.schema_bundle_path(schema_dir).read_text())
def test_schema_normalization_only_flattens_string_literal_oneofs(
tmp_path: Path,
) -> None:
"""Schema normalization should only flatten the enum-shaped oneOf variants."""
script = _load_update_script_module()
schema = _load_runtime_schema_bundle(tmp_path)
definitions = schema["definitions"]
flattened = [
name
@@ -94,27 +99,23 @@ def test_schema_normalization_only_flattens_string_literal_oneofs() -> None:
]
assert flattened == [
"PluginAvailability",
"AuthMode",
"InputModality",
"ExperimentalFeatureStage",
"CommandExecOutputStream",
"ProcessOutputStream",
]
def test_python_codegen_schema_annotation_adds_stable_variant_titles() -> None:
def test_python_codegen_schema_annotation_adds_stable_variant_titles(
tmp_path: Path,
) -> None:
"""Schema annotations should give generated protocol classes stable names."""
script = _load_update_script_module()
schema = _load_runtime_schema_bundle(tmp_path)
script._annotate_schema(schema)
definitions = schema["definitions"]
@@ -163,19 +164,20 @@ def test_runtime_package_template_has_no_checked_in_binaries() -> None:
def test_examples_readme_points_to_runtime_version_source_of_truth() -> None:
"""Document that examples should point at the dependency pin, not release lore."""
readme = (ROOT / "examples" / "README.md").read_text()
assert "The pinned runtime version comes from the SDK package dependency." in readme
def test_runtime_distribution_name_is_consistent() -> None:
script = _load_update_script_module()
runtime_setup = _load_runtime_setup_module()
from openai_codex import client as client_module
from openai_codex import _version
assert script.SDK_DISTRIBUTION_NAME == "openai-codex"
assert runtime_setup.SDK_PACKAGE_NAME == "openai-codex"
assert _version.DISTRIBUTION_NAME == "openai-codex"
assert script.RUNTIME_DISTRIBUTION_NAME == "openai-codex-cli-bin"
assert runtime_setup.PACKAGE_NAME == "openai-codex-cli-bin"
assert client_module.RUNTIME_PKG_NAME == "openai-codex-cli-bin"
@@ -185,6 +187,25 @@ def test_runtime_distribution_name_is_consistent() -> None:
)
def test_source_sdk_package_pins_published_runtime() -> None:
"""The source package metadata should pin the runtime wheel that ships schemas."""
script = _load_update_script_module()
pyproject = tomllib.loads((ROOT / "pyproject.toml").read_text())
assert {
"sdk_version": pyproject["project"]["version"],
"runtime_pin": script.pinned_runtime_version(),
"dependencies": pyproject["project"]["dependencies"],
} == {
"sdk_version": "0.131.0a4",
"runtime_pin": "0.131.0a4",
"dependencies": [
"pydantic>=2.12",
"openai-codex-cli-bin==0.131.0a4",
],
}
def test_release_metadata_retries_without_invalid_auth(
monkeypatch: pytest.MonkeyPatch,
) -> None:
@@ -212,11 +233,16 @@ def test_release_metadata_retries_without_invalid_auth(
def test_runtime_setup_uses_pep440_package_version_and_codex_release_tags() -> None:
"""The SDK uses PEP 440 package pins and converts only when fetching releases."""
runtime_setup = _load_runtime_setup_module()
pyproject = tomllib.loads((ROOT / "pyproject.toml").read_text())
assert runtime_setup.PACKAGE_NAME == "openai-codex-cli-bin"
assert runtime_setup.pinned_runtime_version() == pyproject["project"]["version"]
assert (
f"{runtime_setup.PACKAGE_NAME}=={pyproject['project']['version']}"
in pyproject["project"]["dependencies"]
)
assert (
runtime_setup._normalized_package_version("rust-v0.116.0-alpha.1")
== "0.116.0a1"
@@ -352,6 +378,7 @@ def test_stage_runtime_release_can_pin_wheel_platform_tag(tmp_path: Path) -> Non
def test_stage_runtime_release_copies_resource_binaries(tmp_path: Path) -> None:
"""Runtime staging should copy every helper binary into the wheel bin dir."""
script = _load_update_script_module()
fake_binary = tmp_path / script.runtime_binary_name()
helper = tmp_path / "helper"
@@ -382,6 +409,7 @@ def test_stage_runtime_release_copies_resource_binaries(tmp_path: Path) -> None:
def test_runtime_resource_binaries_are_included_by_wheel_config(
tmp_path: Path,
) -> None:
"""The runtime wheel config should include helper binaries beside Codex."""
script = _load_update_script_module()
fake_binary = tmp_path / script.runtime_binary_name()
helper = tmp_path / "helper"
@@ -398,9 +426,7 @@ def test_runtime_resource_binaries_are_included_by_wheel_config(
pyproject = tomllib.loads((staged / "pyproject.toml").read_text())
assert {
"include": pyproject["tool"]["hatch"]["build"]["targets"]["wheel"]["include"],
"helper": (staged / "src" / "codex_cli_bin" / "bin" / "helper").read_text(),
} == {
"include": ["src/codex_cli_bin/bin/**"],
"helper": "fake helper\n",
@@ -415,18 +441,18 @@ def test_stage_sdk_release_injects_exact_runtime_pin(tmp_path: Path) -> None:
)
pyproject = (staged / "pyproject.toml").read_text()
assert 'name = "openai-codex"' in pyproject
assert 'version = "0.116.0a1"' in pyproject
assert '"openai-codex-cli-bin==0.116.0a1"' in pyproject
assert (
'__version__ = "0.116.0a1"'
not in (staged / "src" / "openai_codex" / "__init__.py").read_text()
)
assert (
'client_version: str = "0.116.0a1"'
not in (staged / "src" / "openai_codex" / "client.py").read_text()
)
assert not any((staged / "src" / "openai_codex").glob("bin/**"))
def test_stage_sdk_release_replaces_existing_staging_dir(tmp_path: Path) -> None:
@@ -595,7 +621,7 @@ def test_stage_runtime_stages_binary_without_type_generation(tmp_path: Path) ->
def test_default_runtime_is_resolved_from_installed_runtime_package(
tmp_path: Path,
) -> None:
from openai_codex import client as client_module
fake_binary = tmp_path / ("codex.exe" if client_module.os.name == "nt" else "codex")
fake_binary.write_text("")
@@ -610,7 +636,7 @@ def test_default_runtime_is_resolved_from_installed_runtime_package(
def test_explicit_codex_bin_override_takes_priority(tmp_path: Path) -> None:
from openai_codex import client as client_module
explicit_binary = tmp_path / (
"custom-codex.exe" if client_module.os.name == "nt" else "custom-codex"
@@ -628,7 +654,7 @@ def test_explicit_codex_bin_override_takes_priority(tmp_path: Path) -> None:
def test_missing_runtime_package_requires_explicit_codex_bin() -> None:
from openai_codex import client as client_module
ops = client_module.CodexBinResolverOps(
installed_codex_path=lambda: (_ for _ in ()).throw(
@@ -642,7 +668,7 @@ def test_missing_runtime_package_requires_explicit_codex_bin() -> None:
def test_broken_runtime_package_does_not_fall_back() -> None:
from openai_codex import client as client_module
ops = client_module.CodexBinResolverOps(
installed_codex_path=lambda: (_ for _ in ()).throw(

View File

@@ -2,17 +2,26 @@ from __future__ import annotations
import asyncio
import time
from types import SimpleNamespace
from openai_codex.async_client import AsyncAppServerClient
from openai_codex.generated.v2_all import (
AgentMessageDeltaNotification,
TurnCompletedNotification,
)
from openai_codex.models import Notification, UnknownNotification
def test_async_client_allows_concurrent_transport_calls() -> None:
"""Async wrappers should offload sync calls so concurrent awaits can overlap."""
async def scenario() -> int:
"""Run two blocking sync calls and report peak overlap."""
client = AsyncAppServerClient()
active = 0
max_active = 0
def fake_model_list(include_hidden: bool = False) -> bool:
"""Simulate a blocking sync transport call."""
nonlocal active, max_active
active += 1
max_active = max(max_active, active)
@@ -24,20 +33,24 @@ def test_async_client_serializes_transport_calls() -> None:
await asyncio.gather(client.model_list(), client.model_list())
return max_active
assert asyncio.run(scenario()) == 2
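The overlap this test asserts comes from offloading each blocking sync call to a worker thread, for example via `asyncio.to_thread` (whether the SDK uses `to_thread` or an executor is not shown in this diff; the sketch below only demonstrates the overlap property):

```python
import asyncio
import time


async def offloaded(call):
    """Run a blocking call off the event loop so concurrent awaits can overlap."""
    return await asyncio.to_thread(call)


async def main() -> float:
    start = time.monotonic()
    # Two 50 ms blocking sleeps overlap when each runs in its own thread.
    await asyncio.gather(
        offloaded(lambda: time.sleep(0.05)),
        offloaded(lambda: time.sleep(0.05)),
    )
    return time.monotonic() - start


elapsed = asyncio.run(main())
assert elapsed < 0.09  # serialized execution would take at least 0.10 s
```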
def test_async_stream_text_is_incremental_without_blocking_parallel_calls() -> None:
"""Async text streaming should yield incrementally without blocking other calls."""
async def scenario() -> tuple[str, list[str], bool]:
"""Start a stream, then prove another async client call can finish."""
client = AsyncAppServerClient()
def fake_stream_text(thread_id: str, text: str, params=None): # type: ignore[no-untyped-def]
"""Yield one item before sleeping so the async wrapper can interleave."""
yield "first"
time.sleep(0.03)
yield "second"
yield "third"
def fake_model_list(include_hidden: bool = False) -> str:
"""Return immediately to prove the event loop was not monopolized."""
return "done"
client._sync.stream_text = fake_stream_text # type: ignore[method-assign]
@@ -46,19 +59,167 @@ def test_async_stream_text_is_incremental_and_blocks_parallel_calls() -> None:
stream = client.stream_text("thread-1", "hello")
first = await anext(stream)
competing_call = asyncio.create_task(client.model_list())
await asyncio.sleep(0.01)
competing_call_done_before_stream_done = competing_call.done()
remaining: list[str] = []
async for item in stream:
remaining.append(item)
await competing_call
return first, remaining, competing_call_done_before_stream_done
first, remaining, was_unblocked = asyncio.run(scenario())
assert first == "first"
assert remaining == ["second", "third"]
assert was_unblocked
def test_async_client_turn_notification_methods_delegate_to_sync_client() -> None:
"""Async turn routing methods should preserve sync-client registration semantics."""
async def scenario() -> tuple[list[tuple[str, str]], Notification, str]:
"""Record the sync-client calls made by async turn notification wrappers."""
client = AsyncAppServerClient()
event = Notification(
method="unknown/direct",
payload=UnknownNotification(params={"turnId": "turn-1"}),
)
completed = TurnCompletedNotification.model_validate(
{
"threadId": "thread-1",
"turn": {"id": "turn-1", "items": [], "status": "completed"},
}
)
calls: list[tuple[str, str]] = []
def fake_register(turn_id: str) -> None:
"""Record turn registration through the wrapped sync client."""
calls.append(("register", turn_id))
def fake_unregister(turn_id: str) -> None:
"""Record turn unregistration through the wrapped sync client."""
calls.append(("unregister", turn_id))
def fake_next(turn_id: str) -> Notification:
"""Return one routed notification through the wrapped sync client."""
calls.append(("next", turn_id))
return event
def fake_wait(turn_id: str) -> TurnCompletedNotification:
"""Return one completion through the wrapped sync client."""
calls.append(("wait", turn_id))
return completed
client._sync.register_turn_notifications = fake_register # type: ignore[method-assign]
client._sync.unregister_turn_notifications = fake_unregister # type: ignore[method-assign]
client._sync.next_turn_notification = fake_next # type: ignore[method-assign]
client._sync.wait_for_turn_completed = fake_wait # type: ignore[method-assign]
client.register_turn_notifications("turn-1")
next_event = await client.next_turn_notification("turn-1")
completed_event = await client.wait_for_turn_completed("turn-1")
client.unregister_turn_notifications("turn-1")
return calls, next_event, completed_event.turn.id
calls, next_event, completed_turn_id = asyncio.run(scenario())
assert (
calls,
next_event,
completed_turn_id,
) == (
[
("register", "turn-1"),
("next", "turn-1"),
("wait", "turn-1"),
("unregister", "turn-1"),
],
Notification(
method="unknown/direct",
payload=UnknownNotification(params={"turnId": "turn-1"}),
),
"turn-1",
)
def test_async_stream_text_uses_sync_turn_routing() -> None:
"""Async text streaming should consume the same per-turn routing path as sync."""
async def scenario() -> tuple[list[tuple[str, str]], list[str]]:
"""Record routing calls while streaming two deltas and one completion."""
client = AsyncAppServerClient()
notifications = [
Notification(
method="item/agentMessage/delta",
payload=AgentMessageDeltaNotification.model_validate(
{
"delta": "first",
"itemId": "item-1",
"threadId": "thread-1",
"turnId": "turn-1",
}
),
),
Notification(
method="item/agentMessage/delta",
payload=AgentMessageDeltaNotification.model_validate(
{
"delta": "second",
"itemId": "item-2",
"threadId": "thread-1",
"turnId": "turn-1",
}
),
),
Notification(
method="turn/completed",
payload=TurnCompletedNotification.model_validate(
{
"threadId": "thread-1",
"turn": {"id": "turn-1", "items": [], "status": "completed"},
}
),
),
]
calls: list[tuple[str, str]] = []
def fake_turn_start(thread_id: str, text: str, *, params=None): # type: ignore[no-untyped-def]
"""Return a started turn id while recording the request thread."""
calls.append(("turn_start", thread_id))
return SimpleNamespace(turn=SimpleNamespace(id="turn-1"))
def fake_register(turn_id: str) -> None:
"""Record stream registration for the started turn."""
calls.append(("register", turn_id))
def fake_next(turn_id: str) -> Notification:
"""Return the next queued turn notification."""
calls.append(("next", turn_id))
return notifications.pop(0)
def fake_unregister(turn_id: str) -> None:
"""Record stream cleanup for the started turn."""
calls.append(("unregister", turn_id))
client._sync.turn_start = fake_turn_start # type: ignore[method-assign]
client._sync.register_turn_notifications = fake_register # type: ignore[method-assign]
client._sync.next_turn_notification = fake_next # type: ignore[method-assign]
client._sync.unregister_turn_notifications = fake_unregister # type: ignore[method-assign]
chunks = [chunk async for chunk in client.stream_text("thread-1", "hello")]
return calls, [chunk.delta for chunk in chunks]
calls, deltas = asyncio.run(scenario())
assert (calls, deltas) == (
[
("turn_start", "thread-1"),
("register", "turn-1"),
("next", "turn-1"),
("next", "turn-1"),
("next", "turn-1"),
("unregister", "turn-1"),
],
["first", "second"],
)

View File

@@ -3,14 +3,18 @@ from __future__ import annotations
from pathlib import Path
from typing import Any
from openai_codex.client import AppServerClient, _params_dict
from openai_codex.generated.notification_registry import notification_turn_id
from openai_codex.generated.v2_all import (
AgentMessageDeltaNotification,
ApprovalsReviewer,
ThreadListParams,
ThreadResumeResponse,
ThreadTokenUsageUpdatedNotification,
TurnCompletedNotification,
WarningNotification,
)
from openai_codex.models import Notification, UnknownNotification
ROOT = Path(__file__).resolve().parents[1]
@@ -41,11 +45,12 @@ def test_generated_params_models_are_snake_case_and_dump_by_alias() -> None:
def test_generated_v2_bundle_has_single_shared_plan_type_definition() -> None:
source = (ROOT / "src" / "openai_codex" / "generated" / "v2_all.py").read_text()
assert source.count("class PlanType(") == 1
def test_thread_resume_response_accepts_auto_review_reviewer() -> None:
"""Generated response models should keep accepting the auto review enum value."""
response = ThreadResumeResponse.model_validate(
{
"approvalPolicy": "on-request",
@@ -62,6 +67,8 @@ def test_thread_resume_response_accepts_auto_review_reviewer() -> None:
"id": "thread-1",
"modelProvider": "openai",
"preview": "",
# The pinned runtime schema requires the session id on threads.
"sessionId": "session-1",
"source": "cli",
"status": {"type": "idle"},
"turns": [],
@@ -128,3 +135,227 @@ def test_invalid_notification_payload_falls_back_to_unknown() -> None:
assert event.method == "thread/tokenUsage/updated"
assert isinstance(event.payload, UnknownNotification)
def test_generated_notification_turn_id_handles_known_payload_shapes() -> None:
"""Generated routing metadata should cover direct, nested, and unscoped payloads."""
direct = AgentMessageDeltaNotification.model_validate(
{
"delta": "hello",
"itemId": "item-1",
"threadId": "thread-1",
"turnId": "turn-1",
}
)
nested = TurnCompletedNotification.model_validate(
{
"threadId": "thread-1",
"turn": {"id": "turn-2", "items": [], "status": "completed"},
}
)
unscoped = WarningNotification(message="heads up")
assert [
notification_turn_id(direct),
notification_turn_id(nested),
notification_turn_id(unscoped),
] == ["turn-1", "turn-2", None]
def test_turn_notification_router_demuxes_registered_turns() -> None:
"""The router should deliver out-of-order turn events to the matching queues."""
client = AppServerClient()
client.register_turn_notifications("turn-1")
client.register_turn_notifications("turn-2")
client._router.route_notification(
client._coerce_notification(
"item/agentMessage/delta",
{
"delta": "two",
"itemId": "item-2",
"threadId": "thread-1",
"turnId": "turn-2",
},
)
)
client._router.route_notification(
client._coerce_notification(
"item/agentMessage/delta",
{
"delta": "one",
"itemId": "item-1",
"threadId": "thread-1",
"turnId": "turn-1",
},
)
)
first = client.next_turn_notification("turn-1")
second = client.next_turn_notification("turn-2")
assert isinstance(first.payload, AgentMessageDeltaNotification)
assert isinstance(second.payload, AgentMessageDeltaNotification)
assert [
(first.method, first.payload.delta),
(second.method, second.payload.delta),
] == [
("item/agentMessage/delta", "one"),
("item/agentMessage/delta", "two"),
]
def test_client_reader_routes_interleaved_turn_notifications_by_turn_id() -> None:
"""Reader-loop routing should preserve order within each interleaved turn stream."""
client = AppServerClient()
client.register_turn_notifications("turn-1")
client.register_turn_notifications("turn-2")
messages: list[dict[str, object]] = [
{
"method": "item/agentMessage/delta",
"params": {
"delta": "one-a",
"itemId": "item-1",
"threadId": "thread-1",
"turnId": "turn-1",
},
},
{
"method": "item/agentMessage/delta",
"params": {
"delta": "two-a",
"itemId": "item-2",
"threadId": "thread-1",
"turnId": "turn-2",
},
},
{
"method": "item/agentMessage/delta",
"params": {
"delta": "one-b",
"itemId": "item-3",
"threadId": "thread-1",
"turnId": "turn-1",
},
},
{
"method": "item/agentMessage/delta",
"params": {
"delta": "two-b",
"itemId": "item-4",
"threadId": "thread-1",
"turnId": "turn-2",
},
},
]
def fake_read_message() -> dict[str, object]:
"""Feed the reader loop a realistic interleaved stdout sequence."""
if messages:
return messages.pop(0)
raise EOFError
client._read_message = fake_read_message # type: ignore[method-assign]
client._reader_loop()
first_turn_events = [
client.next_turn_notification("turn-1"),
client.next_turn_notification("turn-1"),
]
second_turn_events = [
client.next_turn_notification("turn-2"),
client.next_turn_notification("turn-2"),
]
first_turn_deltas = [
event.payload.delta
for event in first_turn_events
if isinstance(event.payload, AgentMessageDeltaNotification)
]
second_turn_deltas = [
event.payload.delta
for event in second_turn_events
if isinstance(event.payload, AgentMessageDeltaNotification)
]
assert (first_turn_deltas, second_turn_deltas) == (
["one-a", "one-b"],
["two-a", "two-b"],
)
def test_turn_notification_router_buffers_events_before_registration() -> None:
"""Early turn events should be replayed once their TurnHandle registers."""
client = AppServerClient()
client._router.route_notification(
client._coerce_notification(
"item/agentMessage/delta",
{
"delta": "early",
"itemId": "item-1",
"threadId": "thread-1",
"turnId": "turn-1",
},
)
)
client.register_turn_notifications("turn-1")
event = client.next_turn_notification("turn-1")
assert isinstance(event.payload, AgentMessageDeltaNotification)
assert (event.method, event.payload.delta) == (
"item/agentMessage/delta",
"early",
)
def test_turn_notification_router_clears_unregistered_turn_when_completed() -> None:
"""A completed unregistered turn should not leave a pending queue behind."""
client = AppServerClient()
client._router.route_notification(
client._coerce_notification(
"item/agentMessage/delta",
{
"delta": "early",
"itemId": "item-1",
"threadId": "thread-1",
"turnId": "turn-1",
},
)
)
client._router.route_notification(
client._coerce_notification(
"turn/completed",
{
"threadId": "thread-1",
"turn": {"id": "turn-1", "items": [], "status": "completed"},
},
)
)
assert client._router._pending_turn_notifications == {}
def test_turn_notification_router_routes_unknown_turn_notifications() -> None:
"""Unknown notifications should still route when their raw params carry a turn id."""
client = AppServerClient()
client.register_turn_notifications("turn-1")
client.register_turn_notifications("turn-2")
client._router.route_notification(
Notification(
method="unknown/direct",
payload=UnknownNotification(params={"turnId": "turn-1"}),
)
)
client._router.route_notification(
Notification(
method="unknown/nested",
payload=UnknownNotification(params={"turn": {"id": "turn-2"}}),
)
)
first = client.next_turn_notification("turn-1")
second = client.next_turn_notification("turn-2")
assert [first.method, second.method] == ["unknown/direct", "unknown/nested"]
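The router behaviors these tests pin down (per-turn queues, buffering before a consumer registers, and discarding a completed unregistered turn's buffer) can be sketched as a minimal demultiplexer. This is an illustrative stand-in, not the SDK's internal router; names and method shapes are assumptions.

```python
from collections import deque


class TurnRouter:
    """Simplified per-turn event demultiplexer (illustrative only).

    Registered turns get a dedicated queue. Events for turns with no
    registered consumer are buffered and replayed on registration, and a
    completed unregistered turn's buffer is dropped so no state leaks.
    """

    def __init__(self) -> None:
        self.queues: dict[str, deque] = {}
        self.pending: dict[str, deque] = {}

    def register(self, turn_id: str) -> None:
        # Replay any events that arrived before the consumer registered.
        self.queues[turn_id] = self.pending.pop(turn_id, deque())

    def route(self, turn_id: str, event: object, *, completed: bool = False) -> None:
        if turn_id in self.queues:
            self.queues[turn_id].append(event)
        elif completed:
            # A finished turn nobody registered for must not leave a queue behind.
            self.pending.pop(turn_id, None)
        else:
            self.pending.setdefault(turn_id, deque()).append(event)

    def next_event(self, turn_id: str) -> object:
        return self.queues[turn_id].popleft()
```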


@@ -1,5 +1,6 @@
from __future__ import annotations
import importlib.metadata
import os
import subprocess
import sys
@@ -7,13 +8,14 @@ from pathlib import Path
ROOT = Path(__file__).resolve().parents[1]
GENERATED_TARGETS = [
Path("src/codex_app_server/generated/notification_registry.py"),
Path("src/codex_app_server/generated/v2_all.py"),
Path("src/codex_app_server/api.py"),
Path("src/openai_codex/generated/notification_registry.py"),
Path("src/openai_codex/generated/v2_all.py"),
Path("src/openai_codex/api.py"),
]
def _snapshot_target(root: Path, rel_path: Path) -> dict[str, bytes] | bytes | None:
"""Capture one generated artifact so regeneration drift is easy to compare."""
target = root / rel_path
if not target.exists():
return None
@@ -28,16 +30,22 @@ def _snapshot_target(root: Path, rel_path: Path) -> dict[str, bytes] | bytes | N
def _snapshot_targets(root: Path) -> dict[str, dict[str, bytes] | bytes | None]:
"""Capture all checked-in generated artifacts before and after regeneration."""
return {
str(rel_path): _snapshot_target(root, rel_path) for rel_path in GENERATED_TARGETS
str(rel_path): _snapshot_target(root, rel_path)
for rel_path in GENERATED_TARGETS
}
def test_generated_files_are_up_to_date():
"""Regenerating from the pinned runtime package should leave artifacts unchanged."""
before = _snapshot_targets(ROOT)
# Regenerate contract artifacts via a single maintenance entrypoint.
# Regenerate contract artifacts via the pinned runtime package, not a local
# app-server binary from the checkout or CI environment.
assert importlib.metadata.version("openai-codex-cli-bin") == "0.131.0a4"
env = os.environ.copy()
env.pop("CODEX_EXEC_PATH", None)
python_bin = str(Path(sys.executable).parent)
env["PATH"] = f"{python_bin}{os.pathsep}{env.get('PATH', '')}"


@@ -4,21 +4,24 @@ import asyncio
from collections import deque
from pathlib import Path
from types import SimpleNamespace
from typing import Any
import pytest
import codex_app_server.api as public_api_module
from codex_app_server.client import AppServerClient
from codex_app_server.generated.v2_all import (
import openai_codex.api as public_api_module
from openai_codex.client import AppServerClient
from openai_codex.generated.v2_all import (
AgentMessageDeltaNotification,
ItemCompletedNotification,
MessagePhase,
ThreadTokenUsageUpdatedNotification,
TurnCompletedNotification,
TurnStartParams,
TurnStatus,
)
from codex_app_server.models import InitializeResponse, Notification
from codex_app_server.api import (
from openai_codex.models import InitializeResponse, Notification
from openai_codex.api import (
ApprovalMode,
AsyncCodex,
AsyncThread,
AsyncTurnHandle,
@@ -31,6 +34,22 @@ from codex_app_server.api import (
ROOT = Path(__file__).resolve().parents[1]
def _approval_settings(params: list[Any]) -> list[dict[str, object]]:
"""Return serialized approval settings from captured Pydantic params."""
return [
{
key: value
for key, value in param.model_dump(
by_alias=True,
exclude_none=True,
mode="json",
).items()
if key in {"approvalPolicy", "approvalsReviewer"}
}
for param in params
]
def _delta_notification(
*,
thread_id: str = "thread-1",
@@ -82,6 +101,7 @@ def _item_completed_notification(
text: str = "final text",
phase: MessagePhase | None = None,
) -> Notification:
"""Build a realistic completed-item notification accepted by generated models."""
item: dict[str, object] = {
"id": "item-1",
"text": text,
@@ -93,6 +113,8 @@ def _item_completed_notification(
method="item/completed",
payload=ItemCompletedNotification.model_validate(
{
# The pinned runtime schema requires completion timestamps.
"completedAtMs": 1,
"item": item,
"threadId": thread_id,
"turnId": turn_id,
@@ -226,54 +248,224 @@ def test_async_codex_initializes_only_once_under_concurrency() -> None:
asyncio.run(scenario())
def test_turn_stream_rejects_second_active_consumer() -> None:
client = AppServerClient()
notifications: deque[Notification] = deque(
[
_delta_notification(turn_id="turn-1"),
_completed_notification(turn_id="turn-1"),
]
def _approval_mode_turn_params(approval_mode: ApprovalMode) -> TurnStartParams:
"""Build real generated turn params from one public approval mode."""
approval_policy, approvals_reviewer = public_api_module._approval_mode_settings(
approval_mode
)
client.next_notification = notifications.popleft # type: ignore[method-assign]
return TurnStartParams(
thread_id="thread-1",
input=[],
approval_policy=approval_policy,
approvals_reviewer=approvals_reviewer,
)
class CapturingApprovalClient:
"""Collect wrapper params at the app-server client boundary."""
def __init__(self) -> None:
self.params: list[Any] = []
def thread_start(self, params: Any) -> SimpleNamespace:
self.params.append(params)
return SimpleNamespace(thread=SimpleNamespace(id="thread-1"))
def thread_resume(self, thread_id: str, params: Any) -> SimpleNamespace:
self.params.append(params)
return SimpleNamespace(thread=SimpleNamespace(id=thread_id))
def thread_fork(self, thread_id: str, params: Any) -> SimpleNamespace:
self.params.append(params)
return SimpleNamespace(thread=SimpleNamespace(id=f"{thread_id}-fork"))
def turn_start(
self,
thread_id: str,
input: object, # noqa: A002
*,
params: Any,
) -> SimpleNamespace:
self.params.append(params)
return SimpleNamespace(turn=SimpleNamespace(id=f"{thread_id}-turn"))
class CapturingAsyncApprovalClient:
"""Async mirror of CapturingApprovalClient for public async wrappers."""
def __init__(self) -> None:
self.params: list[Any] = []
async def thread_start(self, params: Any) -> SimpleNamespace:
self.params.append(params)
return SimpleNamespace(thread=SimpleNamespace(id="thread-1"))
async def thread_resume(self, thread_id: str, params: Any) -> SimpleNamespace:
self.params.append(params)
return SimpleNamespace(thread=SimpleNamespace(id=thread_id))
async def thread_fork(self, thread_id: str, params: Any) -> SimpleNamespace:
self.params.append(params)
return SimpleNamespace(thread=SimpleNamespace(id=f"{thread_id}-fork"))
async def turn_start(
self,
thread_id: str,
input: object, # noqa: A002
*,
params: Any,
) -> SimpleNamespace:
self.params.append(params)
return SimpleNamespace(turn=SimpleNamespace(id=f"{thread_id}-turn"))
def test_approval_modes_serialize_to_expected_start_params() -> None:
"""ApprovalMode should map to the app-server params sent for new work."""
assert {
mode.value: _approval_settings([_approval_mode_turn_params(mode)])[0]
for mode in ApprovalMode
} == {
"deny_all": {"approvalPolicy": "never"},
"auto_review": {
"approvalPolicy": "on-request",
"approvalsReviewer": "auto_review",
},
}
def test_unknown_approval_mode_is_rejected() -> None:
"""Invalid approval modes should fail before params are constructed."""
with pytest.raises(ValueError, match="deny_all, auto_review"):
public_api_module._approval_mode_settings("allow_all") # type: ignore[arg-type]
def test_approval_defaults_preserve_existing_sync_thread_settings() -> None:
"""Only thread creation should write approval defaults unless callers override."""
client = CapturingApprovalClient()
codex = Codex.__new__(Codex)
codex._client = client
started = codex.thread_start(approval_mode=ApprovalMode.deny_all)
started.turn([])
codex.thread_resume("existing-thread")
codex.thread_fork("existing-thread")
started.turn([], approval_mode=ApprovalMode.auto_review)
assert _approval_settings(client.params) == [
{"approvalPolicy": "never"},
{},
{},
{},
{
"approvalPolicy": "on-request",
"approvalsReviewer": "auto_review",
},
]
def test_approval_defaults_preserve_existing_async_thread_settings() -> None:
"""Async wrappers should follow the same approval override semantics."""
async def scenario() -> None:
client = CapturingAsyncApprovalClient()
codex = AsyncCodex()
codex._client = client # type: ignore[assignment]
codex._initialized = True
started = await codex.thread_start(approval_mode=ApprovalMode.deny_all)
await started.turn([])
await codex.thread_resume("existing-thread")
await codex.thread_fork("existing-thread")
await started.turn([], approval_mode=ApprovalMode.auto_review)
assert _approval_settings(client.params) == [
{"approvalPolicy": "never"},
{},
{},
{},
{
"approvalPolicy": "on-request",
"approvalsReviewer": "auto_review",
},
]
asyncio.run(scenario())
def test_turn_streams_can_consume_multiple_turns_on_one_client() -> None:
"""Two sync TurnHandle streams should advance independently on one client."""
client = AppServerClient()
notifications: dict[str, deque[Notification]] = {
"turn-1": deque(
[
_delta_notification(turn_id="turn-1", text="one"),
_completed_notification(turn_id="turn-1"),
]
),
"turn-2": deque(
[
_delta_notification(turn_id="turn-2", text="two"),
_completed_notification(turn_id="turn-2"),
]
),
}
client.next_turn_notification = lambda turn_id: notifications[turn_id].popleft() # type: ignore[method-assign]
first_stream = TurnHandle(client, "thread-1", "turn-1").stream()
assert next(first_stream).method == "item/agentMessage/delta"
second_stream = TurnHandle(client, "thread-1", "turn-2").stream()
with pytest.raises(RuntimeError, match="Concurrent turn consumers are not yet supported"):
next(second_stream)
assert next(second_stream).method == "item/agentMessage/delta"
assert next(first_stream).method == "turn/completed"
assert next(second_stream).method == "turn/completed"
first_stream.close()
second_stream.close()
def test_async_turn_stream_rejects_second_active_consumer() -> None:
def test_async_turn_streams_can_consume_multiple_turns_on_one_client() -> None:
"""Two async TurnHandle streams should advance independently on one client."""
async def scenario() -> None:
"""Interleave two async streams backed by separate per-turn queues."""
codex = AsyncCodex()
async def fake_ensure_initialized() -> None:
"""Avoid starting a real app-server process for this stream test."""
return None
notifications: deque[Notification] = deque(
[
_delta_notification(turn_id="turn-1"),
_completed_notification(turn_id="turn-1"),
]
)
notifications: dict[str, deque[Notification]] = {
"turn-1": deque(
[
_delta_notification(turn_id="turn-1", text="one"),
_completed_notification(turn_id="turn-1"),
]
),
"turn-2": deque(
[
_delta_notification(turn_id="turn-2", text="two"),
_completed_notification(turn_id="turn-2"),
]
),
}
async def fake_next_notification() -> Notification:
return notifications.popleft()
async def fake_next_notification(turn_id: str) -> Notification:
"""Return the next notification from the requested per-turn queue."""
return notifications[turn_id].popleft()
codex._ensure_initialized = fake_ensure_initialized # type: ignore[method-assign]
codex._client.next_notification = fake_next_notification # type: ignore[method-assign]
codex._client.next_turn_notification = fake_next_notification # type: ignore[method-assign]
first_stream = AsyncTurnHandle(codex, "thread-1", "turn-1").stream()
assert (await anext(first_stream)).method == "item/agentMessage/delta"
second_stream = AsyncTurnHandle(codex, "thread-1", "turn-2").stream()
with pytest.raises(RuntimeError, match="Concurrent turn consumers are not yet supported"):
await anext(second_stream)
assert (await anext(second_stream)).method == "item/agentMessage/delta"
assert (await anext(first_stream)).method == "turn/completed"
assert (await anext(second_stream)).method == "turn/completed"
await first_stream.aclose()
await second_stream.aclose()
asyncio.run(scenario())
@@ -285,7 +477,7 @@ def test_turn_run_returns_completed_turn_payload() -> None:
_completed_notification(),
]
)
client.next_notification = notifications.popleft # type: ignore[method-assign]
client.next_turn_notification = lambda _turn_id: notifications.popleft() # type: ignore[method-assign]
result = TurnHandle(client, "thread-1", "turn-1").run()
@@ -295,6 +487,7 @@ def test_turn_run_returns_completed_turn_payload() -> None:
def test_thread_run_accepts_string_input_and_returns_run_result() -> None:
"""Sync Thread.run should preserve approval settings unless explicitly overridden."""
client = AppServerClient()
item_notification = _item_completed_notification(text="Hello.")
usage_notification = _token_usage_notification()
@@ -305,7 +498,7 @@ def test_thread_run_accepts_string_input_and_returns_run_result() -> None:
_completed_notification(),
]
)
client.next_notification = notifications.popleft # type: ignore[method-assign]
client.next_turn_notification = lambda _turn_id: notifications.popleft() # type: ignore[method-assign]
seen: dict[str, object] = {}
def fake_turn_start(thread_id: str, wire_input: object, *, params=None): # noqa: ANN001,ANN202
@@ -318,12 +511,20 @@ def test_thread_run_accepts_string_input_and_returns_run_result() -> None:
result = Thread(client, "thread-1").run("hello")
assert seen["thread_id"] == "thread-1"
assert seen["wire_input"] == [{"type": "text", "text": "hello"}]
assert result == RunResult(
final_response="Hello.",
items=[item_notification.payload.item],
usage=usage_notification.payload.token_usage,
assert (
seen["thread_id"],
seen["wire_input"],
_approval_settings([seen["params"]]),
result,
) == (
"thread-1",
[{"type": "text", "text": "hello"}],
[{}],
RunResult(
final_response="Hello.",
items=[item_notification.payload.item],
usage=usage_notification.payload.token_usage,
),
)
@@ -338,7 +539,7 @@ def test_thread_run_uses_last_completed_assistant_message_as_final_response() ->
_completed_notification(),
]
)
client.next_notification = notifications.popleft # type: ignore[method-assign]
client.next_turn_notification = lambda _turn_id: notifications.popleft() # type: ignore[method-assign]
client.turn_start = lambda thread_id, wire_input, *, params=None: SimpleNamespace( # noqa: ARG005,E731
turn=SimpleNamespace(id="turn-1")
)
@@ -363,7 +564,7 @@ def test_thread_run_preserves_empty_last_assistant_message() -> None:
_completed_notification(),
]
)
client.next_notification = notifications.popleft # type: ignore[method-assign]
client.next_turn_notification = lambda _turn_id: notifications.popleft() # type: ignore[method-assign]
client.turn_start = lambda thread_id, wire_input, *, params=None: SimpleNamespace( # noqa: ARG005,E731
turn=SimpleNamespace(id="turn-1")
)
@@ -394,7 +595,7 @@ def test_thread_run_prefers_explicit_final_answer_over_later_commentary() -> Non
_completed_notification(),
]
)
client.next_notification = notifications.popleft # type: ignore[method-assign]
client.next_turn_notification = lambda _turn_id: notifications.popleft() # type: ignore[method-assign]
client.turn_start = lambda thread_id, wire_input, *, params=None: SimpleNamespace( # noqa: ARG005,E731
turn=SimpleNamespace(id="turn-1")
)
@@ -420,7 +621,7 @@ def test_thread_run_returns_none_when_only_commentary_messages_complete() -> Non
_completed_notification(),
]
)
client.next_notification = notifications.popleft # type: ignore[method-assign]
client.next_turn_notification = lambda _turn_id: notifications.popleft() # type: ignore[method-assign]
client.turn_start = lambda thread_id, wire_input, *, params=None: SimpleNamespace( # noqa: ARG005,E731
turn=SimpleNamespace(id="turn-1")
)
@@ -438,7 +639,7 @@ def test_thread_run_raises_on_failed_turn() -> None:
_completed_notification(status="failed", error_message="boom"),
]
)
client.next_notification = notifications.popleft # type: ignore[method-assign]
client.next_turn_notification = lambda _turn_id: notifications.popleft() # type: ignore[method-assign]
client.turn_start = lambda thread_id, wire_input, *, params=None: SimpleNamespace( # noqa: ARG005,E731
turn=SimpleNamespace(id="turn-1")
)
@@ -447,11 +648,61 @@ def test_thread_run_raises_on_failed_turn() -> None:
Thread(client, "thread-1").run("hello")
def test_stream_text_registers_and_consumes_turn_notifications() -> None:
"""stream_text should register, consume, and unregister one turn queue."""
client = AppServerClient()
notifications: deque[Notification] = deque(
[
_delta_notification(text="first"),
_delta_notification(text="second"),
_completed_notification(),
]
)
calls: list[tuple[str, str]] = []
client.turn_start = lambda thread_id, input_items, *, params=None: SimpleNamespace( # noqa: ARG005,E731
turn=SimpleNamespace(id="turn-1")
)
def fake_register(turn_id: str) -> None:
"""Record registration for the turn created by stream_text."""
calls.append(("register", turn_id))
def fake_next(turn_id: str) -> Notification:
"""Return the next queued notification for stream_text."""
calls.append(("next", turn_id))
return notifications.popleft()
def fake_unregister(turn_id: str) -> None:
"""Record cleanup for the turn created by stream_text."""
calls.append(("unregister", turn_id))
client.register_turn_notifications = fake_register # type: ignore[method-assign]
client.next_turn_notification = fake_next # type: ignore[method-assign]
client.unregister_turn_notifications = fake_unregister # type: ignore[method-assign]
chunks = list(client.stream_text("thread-1", "hello"))
assert ([chunk.delta for chunk in chunks], calls) == (
["first", "second"],
[
("register", "turn-1"),
("next", "turn-1"),
("next", "turn-1"),
("next", "turn-1"),
("unregister", "turn-1"),
],
)
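The register, consume, unregister lifecycle this test records is the usual shape for a streaming generator that must clean up its queue even if the consumer stops early. A hedged sketch, with a hypothetical minimal client interface standing in for `AppServerClient`:

```python
from collections import deque
from collections.abc import Iterator
from types import SimpleNamespace


def stream_turn(client, turn_id: str) -> Iterator[object]:
    """Yield turn events, guaranteeing queue cleanup when iteration stops.

    ``client`` is any object exposing the register/next/unregister hooks
    the test above fakes; the ``try``/``finally`` ensures unregistration
    runs on normal completion and on early generator close alike.
    """
    client.register_turn_notifications(turn_id)
    try:
        while True:
            event = client.next_turn_notification(turn_id)
            # Assumed completion marker; the real SDK inspects turn/completed.
            if getattr(event, "method", None) == "turn/completed":
                return
            yield event
    finally:
        client.unregister_turn_notifications(turn_id)


class FakeClient:
    """Record the lifecycle calls while serving canned events."""

    def __init__(self, events) -> None:
        self.events = deque(events)
        self.calls: list[tuple[str, str]] = []

    def register_turn_notifications(self, turn_id: str) -> None:
        self.calls.append(("register", turn_id))

    def next_turn_notification(self, turn_id: str):
        self.calls.append(("next", turn_id))
        return self.events.popleft()

    def unregister_turn_notifications(self, turn_id: str) -> None:
        self.calls.append(("unregister", turn_id))
```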
def test_async_thread_run_accepts_string_input_and_returns_run_result() -> None:
"""Async Thread.run should preserve approvals while collecting routed results."""
async def scenario() -> None:
"""Feed item, usage, and completion events through the async turn stream."""
codex = AsyncCodex()
async def fake_ensure_initialized() -> None:
"""Avoid starting a real app-server process for this run test."""
return None
item_notification = _item_completed_notification(text="Hello async.")
@@ -466,40 +717,60 @@ def test_async_thread_run_accepts_string_input_and_returns_run_result() -> None:
seen: dict[str, object] = {}
async def fake_turn_start(thread_id: str, wire_input: object, *, params=None): # noqa: ANN001,ANN202
"""Capture normalized input and return a synthetic turn id."""
seen["thread_id"] = thread_id
seen["wire_input"] = wire_input
seen["params"] = params
return SimpleNamespace(turn=SimpleNamespace(id="turn-1"))
async def fake_next_notification() -> Notification:
async def fake_next_notification(_turn_id: str) -> Notification:
"""Return the next queued notification for the synthetic turn."""
return notifications.popleft()
codex._ensure_initialized = fake_ensure_initialized # type: ignore[method-assign]
codex._client.turn_start = fake_turn_start # type: ignore[method-assign]
codex._client.next_notification = fake_next_notification # type: ignore[method-assign]
codex._client.next_turn_notification = fake_next_notification # type: ignore[method-assign]
result = await AsyncThread(codex, "thread-1").run("hello")
assert seen["thread_id"] == "thread-1"
assert seen["wire_input"] == [{"type": "text", "text": "hello"}]
assert result == RunResult(
final_response="Hello async.",
items=[item_notification.payload.item],
usage=usage_notification.payload.token_usage,
assert (
seen["thread_id"],
seen["wire_input"],
_approval_settings([seen["params"]]),
result,
) == (
"thread-1",
[{"type": "text", "text": "hello"}],
[{}],
RunResult(
final_response="Hello async.",
items=[item_notification.payload.item],
usage=usage_notification.payload.token_usage,
),
)
asyncio.run(scenario())
def test_async_thread_run_uses_last_completed_assistant_message_as_final_response() -> None:
def test_async_thread_run_uses_last_completed_assistant_message_as_final_response() -> (
None
):
"""Async run should use the last final assistant message as the response text."""
async def scenario() -> None:
"""Feed two completed agent messages through the async per-turn stream."""
codex = AsyncCodex()
async def fake_ensure_initialized() -> None:
"""Avoid starting a real app-server process for this run test."""
return None
first_item_notification = _item_completed_notification(text="First async message")
second_item_notification = _item_completed_notification(text="Second async message")
first_item_notification = _item_completed_notification(
text="First async message"
)
second_item_notification = _item_completed_notification(
text="Second async message"
)
notifications: deque[Notification] = deque(
[
first_item_notification,
@@ -509,14 +780,16 @@ def test_async_thread_run_uses_last_completed_assistant_message_as_final_respons
)
async def fake_turn_start(thread_id: str, wire_input: object, *, params=None): # noqa: ANN001,ANN202,ARG001
"""Return a synthetic turn id after AsyncThread.run builds input."""
return SimpleNamespace(turn=SimpleNamespace(id="turn-1"))
async def fake_next_notification() -> Notification:
async def fake_next_notification(_turn_id: str) -> Notification:
"""Return the next queued notification for that synthetic turn."""
return notifications.popleft()
codex._ensure_initialized = fake_ensure_initialized # type: ignore[method-assign]
codex._client.turn_start = fake_turn_start # type: ignore[method-assign]
codex._client.next_notification = fake_next_notification # type: ignore[method-assign]
codex._client.next_turn_notification = fake_next_notification # type: ignore[method-assign]
result = await AsyncThread(codex, "thread-1").run("hello")
@@ -530,10 +803,14 @@ def test_async_thread_run_uses_last_completed_assistant_message_as_final_respons
def test_async_thread_run_returns_none_when_only_commentary_messages_complete() -> None:
"""Async Thread.run should ignore commentary-only messages for final text."""
async def scenario() -> None:
"""Feed a commentary item and completion through the async turn stream."""
codex = AsyncCodex()
async def fake_ensure_initialized() -> None:
"""Avoid starting a real app-server process for this run test."""
return None
commentary_notification = _item_completed_notification(
@@ -548,14 +825,16 @@ def test_async_thread_run_returns_none_when_only_commentary_messages_complete()
)
async def fake_turn_start(thread_id: str, wire_input: object, *, params=None): # noqa: ANN001,ANN202,ARG001
"""Return a synthetic turn id for commentary-only output."""
return SimpleNamespace(turn=SimpleNamespace(id="turn-1"))
async def fake_next_notification() -> Notification:
async def fake_next_notification(_turn_id: str) -> Notification:
"""Return the next queued commentary/completion notification."""
return notifications.popleft()
codex._ensure_initialized = fake_ensure_initialized # type: ignore[method-assign]
codex._client.turn_start = fake_turn_start # type: ignore[method-assign]
codex._client.next_notification = fake_next_notification # type: ignore[method-assign]
codex._client.next_turn_notification = fake_next_notification # type: ignore[method-assign]
result = await AsyncThread(codex, "thread-1").run("hello")


@@ -6,13 +6,89 @@ import tomllib
from pathlib import Path
from typing import Any
import codex_app_server
from codex_app_server import AppServerConfig, RunResult
from codex_app_server.models import InitializeResponse
from codex_app_server.api import AsyncCodex, AsyncThread, Codex, Thread
import openai_codex
import openai_codex.types as public_types
from openai_codex import (
AppServerConfig,
ApprovalMode,
AsyncCodex,
AsyncThread,
Codex,
RunResult,
Thread,
)
from openai_codex.types import InitializeResponse
EXPECTED_ROOT_EXPORTS = [
"__version__",
"AppServerConfig",
"Codex",
"AsyncCodex",
"ApprovalMode",
"Thread",
"AsyncThread",
"TurnHandle",
"AsyncTurnHandle",
"RunResult",
"Input",
"InputItem",
"TextInput",
"ImageInput",
"LocalImageInput",
"SkillInput",
"MentionInput",
"retry_on_overload",
"AppServerError",
"TransportClosedError",
"JsonRpcError",
"AppServerRpcError",
"ParseError",
"InvalidRequestError",
"MethodNotFoundError",
"InvalidParamsError",
"InternalRpcError",
"ServerBusyError",
"RetryLimitExceededError",
"is_retryable_error",
]
EXPECTED_TYPES_EXPORTS = [
"ApprovalsReviewer",
"AskForApproval",
"InitializeResponse",
"JsonObject",
"ModelListResponse",
"Notification",
"Personality",
"PlanType",
"ReasoningEffort",
"ReasoningSummary",
"SandboxMode",
"SandboxPolicy",
"SortDirection",
"ThreadArchiveResponse",
"ThreadCompactStartResponse",
"ThreadItem",
"ThreadListCwdFilter",
"ThreadListResponse",
"ThreadReadResponse",
"ThreadSetNameResponse",
"ThreadSortKey",
"ThreadSource",
"ThreadSourceKind",
"ThreadStartSource",
"ThreadTokenUsage",
"ThreadTokenUsageUpdatedNotification",
"Turn",
"TurnCompletedNotification",
"TurnInterruptResponse",
"TurnStatus",
"TurnSteerResponse",
]
def _keyword_only_names(fn: object) -> list[str]:
"""Return only user-facing keyword-only parameter names for a public method."""
signature = inspect.signature(fn)
return [
param.name
@@ -21,7 +97,13 @@ def _keyword_only_names(fn: object) -> list[str]:
]
def _keyword_default(fn: object, name: str) -> object:
"""Return the default value for one keyword parameter on a public method."""
return inspect.signature(fn).parameters[name].default
def _assert_no_any_annotations(fn: object) -> None:
"""Reject loose annotations on public wrapper methods."""
signature = inspect.signature(fn)
for param in signature.parameters.values():
if param.annotation is Any:
@@ -33,31 +115,115 @@ def _assert_no_any_annotations(fn: object) -> None:
def test_root_exports_app_server_config() -> None:
"""The root package should expose the process configuration object."""
assert AppServerConfig.__name__ == "AppServerConfig"
def test_root_exports_run_result() -> None:
"""The root package should expose the common-case run result wrapper."""
assert RunResult.__name__ == "RunResult"
def test_root_exports_approval_mode() -> None:
"""The root package should expose the high-level approval mode enum."""
assert [(mode.name, mode.value) for mode in ApprovalMode] == [
("deny_all", "deny_all"),
("auto_review", "auto_review"),
]
def test_package_and_default_client_versions_follow_project_version() -> None:
"""The importable package version should stay aligned with pyproject metadata."""
pyproject_path = Path(__file__).resolve().parents[1] / "pyproject.toml"
pyproject = tomllib.loads(pyproject_path.read_text())
assert codex_app_server.__version__ == pyproject["project"]["version"]
assert AppServerConfig().client_version == codex_app_server.__version__
assert openai_codex.__version__ == pyproject["project"]["version"]
assert AppServerConfig().client_version == openai_codex.__version__
def test_package_includes_py_typed_marker() -> None:
marker = resources.files("codex_app_server").joinpath("py.typed")
"""The wheel should advertise that inline type information is available."""
marker = resources.files("openai_codex").joinpath("py.typed")
assert marker.is_file()
def test_package_root_exports_only_public_api() -> None:
"""The package root should expose the supported SDK surface, not internals."""
assert openai_codex.__all__ == EXPECTED_ROOT_EXPORTS
assert {name: hasattr(openai_codex, name) for name in EXPECTED_ROOT_EXPORTS} == {
name: True for name in EXPECTED_ROOT_EXPORTS
}
assert {
"AppServerClient": hasattr(openai_codex, "AppServerClient"),
"AsyncAppServerClient": hasattr(openai_codex, "AsyncAppServerClient"),
"InitializeResponse": hasattr(openai_codex, "InitializeResponse"),
"ThreadStartParams": hasattr(openai_codex, "ThreadStartParams"),
"TurnStartParams": hasattr(openai_codex, "TurnStartParams"),
"TurnCompletedNotification": hasattr(openai_codex, "TurnCompletedNotification"),
"TurnStatus": hasattr(openai_codex, "TurnStatus"),
} == {
"AppServerClient": False,
"AsyncAppServerClient": False,
"InitializeResponse": False,
"ThreadStartParams": False,
"TurnStartParams": False,
"TurnCompletedNotification": False,
"TurnStatus": False,
}
def test_package_star_import_matches_public_api() -> None:
"""Star imports should follow the same explicit public API list."""
namespace: dict[str, object] = {}
exec("from openai_codex import *", namespace)
exported = set(namespace) - {"__builtins__"}
assert exported == set(EXPECTED_ROOT_EXPORTS)
def test_types_module_exports_curated_public_types() -> None:
"""The public type module should be the supported place for app-server models."""
assert public_types.__all__ == EXPECTED_TYPES_EXPORTS
assert {name: hasattr(public_types, name) for name in EXPECTED_TYPES_EXPORTS} == {
name: True for name in EXPECTED_TYPES_EXPORTS
}
def test_types_star_import_matches_public_types() -> None:
"""Star imports from the type module should match its explicit export list."""
namespace: dict[str, object] = {}
exec("from openai_codex.types import *", namespace)
exported = set(namespace) - {"__builtins__"}
assert exported == set(EXPECTED_TYPES_EXPORTS)
def test_examples_use_public_import_surfaces() -> None:
"""Examples should teach users the public root and type-module imports only."""
examples_root = Path(__file__).resolve().parents[1] / "examples"
private_import_markers = [
"openai_codex.api",
"openai_codex.client",
"openai_codex.generated",
"openai_codex.models",
"openai_codex.retry",
]
offenders = {
str(path.relative_to(examples_root)): marker
for path in examples_root.rglob("*.py")
for marker in private_import_markers
if marker in path.read_text()
}
assert offenders == {}
def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"""Generated convenience methods should expose typed Pythonic keyword names."""
expected = {
Codex.thread_start: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"base_instructions",
"config",
"cwd",
@@ -70,6 +236,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"service_name",
"service_tier",
"session_start_source",
"thread_source",
],
Codex.thread_list: [
"archived",
@@ -84,8 +251,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"use_state_db_only",
],
Codex.thread_resume: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"base_instructions",
"config",
"cwd",
@@ -97,8 +263,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"service_tier",
],
Codex.thread_fork: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"base_instructions",
"config",
"cwd",
@@ -108,10 +273,10 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"model_provider",
"sandbox",
"service_tier",
"thread_source",
],
Thread.turn: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"cwd",
"effort",
"model",
@@ -122,8 +287,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"summary",
],
Thread.run: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"cwd",
"effort",
"model",
@@ -134,8 +298,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"summary",
],
AsyncCodex.thread_start: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"base_instructions",
"config",
"cwd",
@@ -148,6 +311,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"service_name",
"service_tier",
"session_start_source",
"thread_source",
],
AsyncCodex.thread_list: [
"archived",
@@ -162,8 +326,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"use_state_db_only",
],
AsyncCodex.thread_resume: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"base_instructions",
"config",
"cwd",
@@ -175,8 +338,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"service_tier",
],
AsyncCodex.thread_fork: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"base_instructions",
"config",
"cwd",
@@ -186,10 +348,10 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"model_provider",
"sandbox",
"service_tier",
"thread_source",
],
AsyncThread.turn: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"cwd",
"effort",
"model",
@@ -200,8 +362,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"summary",
],
AsyncThread.run: [
"approval_policy",
"approvals_reviewer",
"approval_mode",
"cwd",
"effort",
"model",
@@ -222,7 +383,38 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
_assert_no_any_annotations(fn)
def test_new_thread_methods_default_to_auto_review() -> None:
"""New threads should start with auto-review unless callers opt out."""
funcs = [
Codex.thread_start,
AsyncCodex.thread_start,
]
assert {fn: _keyword_default(fn, "approval_mode") for fn in funcs} == {
fn: ApprovalMode.auto_review for fn in funcs
}
def test_existing_thread_methods_default_to_preserving_approval_settings() -> None:
"""Existing thread operations should not serialize approval overrides by default."""
funcs = [
Codex.thread_resume,
Codex.thread_fork,
Thread.turn,
Thread.run,
AsyncCodex.thread_resume,
AsyncCodex.thread_fork,
AsyncThread.turn,
AsyncThread.run,
]
assert {fn: _keyword_default(fn, "approval_mode") for fn in funcs} == {
fn: None for fn in funcs
}
def test_lifecycle_methods_are_codex_scoped() -> None:
"""Lifecycle operations should hang off the client rather than thread objects."""
assert hasattr(Codex, "thread_resume")
assert hasattr(Codex, "thread_fork")
assert hasattr(Codex, "thread_archive")
@@ -253,6 +445,7 @@ def test_lifecycle_methods_are_codex_scoped() -> None:
def test_initialize_metadata_parses_user_agent_shape() -> None:
"""Initialize metadata should accept the legacy user-agent-only payload shape."""
payload = InitializeResponse.model_validate({"userAgent": "codex-cli/1.2.3"})
parsed = Codex._validate_initialize(payload)
assert parsed is payload
@@ -263,6 +456,7 @@ def test_initialize_metadata_parses_user_agent_shape() -> None:
def test_initialize_metadata_requires_non_empty_information() -> None:
"""Initialize metadata should fail when the runtime gives no identity signal."""
try:
Codex._validate_initialize(InitializeResponse.model_validate({}))
except RuntimeError as exc:


@@ -205,7 +205,7 @@ def test_real_initialize_and_model_list(runtime_env: PreparedRuntimeEnv) -> None
textwrap.dedent(
"""
import json
from codex_app_server import Codex
from openai_codex import Codex
with Codex() as codex:
models = codex.models(include_hidden=True)
@@ -234,7 +234,7 @@ def test_real_thread_and_turn_start_smoke(runtime_env: PreparedRuntimeEnv) -> No
textwrap.dedent(
"""
import json
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex() as codex:
thread = codex.thread_start(
@@ -271,7 +271,7 @@ def test_real_thread_run_convenience_smoke(runtime_env: PreparedRuntimeEnv) -> N
textwrap.dedent(
"""
import json
from codex_app_server import Codex
from openai_codex import Codex
with Codex() as codex:
thread = codex.thread_start(
@@ -304,7 +304,7 @@ def test_real_async_thread_turn_usage_and_ids_smoke(
"""
import asyncio
import json
from codex_app_server import AsyncCodex, TextInput
from openai_codex import AsyncCodex, TextInput
async def main():
async with AsyncCodex() as codex:
@@ -347,7 +347,7 @@ def test_real_async_thread_run_convenience_smoke(
"""
import asyncio
import json
from codex_app_server import AsyncCodex
from openai_codex import AsyncCodex
async def main():
async with AsyncCodex() as codex:
@@ -436,7 +436,7 @@ def test_real_streaming_smoke_turn_completed(runtime_env: PreparedRuntimeEnv) ->
textwrap.dedent(
"""
import json
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex() as codex:
thread = codex.thread_start(
@@ -469,7 +469,7 @@ def test_real_turn_interrupt_smoke(runtime_env: PreparedRuntimeEnv) -> None:
textwrap.dedent(
"""
import json
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex() as codex:
thread = codex.thread_start(

sdk/python/uv.lock (generated)

@@ -3,9 +3,12 @@ revision = 3
requires-python = ">=3.10"
[options]
exclude-newer = "2026-04-20T18:19:27.620299Z"
exclude-newer = "2026-05-02T06:28:46.47929Z"
exclude-newer-span = "P7D"
[options.exclude-newer-package]
openai-codex-cli-bin = "2026-05-10T00:00:00Z"
[[package]]
name = "annotated-types"
version = "0.7.0"
@@ -278,10 +281,11 @@ wheels = [
]
[[package]]
name = "openai-codex-app-server-sdk"
version = "0.116.0a1"
name = "openai-codex"
version = "0.131.0a4"
source = { editable = "." }
dependencies = [
{ name = "openai-codex-cli-bin" },
{ name = "pydantic" },
]
@@ -295,12 +299,26 @@ dev = [
[package.metadata]
requires-dist = [
{ name = "datamodel-code-generator", marker = "extra == 'dev'", specifier = "==0.31.2" },
{ name = "openai-codex-cli-bin", specifier = "==0.131.0a4" },
{ name = "pydantic", specifier = ">=2.12" },
{ name = "pytest", marker = "extra == 'dev'", specifier = ">=8.0" },
{ name = "ruff", marker = "extra == 'dev'", specifier = ">=0.11" },
]
provides-extras = ["dev"]
[[package]]
name = "openai-codex-cli-bin"
version = "0.131.0a4"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/6b/9f/f9fc4bb1b2b7a20d4d65143ebb4c4dcd2301a718183b539ecb5b1c0ac3ec/openai_codex_cli_bin-0.131.0a4-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:db0f3cb7dda310641ac04fbaf3f128693a3817ab83ae59b67a3c9c74bd53f8b8", size = 88367585, upload-time = "2026-05-09T06:14:09.453Z" },
{ url = "https://files.pythonhosted.org/packages/dc/39/eb95ed0e8156669e895a192dec760be07dabe891c3c6340f7c6487b9a976/openai_codex_cli_bin-0.131.0a4-py3-none-macosx_11_0_arm64.whl", hash = "sha256:6cae5af6edca7f6d3f0bcbbd93cfc8a6dc3e33fb5955af21ae492b6d5d0dcb72", size = 79245567, upload-time = "2026-05-09T06:14:13.581Z" },
{ url = "https://files.pythonhosted.org/packages/0c/92/ade176fa78d746d5ff7a6e371d64740c0d95ab299b0dd58a5404b89b3915/openai_codex_cli_bin-0.131.0a4-py3-none-musllinux_1_1_aarch64.whl", hash = "sha256:5728f9887baf62d7e72f4f242093b3ff81e26c81d80d346fe1eef7eda6838aa8", size = 77758628, upload-time = "2026-05-09T06:14:18.374Z" },
{ url = "https://files.pythonhosted.org/packages/28/e6/bfe6c65f8e3e5499f71b24c3b6e8d07e4d426543d25e429b9b141b544e5f/openai_codex_cli_bin-0.131.0a4-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:d7a47fd3667fbcc216593839c202deffa056e9b3d46c6933e72594d461f4fea0", size = 84535509, upload-time = "2026-05-09T06:14:22.851Z" },
{ url = "https://files.pythonhosted.org/packages/bd/b7/53dc094a691ab6f2ca079e8e865b122843809ac4fad51cac4d59021e599d/openai_codex_cli_bin-0.131.0a4-py3-none-win_amd64.whl", hash = "sha256:c61bcf029672494c4c7fdc8567dbaa659a48bb75641d91c2ade27c1e46803434", size = 88185543, upload-time = "2026-05-09T06:14:27.282Z" },
{ url = "https://files.pythonhosted.org/packages/82/99/e0852ffcf9b4d2794fef83e0c3a267b3c773a776f136e9f7ce19f0c8df42/openai_codex_cli_bin-0.131.0a4-py3-none-win_arm64.whl", hash = "sha256:bbde750186861f102e346ac066f4e9608f515f7b71b16a6e8b7ef1ddc02a97a5", size = 81196380, upload-time = "2026-05-09T06:14:32.103Z" },
]
[[package]]
name = "packaging"
version = "26.1"