Compare commits

21 Commits

Author SHA1 Message Date
Ahmed Ibrahim
7edbdc555c Add approval callback TODO
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 13:49:35 +03:00
Ahmed Ibrahim
d80a43263f Default Python SDK approval policy to never
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 12:03:09 +03:00
Ahmed Ibrahim
78c0d5ca3d Rename Python SDK package to openai-codex
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 11:38:22 +03:00
Ahmed Ibrahim
9306e60848 Define Python SDK public type surface
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 11:34:46 +03:00
Ahmed Ibrahim
8d7a5c27c1 Keep Python SDK type exports
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:39:52 +03:00
Ahmed Ibrahim
692c08faf9 Narrow Python SDK root exports
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:35:44 +03:00
Ahmed Ibrahim
8b8e868140 Document Python SDK CI job
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:24:29 +03:00
Ahmed Ibrahim
2654cc299e Run Python SDK tests in CI
Add a separate Python SDK runner that installs the pinned musl runtime wheel in an Alpine Python container and runs the SDK pytest suite in parallel with existing SDK checks.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:24:17 +03:00
Ahmed Ibrahim
242ca6d8fd Document pinned schema generation helpers
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:24:00 +03:00
Ahmed Ibrahim
b7635f4d77 Generate Python SDK types from pinned runtime
Make the SDK artifact generator fetch schema from the pinned runtime package, regenerate the checked-in Python types from that schema, and assert generated artifacts stay up to date.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:23:41 +03:00
Ahmed Ibrahim
c24694bdb0 Document Python runtime pinning helpers
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:23:11 +03:00
Ahmed Ibrahim
6e10973c78 Pin Python SDK runtime dependency
Make the Python SDK declare its published runtime package dependency directly and resolve the runtime version from that pin instead of inferring it from the SDK package version.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:23:11 +03:00
Ahmed Ibrahim
becbd2a127 Document SDK turn routing helpers
Co-authored-by: Codex <noreply@openai.com>
2026-05-09 10:23:06 +03:00
Ahmed Ibrahim
11e31d7d38 Fix Python runtime wheel release args
Build the stage-runtime command as a single non-empty Bash array and append Linux resource binaries conditionally so macOS runners do not expand an empty optional array under set -u.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:03 +03:00
Ahmed Ibrahim
1d0023776f Build Python runtime wheels in virtualenvs
Avoid installing build into runner-managed Python environments when release jobs build runtime wheels.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:03 +03:00
Ahmed Ibrahim
9b54951688 Make Python runtime publish non-blocking
Allow the Rust release workflow to finish even if the new Python runtime PyPI publish job needs follow-up.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:02 +03:00
Ahmed Ibrahim
3a3e1b477c Pin PyPI publish action to release tag commit
Use the v1.13.0 commit for the PyPI publish action so the pinned action reference has a clear release version.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:02 +03:00
Ahmed Ibrahim
356c6797b8 Use PyPI environment for runtime publishing
Set the Python runtime publish job environment to match the PyPI trusted publisher configuration.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:02 +03:00
Ahmed Ibrahim
bd14ac4758 Bundle Linux bwrap in Python runtime wheels
Pass the release bwrap binary into Linux runtime wheel staging so PyPI installs preserve sandbox fallback behavior.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:02 +03:00
Ahmed Ibrahim
d764740e6f Explain Windows runtime wheel helper packaging
Document why the release workflow includes sandbox helper executables in Windows Python runtime wheels.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:02 +03:00
Ahmed Ibrahim
29e1c96f72 Publish Python runtime wheels on release
Build platform-specific openai-codex-cli-bin wheels from signed release binaries and publish them to PyPI using trusted publishing.

Co-authored-by: Codex <noreply@openai.com>
2026-05-09 09:24:02 +03:00
66 changed files with 1714 additions and 726 deletions

View File

@@ -220,6 +220,48 @@ jobs:
"$dest/${binary}-${{ matrix.target }}.exe"
done
- name: Build Python runtime wheel
shell: bash
run: |
set -euo pipefail
case "${{ matrix.target }}" in
aarch64-pc-windows-msvc)
platform_tag="win_arm64"
;;
x86_64-pc-windows-msvc)
platform_tag="win_amd64"
;;
*)
echo "No Python runtime wheel platform tag for ${{ matrix.target }}"
exit 1
;;
esac
python -m venv "${RUNNER_TEMP}/python-runtime-build-venv"
"${RUNNER_TEMP}/python-runtime-build-venv/Scripts/python.exe" -m pip install build
stage_dir="${RUNNER_TEMP}/openai-codex-cli-bin-${{ matrix.target }}"
wheel_dir="${GITHUB_WORKSPACE}/python-runtime-dist/${{ matrix.target }}"
# Keep the helpers next to codex.exe in the runtime wheel so Windows
# sandbox/elevation lookup matches the standalone release zip.
python "${GITHUB_WORKSPACE}/sdk/python/scripts/update_sdk_artifacts.py" \
stage-runtime \
"$stage_dir" \
"${GITHUB_WORKSPACE}/codex-rs/target/${{ matrix.target }}/release/codex.exe" \
--codex-version "${GITHUB_REF_NAME}" \
--platform-tag "$platform_tag" \
--resource-binary "${GITHUB_WORKSPACE}/codex-rs/target/${{ matrix.target }}/release/codex-command-runner.exe" \
--resource-binary "${GITHUB_WORKSPACE}/codex-rs/target/${{ matrix.target }}/release/codex-windows-sandbox-setup.exe"
"${RUNNER_TEMP}/python-runtime-build-venv/Scripts/python.exe" -m build --wheel --outdir "$wheel_dir" "$stage_dir"
- name: Upload Python runtime wheel
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: python-runtime-wheel-${{ matrix.target }}
path: python-runtime-dist/${{ matrix.target }}/*.whl
if-no-files-found: error
- name: Install DotSlash
uses: facebook/install-dotslash@1e4e7b3e07eaca387acb98f1d4720e0bee8dbb6a # v2
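
The Windows job above and the macOS/Linux job later in this compare map Rust target triples to wheel platform tags with parallel case statements. A consolidated Python sketch of that mapping (the dict and helper are illustrative, not code from the repository):

```python
# Hypothetical helper mirroring the workflow case statements (sketch only).
WHEEL_PLATFORM_TAGS = {
    "aarch64-pc-windows-msvc": "win_arm64",
    "x86_64-pc-windows-msvc": "win_amd64",
    "aarch64-apple-darwin": "macosx_11_0_arm64",
    "x86_64-apple-darwin": "macosx_10_9_x86_64",
    "aarch64-unknown-linux-musl": "musllinux_1_1_aarch64",
    "x86_64-unknown-linux-musl": "musllinux_1_1_x86_64",
}


def wheel_platform_tag(target: str) -> str:
    try:
        return WHEEL_PLATFORM_TAGS[target]
    except KeyError:
        # Matches the workflow's hard failure for unmapped targets.
        raise SystemExit(f"No Python runtime wheel platform tag for {target}")
```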

View File

@@ -399,6 +399,65 @@ jobs:
cp target/${{ matrix.target }}/release/codex-${{ matrix.target }}.dmg "$dest/codex-${{ matrix.target }}.dmg"
fi
- name: Build Python runtime wheel
if: ${{ matrix.bundle == 'primary' }}
shell: bash
run: |
set -euo pipefail
case "${{ matrix.target }}" in
aarch64-apple-darwin)
platform_tag="macosx_11_0_arm64"
;;
x86_64-apple-darwin)
platform_tag="macosx_10_9_x86_64"
;;
aarch64-unknown-linux-musl)
platform_tag="musllinux_1_1_aarch64"
;;
x86_64-unknown-linux-musl)
platform_tag="musllinux_1_1_x86_64"
;;
*)
echo "No Python runtime wheel platform tag for ${{ matrix.target }}"
exit 1
;;
esac
python3 -m venv "${RUNNER_TEMP}/python-runtime-build-venv"
# Do not install into the runner's system Python; macOS runners mark
# the Homebrew Python as externally managed under PEP 668.
"${RUNNER_TEMP}/python-runtime-build-venv/bin/python" -m pip install build
stage_dir="${RUNNER_TEMP}/openai-codex-cli-bin-${{ matrix.target }}"
wheel_dir="${GITHUB_WORKSPACE}/python-runtime-dist/${{ matrix.target }}"
stage_runtime_args=(
"${GITHUB_WORKSPACE}/sdk/python/scripts/update_sdk_artifacts.py"
stage-runtime
"$stage_dir"
"${GITHUB_WORKSPACE}/codex-rs/target/${{ matrix.target }}/release/codex"
--codex-version "${GITHUB_REF_NAME}"
--platform-tag "$platform_tag"
)
if [[ "${{ matrix.target }}" == *linux* ]]; then
# Keep bwrap in the runtime wheel so Linux sandbox fallback behavior
# matches the standalone release bundle on hosts without system bwrap.
stage_runtime_args+=(
--resource-binary
"${GITHUB_WORKSPACE}/codex-rs/target/${{ matrix.target }}/release/bwrap"
)
fi
python3 "${stage_runtime_args[@]}"
"${RUNNER_TEMP}/python-runtime-build-venv/bin/python" -m build --wheel --outdir "$wheel_dir" "$stage_dir"
- name: Upload Python runtime wheel
if: ${{ matrix.bundle == 'primary' }}
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: python-runtime-wheel-${{ matrix.target }}
path: python-runtime-dist/${{ matrix.target }}/*.whl
if-no-files-found: error
- name: Compress artifacts
shell: bash
run: |
@@ -478,6 +537,7 @@ jobs:
tag: ${{ github.ref_name }}
should_publish_npm: ${{ steps.npm_publish_settings.outputs.should_publish }}
npm_tag: ${{ steps.npm_publish_settings.outputs.npm_tag }}
should_publish_python_runtime: ${{ steps.python_runtime_publish_settings.outputs.should_publish }}
steps:
- name: Checkout repository
@@ -554,6 +614,22 @@ jobs:
echo "npm_tag=" >> "$GITHUB_OUTPUT"
fi
- name: Determine Python runtime publish settings
id: python_runtime_publish_settings
env:
VERSION: ${{ steps.release_name.outputs.name }}
run: |
set -euo pipefail
version="${VERSION}"
if [[ "${version}" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "should_publish=true" >> "$GITHUB_OUTPUT"
elif [[ "${version}" =~ ^[0-9]+\.[0-9]+\.[0-9]+-alpha\.[0-9]+$ ]]; then
echo "should_publish=true" >> "$GITHUB_OUTPUT"
else
echo "should_publish=false" >> "$GITHUB_OUTPUT"
fi
- name: Setup pnpm
uses: pnpm/action-setup@a8198c4bff370c8506180b035930dea56dbd5288 # v5
with:
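
The publish-settings step above gates PyPI publishing on stable `X.Y.Z` releases and numeric `-alpha.N` pre-releases only. A Python equivalent of the two Bash regexes, as a sketch for illustration:

```python
import re

# Mirrors the two [[ ... =~ ... ]] patterns in the step above (sketch only).
_PUBLISHABLE = re.compile(r"\d+\.\d+\.\d+(-alpha\.\d+)?")


def should_publish_python_runtime(version: str) -> bool:
    return _PUBLISHABLE.fullmatch(version) is not None


assert should_publish_python_runtime("0.131.0")
assert should_publish_python_runtime("0.131.0-alpha.4")
assert not should_publish_python_runtime("0.131.0-beta.1")  # beta/rc are not auto-published
```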
@@ -787,6 +863,48 @@ jobs:
exit "${publish_status}"
done
# Publish the platform-specific Python runtime wheels using PyPI trusted publishing.
# PyPI project configuration must trust this workflow and job. Keep this
# non-blocking while the Python runtime publishing path is new; failures still
# need release follow-up, but should not invalidate the Rust release itself.
publish-python-runtime:
# Publish to PyPI for stable releases and alpha pre-releases with numeric suffixes.
if: ${{ needs.release.outputs.should_publish_python_runtime == 'true' }}
name: publish-python-runtime
needs: release
runs-on: ubuntu-latest
continue-on-error: true
environment: pypi
permissions:
id-token: write # Required for PyPI trusted publishing.
contents: read
steps:
- name: Download Python runtime wheels from release
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
RELEASE_TAG: ${{ needs.release.outputs.tag }}
RELEASE_VERSION: ${{ needs.release.outputs.version }}
run: |
set -euo pipefail
python_version="$RELEASE_VERSION"
python_version="${python_version/-alpha./a}"
python_version="${python_version/-beta./b}"
python_version="${python_version/-rc./rc}"
mkdir -p dist/python-runtime
gh release download "$RELEASE_TAG" \
--repo "${GITHUB_REPOSITORY}" \
--pattern "openai_codex_cli_bin-${python_version}-*.whl" \
--dir dist/python-runtime
ls -lh dist/python-runtime
- name: Publish Python runtime wheels to PyPI
uses: pypa/gh-action-pypi-publish@ed0c53931b1dc9bd32cbe73a98c7f6766f8a527e # v1.13.0
with:
packages-dir: dist/python-runtime
skip-existing: true
winget:
name: winget
needs: release
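
The download step in the publish job rewrites the release version into the PEP 440 form used in wheel filenames: `-alpha.N` becomes `aN`, `-beta.N` becomes `bN`, and `-rc.N` becomes `rcN`. A small Python sketch of the same mapping (assuming, as the step does, that each marker appears at most once):

```python
def pep440_wheel_version(release_version: str) -> str:
    # Mirrors the Bash ${var/pattern/replacement} substitutions above (sketch only).
    return (
        release_version.replace("-alpha.", "a")
        .replace("-beta.", "b")
        .replace("-rc.", "rc")
    )


assert pep440_wheel_version("0.131.0-alpha.4") == "0.131.0a4"
assert pep440_wheel_version("0.131.0") == "0.131.0"
```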

View File

@@ -6,6 +6,39 @@ on:
pull_request: {}
jobs:
python-sdk:
runs-on:
group: codex-runners
labels: codex-linux-x64
timeout-minutes: 10
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
persist-credentials: false
- name: Test Python SDK
shell: bash
run: |
set -euo pipefail
# Run inside Alpine so dependency resolution exercises the pinned
# runtime wheel on the same Linux wheel family that CI installs.
docker run --rm \
--user "$(id -u):$(id -g)" \
-e HOME=/tmp/codex-python-sdk-home \
-e UV_LINK_MODE=copy \
-v "${GITHUB_WORKSPACE}:${GITHUB_WORKSPACE}" \
-w "${GITHUB_WORKSPACE}/sdk/python" \
python:3.12-alpine \
sh -euxc '
python -m venv /tmp/uv
/tmp/uv/bin/python -m pip install uv==0.11.3
/tmp/uv/bin/uv sync --extra dev --frozen
/tmp/uv/bin/uv run --extra dev pytest
'
sdks:
runs-on:
group: codex-runners
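
The `python-sdk` job runs pytest inside `python:3.12-alpine` so the resolver has to install the `musllinux` runtime wheel rather than a glibc build. A hedged sketch of the compatibility check a resolver effectively performs, using the `packaging` library (the wheel filename below is illustrative, not a published artifact):

```python
from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename

# Illustrative filename following the runtime package's naming scheme.
WHEEL = "openai_codex_cli_bin-0.131.0a4-py3-none-musllinux_1_1_x86_64.whl"

_name, _version, _build, wheel_tags = parse_wheel_filename(WHEEL)

# On a musl interpreter (Alpine) some supported tag matches a musllinux wheel
# tag; on a glibc host none does, which is why the suite runs in the container.
print("installable here:", any(tag in wheel_tags for tag in sys_tags()))
```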

View File

@@ -1,6 +1,6 @@
# Codex CLI Runtime for Python SDK
Platform-specific runtime package consumed by the published `openai-codex-app-server-sdk`.
Platform-specific runtime package consumed by the published `openai-codex`.
This package is staged during release so the SDK can pin an exact Codex CLI
version without checking platform binaries into the repo.

View File

@@ -1,8 +1,12 @@
# Codex App Server Python SDK (Experimental)
# OpenAI Codex Python SDK (Experimental)
Experimental Python SDK for `codex app-server` JSON-RPC v2 over stdio, with a small default surface optimized for real scripts and apps.
The generated wire-model layer is currently sourced from the bundled v2 schema and exposed as Pydantic models with snake_case Python fields that serialize back to the app-server's camelCase wire format.
The generated wire-model layer is sourced from the pinned `openai-codex-cli-bin`
runtime package and exposed as Pydantic models with snake_case Python fields
that serialize back to the app-server's camelCase wire format.
The package root exports the ergonomic client API; public app-server value and
event types live in `openai_codex.types`.
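
A minimal sketch of that snake_case-to-camelCase round trip in plain Pydantic v2 (this toy model is illustrative, not one of the generated SDK models):

```python
from pydantic import BaseModel, ConfigDict
from pydantic.alias_generators import to_camel


class TurnLike(BaseModel):
    # Toy stand-in for a generated wire model (not part of the SDK).
    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)

    thread_id: str
    final_response: str | None = None


turn = TurnLike.model_validate({"threadId": "thr_123", "finalResponse": "ok"})
assert turn.thread_id == "thr_123"
assert turn.model_dump(by_alias=True)["threadId"] == "thr_123"
```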
## Install
@@ -21,7 +25,7 @@ automatically.
## Quickstart
```python
from codex_app_server import Codex
from openai_codex import Codex
with Codex() as codex:
thread = codex.thread_start(model="gpt-5")
@@ -68,10 +72,11 @@ notebook bootstrap the pinned runtime package automatically.
```bash
cd sdk/python
uv sync
python scripts/update_sdk_artifacts.py generate-types
python scripts/update_sdk_artifacts.py \
stage-sdk \
/tmp/codex-python-release/openai-codex-app-server-sdk \
/tmp/codex-python-release/openai-codex \
--codex-version <codex-release-tag-or-pep440-version>
python scripts/update_sdk_artifacts.py \
stage-runtime \
@@ -89,13 +94,13 @@ matrix is `macosx_11_0_arm64`, `macosx_10_9_x86_64`,
This supports the CI release flow:
- run `generate-types` before packaging
- stage `openai-codex-app-server-sdk` once with an exact `openai-codex-cli-bin==...` dependency
- stage `openai-codex` once with an exact `openai-codex-cli-bin==...` dependency
- stage `openai-codex-cli-bin` on each supported platform runner with the same pinned runtime version
- build and publish `openai-codex-cli-bin` as platform wheels only; do not publish an sdist
- build and publish `openai-codex-cli-bin` as platform wheels only through PyPI trusted publishing; do not publish an sdist
## Compatibility and versioning
- Package: `openai-codex-app-server-sdk`
- Package: `openai-codex`
- Runtime package: `openai-codex-cli-bin`
- Python: `>=3.10`
- Target protocol: Codex `app-server` JSON-RPC v2
@@ -107,4 +112,4 @@ This supports the CI release flow:
- Use context managers (`with Codex() as codex:`) to ensure shutdown.
- Prefer `thread.run("...")` for the common case. Use `thread.turn(...)` when
you need streaming, steering, or interrupt control.
- For transient overload, use `codex_app_server.retry.retry_on_overload`.
- For transient overload, use `retry_on_overload` from the package root.

View File

@@ -18,7 +18,7 @@ import zipfile
from pathlib import Path
PACKAGE_NAME = "openai-codex-cli-bin"
SDK_PACKAGE_NAME = "openai-codex-app-server-sdk"
SDK_PACKAGE_NAME = "openai-codex"
REPO_SLUG = "openai/codex"
@@ -27,16 +27,22 @@ class RuntimeSetupError(RuntimeError):
def pinned_runtime_version() -> str:
source_version = _source_tree_project_version()
if source_version is not None:
return _normalized_package_version(source_version)
"""Return the exact runtime version pinned by the SDK package dependency."""
source_pin = _source_tree_runtime_dependency_version()
if source_pin is not None:
return _normalized_package_version(source_pin)
try:
return _normalized_package_version(importlib.metadata.version(SDK_PACKAGE_NAME))
installed_pin = _installed_sdk_runtime_dependency_version()
except importlib.metadata.PackageNotFoundError as exc:
raise RuntimeSetupError(
f"Unable to resolve {SDK_PACKAGE_NAME} version for runtime pinning."
f"Unable to resolve {SDK_PACKAGE_NAME} metadata for runtime pinning."
) from exc
if installed_pin is None:
raise RuntimeSetupError(
f"Unable to resolve {PACKAGE_NAME} dependency pin from {SDK_PACKAGE_NAME}."
)
return _normalized_package_version(installed_pin)
def ensure_runtime_package_installed(
@@ -399,20 +405,33 @@ def _release_tag(version: str) -> str:
return f"rust-v{_codex_release_version(version)}"
def _source_tree_project_version() -> str | None:
def _source_tree_runtime_dependency_version() -> str | None:
"""Read the runtime dependency pin when the SDK is running from a checkout."""
pyproject_path = Path(__file__).resolve().parent / "pyproject.toml"
if not pyproject_path.exists():
return None
match = re.search(
r'(?m)^version = "([^"]+)"$',
pyproject_path.read_text(encoding="utf-8"),
)
match = re.search(_runtime_dependency_pin_pattern(), pyproject_path.read_text())
if match is None:
return None
return match.group(1)
def _installed_sdk_runtime_dependency_version() -> str | None:
"""Read the runtime dependency pin from installed package metadata."""
requirements = importlib.metadata.requires(SDK_PACKAGE_NAME) or []
for requirement in requirements:
match = re.search(_runtime_dependency_pin_pattern(), requirement)
if match is not None:
return match.group(1)
return None
def _runtime_dependency_pin_pattern() -> str:
"""Match the exact runtime dependency pin in TOML and wheel metadata."""
return rf'{re.escape(PACKAGE_NAME)}\s*==\s*"?([^",;\s]+)"?'
__all__ = [
"PACKAGE_NAME",
"SDK_PACKAGE_NAME",

View File

@@ -1,13 +1,14 @@
# Codex App Server SDK — API Reference
# OpenAI Codex SDK — API Reference
Public surface of `codex_app_server` for app-server v2.
Public surface of `openai_codex` for app-server v2.
This SDK surface is experimental. Turn streams are routed by turn ID so one client can consume multiple active turns concurrently.
Thread and turn starts currently send `AskForApproval.never` while SDK approval request handling is still pending.
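
Because streams are routed per turn, one async client can drive several active turns at once. A hedged sketch of that pattern using only the surface documented here (model name and prompts illustrative):

```python
import asyncio

from openai_codex import AsyncCodex, TextInput


async def main() -> None:
    async with AsyncCodex() as codex:
        # Two threads, two concurrently active turns on a single client.
        thread_a = await codex.thread_start(model="gpt-5")
        thread_b = await codex.thread_start(model="gpt-5")
        turn_a = await thread_a.turn(TextInput("Summarize the release flow"))
        turn_b = await thread_b.turn(TextInput("List the CI jobs"))
        # run() consumes each turn's stream; routing keeps them independent.
        done_a, done_b = await asyncio.gather(turn_a.run(), turn_b.run())
        print(done_a.status, done_b.status)


asyncio.run(main())
```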
## Package Entry
```python
from codex_app_server import (
from openai_codex import (
Codex,
AsyncCodex,
RunResult,
@@ -15,7 +16,6 @@ from codex_app_server import (
AsyncThread,
TurnHandle,
AsyncTurnHandle,
InitializeResponse,
Input,
InputItem,
TextInput,
@@ -23,14 +23,18 @@ from codex_app_server import (
LocalImageInput,
SkillInput,
MentionInput,
)
from openai_codex.types import (
InitializeResponse,
ThreadItem,
ThreadTokenUsage,
TurnStatus,
)
from codex_app_server.generated.v2_all import ThreadItem, ThreadTokenUsage
```
- Version: `codex_app_server.__version__`
- Version: `openai_codex.__version__`
- Requires Python >= 3.10
- Canonical generated app-server models live in `codex_app_server.generated.v2_all`
- Public app-server value and event types live in `openai_codex.types`
## Codex (sync)
@@ -124,7 +128,7 @@ object with:
phase-less assistant message item.
Use `turn(...)` when you need low-level turn control (`stream()`, `steer()`,
`interrupt()`) or the canonical generated `Turn` from `TurnHandle.run()`.
`interrupt()`) or the public `Turn` model from `TurnHandle.run()`.
## TurnHandle / AsyncTurnHandle
@@ -133,7 +137,7 @@ Use `turn(...)` when you need low-level turn control (`stream()`, `steer()`,
- `steer(input: Input) -> TurnSteerResponse`
- `interrupt() -> TurnInterruptResponse`
- `stream() -> Iterator[Notification]`
- `run() -> codex_app_server.generated.v2_all.Turn`
- `run() -> openai_codex.types.Turn`
Behavior notes:
@@ -145,7 +149,7 @@ Behavior notes:
- `steer(input: Input) -> Awaitable[TurnSteerResponse]`
- `interrupt() -> Awaitable[TurnInterruptResponse]`
- `stream() -> AsyncIterator[Notification]`
- `run() -> Awaitable[codex_app_server.generated.v2_all.Turn]`
- `run() -> Awaitable[openai_codex.types.Turn]`
Behavior notes:
@@ -165,16 +169,15 @@ InputItem = TextInput | ImageInput | LocalImageInput | SkillInput | MentionInput
Input = list[InputItem] | InputItem
```
## Generated Models
## Public Types
The SDK wrappers return and accept canonical generated app-server models wherever possible:
The SDK wrappers return and accept public app-server models wherever possible:
```python
from codex_app_server.generated.v2_all import (
from openai_codex.types import (
AskForApproval,
ThreadReadResponse,
Turn,
TurnStartParams,
TurnStatus,
)
```
@@ -182,7 +185,7 @@ from codex_app_server.generated.v2_all import (
## Retry + errors
```python
from codex_app_server import (
from openai_codex import (
retry_on_overload,
JsonRpcError,
MethodNotFoundError,
@@ -198,7 +201,7 @@ from codex_app_server import (
## Example
```python
from codex_app_server import Codex
from openai_codex import Codex
with Codex() as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

View File

@@ -8,7 +8,7 @@
## `run()` vs `stream()`
- `TurnHandle.run()` / `AsyncTurnHandle.run()` is the easiest path. It consumes events until completion and returns the canonical generated app-server `Turn` model.
- `TurnHandle.run()` / `AsyncTurnHandle.run()` is the easiest path. It consumes events until completion and returns the public app-server `Turn` model from `openai_codex.types`.
- `TurnHandle.stream()` / `AsyncTurnHandle.stream()` yields raw notifications (`Notification`) so you can react event-by-event.
Choose `run()` for most apps. Choose `stream()` for progress UIs, custom timeout logic, or custom parsing.
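
A short sketch of the two consumption styles (one per handle; names follow the quickstart and are illustrative):

```python
from openai_codex import Codex, TextInput

with Codex() as codex:
    thread = codex.thread_start(model="gpt-5")

    # Option A: block until completion and get the public Turn model.
    turn = thread.turn(TextInput("Explain the release flow")).run()
    print(turn.status)

    # Option B: react event-by-event to raw notifications.
    handle = thread.turn(TextInput("Explain it again, streaming"))
    for notification in handle.stream():
        print(type(notification.payload).__name__)
```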
@@ -68,7 +68,7 @@ cd sdk/python
python scripts/update_sdk_artifacts.py generate-types
python scripts/update_sdk_artifacts.py \
stage-sdk \
/tmp/codex-python-release/openai-codex-app-server-sdk \
/tmp/codex-python-release/openai-codex \
--codex-version <codex-release-tag-or-pep440-version>
python scripts/update_sdk_artifacts.py \
stage-runtime \
@@ -99,5 +99,5 @@ Do not blindly retry all errors. For `InvalidParamsError` or `MethodNotFoundErro
- Starting a new thread for every prompt when you wanted continuity.
- Forgetting to `close()` (or not using context managers).
- Assuming `run()` returns extra SDK-only fields instead of the generated `Turn` model.
- Assuming `run()` returns extra SDK-only fields instead of the public `Turn` model.
- Mixing SDK input classes with raw dicts incorrectly.

View File

@@ -24,7 +24,7 @@ Requirements:
## 2) Run your first turn (sync)
```python
from codex_app_server import Codex
from openai_codex import Codex
with Codex() as codex:
server = codex.metadata.serverInfo
@@ -50,7 +50,7 @@ What happened:
## 3) Continue the same thread (multi-turn)
```python
from codex_app_server import Codex
from openai_codex import Codex
with Codex() as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})
@@ -69,7 +69,7 @@ initializes lazily, and context entry makes startup/shutdown explicit.
```python
import asyncio
from codex_app_server import AsyncCodex
from openai_codex import AsyncCodex
async def main() -> None:
@@ -85,7 +85,7 @@ asyncio.run(main())
## 5) Resume an existing thread
```python
from codex_app_server import Codex
from openai_codex import Codex
THREAD_ID = "thr_123" # replace with a real id
@@ -95,12 +95,13 @@ with Codex() as codex:
print(result.final_response)
```
## 6) Generated models
## 6) Public app-server types
The convenience wrappers live at the package root, but the canonical app-server models live under:
The convenience wrappers live at the package root. Public app-server value and
event types live under:
```python
from codex_app_server.generated.v2_all import Turn, TurnStatus, ThreadReadResponse
from openai_codex.types import ThreadReadResponse, Turn, TurnStatus
```
## 7) Next stops

View File

@@ -15,7 +15,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex
from openai_codex import AsyncCodex
async def main() -> None:

View File

@@ -13,7 +13,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex
from openai_codex import Codex
with Codex(config=runtime_config()) as codex:
print("Server:", server_label(codex.metadata))

View File

@@ -16,7 +16,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, TextInput
from openai_codex import AsyncCodex, TextInput
async def main() -> None:

View File

@@ -14,7 +14,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex(config=runtime_config()) as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

View File

@@ -16,7 +16,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, TextInput
from openai_codex import AsyncCodex, TextInput
async def main() -> None:

View File

@@ -14,7 +14,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex(config=runtime_config()) as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

View File

@@ -11,7 +11,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex
from openai_codex import AsyncCodex
async def main() -> None:

View File

@@ -9,7 +9,7 @@ from _bootstrap import ensure_local_sdk_src, runtime_config, server_label
ensure_local_sdk_src()
from codex_app_server import Codex
from openai_codex import Codex
with Codex(config=runtime_config()) as codex:
print("server:", server_label(codex.metadata))

View File

@@ -11,7 +11,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, TextInput
from openai_codex import AsyncCodex, TextInput
async def main() -> None:

View File

@@ -9,7 +9,7 @@ from _bootstrap import assistant_text_from_turn, ensure_local_sdk_src, find_turn
ensure_local_sdk_src()
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex(config=runtime_config()) as codex:
# Create an initial thread and turn so we have a real thread to resume.

View File

@@ -11,7 +11,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, TextInput
from openai_codex import AsyncCodex, TextInput
async def main() -> None:

View File

@@ -9,7 +9,7 @@ from _bootstrap import ensure_local_sdk_src, runtime_config
ensure_local_sdk_src()
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex(config=runtime_config()) as codex:

View File

@@ -16,7 +16,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, ImageInput, TextInput
from openai_codex import AsyncCodex, ImageInput, TextInput
REMOTE_IMAGE_URL = "https://raw.githubusercontent.com/github/explore/main/topics/python/python.png"

View File

@@ -14,7 +14,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex, ImageInput, TextInput
from openai_codex import Codex, ImageInput, TextInput
REMOTE_IMAGE_URL = "https://raw.githubusercontent.com/github/explore/main/topics/python/python.png"

View File

@@ -17,7 +17,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, LocalImageInput, TextInput
from openai_codex import AsyncCodex, LocalImageInput, TextInput
async def main() -> None:

View File

@@ -15,7 +15,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex, LocalImageInput, TextInput
from openai_codex import Codex, LocalImageInput, TextInput
with temporary_sample_image_path() as image_path:
with Codex(config=runtime_config()) as codex:

View File

@@ -15,7 +15,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex(config=runtime_config()) as codex:
print("Server:", server_label(codex.metadata))

View File

@@ -19,14 +19,14 @@ import random
from collections.abc import Awaitable, Callable
from typing import TypeVar
from codex_app_server import (
from openai_codex import (
AsyncCodex,
JsonRpcError,
ServerBusyError,
TextInput,
TurnStatus,
is_retryable_error,
)
from openai_codex.types import TurnStatus
ResultT = TypeVar("ResultT")

View File

@@ -14,14 +14,14 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import (
from openai_codex import (
Codex,
JsonRpcError,
ServerBusyError,
TextInput,
TurnStatus,
retry_on_overload,
)
from openai_codex.types import TurnStatus
with Codex(config=runtime_config()) as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

View File

@@ -11,9 +11,11 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import (
from openai_codex import (
AsyncCodex,
TextInput,
)
from openai_codex.types import (
ThreadTokenUsageUpdatedNotification,
TurnCompletedNotification,
)

View File

@@ -9,9 +9,11 @@ from _bootstrap import ensure_local_sdk_src, runtime_config
ensure_local_sdk_src()
from codex_app_server import (
from openai_codex import (
Codex,
TextInput,
)
from openai_codex.types import (
ThreadTokenUsageUpdatedNotification,
TurnCompletedNotification,
)

View File

@@ -17,12 +17,14 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import (
AskForApproval,
from openai_codex import (
AsyncCodex,
TextInput,
)
from openai_codex.types import (
AskForApproval,
Personality,
ReasoningSummary,
TextInput,
)
OUTPUT_SCHEMA = {
@@ -44,7 +46,7 @@ PROMPT = (
"Analyze a safe rollout plan for enabling a feature flag in production. "
"Return JSON matching the requested schema."
)
APPROVAL_POLICY = AskForApproval.model_validate("never")
APPROVAL_POLICY = AskForApproval.never
async def main() -> None:

View File

@@ -15,12 +15,14 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import (
AskForApproval,
from openai_codex import (
Codex,
TextInput,
)
from openai_codex.types import (
AskForApproval,
Personality,
ReasoningSummary,
TextInput,
)
OUTPUT_SCHEMA = {
@@ -42,7 +44,7 @@ PROMPT = (
"Analyze a safe rollout plan for enabling a feature flag in production. "
"Return JSON matching the requested schema."
)
APPROVAL_POLICY = AskForApproval.model_validate("never")
APPROVAL_POLICY = AskForApproval.never
with Codex(config=runtime_config()) as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

View File

@@ -11,14 +11,16 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import (
AskForApproval,
from openai_codex import (
AsyncCodex,
TextInput,
)
from openai_codex.types import (
AskForApproval,
Personality,
ReasoningEffort,
ReasoningSummary,
SandboxPolicy,
TextInput,
)
REASONING_RANK = {
@@ -73,7 +75,7 @@ SANDBOX_POLICY = SandboxPolicy.model_validate(
"access": {"type": "fullAccess"},
}
)
APPROVAL_POLICY = AskForApproval.model_validate("never")
APPROVAL_POLICY = AskForApproval.never
async def main() -> None:

View File

@@ -9,14 +9,16 @@ from _bootstrap import assistant_text_from_turn, ensure_local_sdk_src, find_turn
ensure_local_sdk_src()
from codex_app_server import (
AskForApproval,
from openai_codex import (
Codex,
TextInput,
)
from openai_codex.types import (
AskForApproval,
Personality,
ReasoningEffort,
ReasoningSummary,
SandboxPolicy,
TextInput,
)
REASONING_RANK = {
@@ -71,7 +73,7 @@ SANDBOX_POLICY = SandboxPolicy.model_validate(
"access": {"type": "fullAccess"},
}
)
APPROVAL_POLICY = AskForApproval.model_validate("never")
APPROVAL_POLICY = AskForApproval.never
with Codex(config=runtime_config()) as codex:

View File

@@ -15,7 +15,7 @@ ensure_local_sdk_src()
import asyncio
from codex_app_server import AsyncCodex, TextInput
from openai_codex import AsyncCodex, TextInput
async def main() -> None:

View File

@@ -13,7 +13,7 @@ from _bootstrap import (
ensure_local_sdk_src()
from codex_app_server import Codex, TextInput
from openai_codex import Codex, TextInput
with Codex(config=runtime_config()) as codex:
thread = codex.thread_start(model="gpt-5.4", config={"model_reasoning_effort": "high"})

View File

@@ -5,7 +5,8 @@ Each example folder contains runnable versions:
- `sync.py` (public sync surface: `Codex`)
- `async.py` (public async surface: `AsyncCodex`)
All examples intentionally use only public SDK exports from `codex_app_server`.
All examples intentionally use only public SDK exports from `openai_codex`
and `openai_codex.types`.
## Prerequisites
@@ -28,7 +29,7 @@ will download the matching GitHub release artifact, stage a temporary local
`openai-codex-cli-bin` package, install it into your active interpreter, and clean up
the temporary files afterward.
The pinned runtime version comes from the SDK package version.
The pinned runtime version comes from the SDK package dependency.
## Run examples

View File

@@ -35,7 +35,7 @@ def ensure_local_sdk_src() -> Path:
"""Add sdk/python/src to sys.path so examples run without installing the package."""
sdk_python_dir = _SDK_PYTHON_DIR
src_dir = sdk_python_dir / "src"
package_dir = src_dir / "codex_app_server"
package_dir = src_dir / "openai_codex"
if not package_dir.exists():
raise RuntimeError(f"Could not locate local SDK package at {package_dir}")
@@ -49,7 +49,7 @@ def ensure_local_sdk_src() -> Path:
def runtime_config():
"""Return an example-friendly AppServerConfig for repo-source SDK usage."""
from codex_app_server import AppServerConfig
from openai_codex import AppServerConfig
ensure_runtime_package_installed(sys.executable, _SDK_PYTHON_DIR)
return AppServerConfig()

View File

@@ -6,7 +6,7 @@
"source": [
"# Codex Python SDK Walkthrough\n",
"\n",
"Public SDK surface only (`codex_app_server` root exports)."
"Public SDK surface only (`openai_codex` root exports)."
]
},
{
@@ -32,7 +32,7 @@
"\n",
"\n",
"def _is_sdk_python_dir(path: Path) -> bool:\n",
" return (path / 'pyproject.toml').exists() and (path / 'src' / 'codex_app_server').exists()\n",
" return (path / 'pyproject.toml').exists() and (path / 'src' / 'openai_codex').exists()\n",
"\n",
"\n",
"def _iter_home_fallback_candidates(home: Path):\n",
@@ -114,7 +114,7 @@
"\n",
"# Force fresh imports after SDK upgrades in the same notebook kernel.\n",
"for module_name in list(sys.modules):\n",
" if module_name == 'codex_app_server' or module_name.startswith('codex_app_server.'):\n",
" if module_name == 'openai_codex' or module_name.startswith('openai_codex.'):\n",
" sys.modules.pop(module_name, None)\n",
"\n",
"print('Kernel:', sys.executable)\n",
@@ -130,7 +130,7 @@
"source": [
"# Cell 2: imports (public only)\n",
"from _bootstrap import assistant_text_from_turn, find_turn_by_id, server_label\n",
"from codex_app_server import (\n",
"from openai_codex import (\n",
" AsyncCodex,\n",
" Codex,\n",
" ImageInput,\n",
@@ -245,7 +245,7 @@
"source": [
"# Cell 5b: one turn with most optional turn params\n",
"from pathlib import Path\n",
"from codex_app_server import (\n",
"from openai_codex import (\n",
" AskForApproval,\n",
" Personality,\n",
" ReasoningEffort,\n",
@@ -270,7 +270,7 @@
" thread = codex.thread_start(model='gpt-5.4', config={'model_reasoning_effort': 'high'})\n",
" turn = thread.turn(\n",
" TextInput('Propose a safe production feature-flag rollout. Return JSON matching the schema.'),\n",
" approval_policy=AskForApproval.model_validate('never'),\n",
" approval_policy=AskForApproval.never,\n",
" cwd=str(Path.cwd()),\n",
" effort=ReasoningEffort.medium,\n",
" model='gpt-5.4',\n",
@@ -295,7 +295,7 @@
"source": [
"# Cell 5c: choose highest model + highest supported reasoning, then run turns\n",
"from pathlib import Path\n",
"from codex_app_server import (\n",
"from openai_codex import (\n",
" AskForApproval,\n",
" Personality,\n",
" ReasoningEffort,\n",
@@ -361,7 +361,7 @@
"\n",
" second = thread.turn(\n",
" TextInput('Return JSON for a safe feature-flag rollout plan.'),\n",
" approval_policy=AskForApproval.model_validate('never'),\n",
" approval_policy=AskForApproval.never,\n",
" cwd=str(Path.cwd()),\n",
" effort=selected_effort,\n",
" model=selected_model.model,\n",

View File

@@ -3,13 +3,13 @@ requires = ["hatchling>=1.24.0"]
build-backend = "hatchling.build"
[project]
name = "openai-codex-app-server-sdk"
version = "0.116.0a1"
name = "openai-codex"
version = "0.131.0a4"
description = "Python SDK for Codex app-server v2"
readme = "README.md"
requires-python = ">=3.10"
license = { text = "Apache-2.0" }
authors = [{ name = "OpenClaw Assistant" }]
authors = [{ name = "OpenAI" }]
keywords = ["codex", "json-rpc", "sdk", "llm", "app-server"]
classifiers = [
"Development Status :: 4 - Beta",
@@ -22,7 +22,7 @@ classifiers = [
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
]
dependencies = ["pydantic>=2.12"]
dependencies = ["pydantic>=2.12", "openai-codex-cli-bin==0.131.0a4"]
[project.urls]
Homepage = "https://github.com/openai/codex"
@@ -42,14 +42,14 @@ exclude = [
]
[tool.hatch.build.targets.wheel]
packages = ["src/codex_app_server"]
packages = ["src/openai_codex"]
include = [
"src/codex_app_server/py.typed",
"src/openai_codex/py.typed",
]
[tool.hatch.build.targets.sdist]
include = [
"src/codex_app_server/**",
"src/openai_codex/**",
"README.md",
"CHANGELOG.md",
"CONTRIBUTING.md",
@@ -63,8 +63,10 @@ testpaths = ["tests"]
[tool.uv]
exclude-newer = "7 days"
exclude-newer-package = { openai-codex-cli-bin = "2026-05-10T00:00:00Z" }
index-strategy = "first-index"
[tool.uv.pip]
exclude-newer = "7 days"
exclude-newer-package = { openai-codex-cli-bin = "2026-05-10T00:00:00Z" }
index-strategy = "first-index"

View File

@@ -3,6 +3,7 @@ from __future__ import annotations
import argparse
import importlib
import importlib.metadata
import json
import platform
import re
@@ -17,7 +18,7 @@ from dataclasses import dataclass
from pathlib import Path
from typing import Any, Callable, Sequence, get_args, get_origin
SDK_DISTRIBUTION_NAME = "openai-codex-app-server-sdk"
SDK_DISTRIBUTION_NAME = "openai-codex"
RUNTIME_DISTRIBUTION_NAME = "openai-codex-cli-bin"
@@ -33,19 +34,14 @@ def python_runtime_root() -> Path:
return repo_root() / "sdk" / "python-runtime"
def schema_bundle_path() -> Path:
return (
repo_root()
/ "codex-rs"
/ "app-server-protocol"
/ "schema"
/ "json"
/ "codex_app_server_protocol.v2.schemas.json"
)
def sdk_pyproject_path() -> Path:
"""Return the SDK pyproject file that owns package pins and versions."""
return sdk_root() / "pyproject.toml"
def schema_root_dir() -> Path:
return repo_root() / "codex-rs" / "app-server-protocol" / "schema" / "json"
def schema_bundle_path(schema_dir: Path) -> Path:
"""Return the aggregate v2 schema bundle emitted by the runtime binary."""
return schema_dir / "codex_app_server_protocol.v2.schemas.json"
def _is_windows() -> bool:
@@ -61,6 +57,7 @@ def staged_runtime_bin_path(root: Path) -> Path:
def staged_runtime_resource_path(root: Path, resource: Path) -> Path:
"""Stage runtime helper binaries beside the main bundled Codex binary."""
# Runtime wheels include the whole bin/ directory, so helper executables
# should be staged beside the main Codex binary instead of changing the
# package template for each platform.
@@ -78,7 +75,7 @@ def run_python_module(module: str, args: list[str], cwd: Path) -> None:
def current_sdk_version() -> str:
match = re.search(
r'^version = "([^"]+)"$',
(sdk_root() / "pyproject.toml").read_text(),
sdk_pyproject_path().read_text(),
flags=re.MULTILINE,
)
if match is None:
@@ -86,6 +83,59 @@ def current_sdk_version() -> str:
return match.group(1)
def pinned_runtime_version() -> str:
"""Read the exact runtime package pin used for schema generation."""
pyproject_text = sdk_pyproject_path().read_text()
match = re.search(r"(?ms)^dependencies = \[(.*?)\]$", pyproject_text)
if match is None:
raise RuntimeError(
"Could not find dependencies array in sdk/python/pyproject.toml"
)
pins = re.findall(
rf'"{re.escape(RUNTIME_DISTRIBUTION_NAME)}==([^"]+)"',
match.group(1),
)
if len(pins) != 1:
raise RuntimeError(
f"Expected exactly one {RUNTIME_DISTRIBUTION_NAME} dependency pin "
"in sdk/python/pyproject.toml"
)
return normalize_codex_version(pins[0])
def pinned_runtime_codex_path() -> Path:
"""Return the bundled Codex binary from the installed pinned runtime wheel."""
expected_version = pinned_runtime_version()
try:
installed_version = importlib.metadata.version(RUNTIME_DISTRIBUTION_NAME)
except importlib.metadata.PackageNotFoundError as exc:
raise RuntimeError(
f"Install {RUNTIME_DISTRIBUTION_NAME}=={expected_version} before "
"generating Python SDK types."
) from exc
normalized_installed_version = normalize_codex_version(installed_version)
if normalized_installed_version != expected_version:
raise RuntimeError(
f"Expected {RUNTIME_DISTRIBUTION_NAME}=={expected_version}, "
f"but found {installed_version}."
)
try:
from codex_cli_bin import bundled_codex_path
except ImportError as exc:
raise RuntimeError(
f"Installed {RUNTIME_DISTRIBUTION_NAME} package does not expose "
"bundled_codex_path."
) from exc
codex_path = bundled_codex_path()
if not codex_path.exists():
raise RuntimeError(f"Pinned Codex runtime binary not found at {codex_path}.")
return codex_path
def normalize_codex_version(version: str) -> str:
normalized = version.strip()
if normalized.startswith("rust-v"):
@@ -200,7 +250,7 @@ def _rewrite_sdk_runtime_dependency(pyproject_text: str, runtime_version: str) -
def stage_python_sdk_package(staging_dir: Path, codex_version: str) -> Path:
package_version = normalize_codex_version(codex_version)
_copy_package_tree(sdk_root(), staging_dir)
sdk_bin_dir = staging_dir / "src" / "codex_app_server" / "bin"
sdk_bin_dir = staging_dir / "src" / "openai_codex" / "bin"
if sdk_bin_dir.exists():
shutil.rmtree(sdk_bin_dir)
@@ -487,8 +537,28 @@ def _annotate_schema(value: Any, base: str | None = None) -> None:
_annotate_schema(child, base)
def _normalized_schema_bundle_text() -> str:
schema = json.loads(schema_bundle_path().read_text())
def generate_schema_from_pinned_runtime(schema_dir: Path) -> Path:
"""Generate app-server schemas by invoking the installed pinned runtime binary."""
codex_path = pinned_runtime_codex_path()
if schema_dir.exists():
shutil.rmtree(schema_dir)
schema_dir.mkdir(parents=True)
run(
[
str(codex_path),
"app-server",
"generate-json-schema",
"--out",
str(schema_dir),
],
cwd=sdk_root(),
)
return schema_dir
def _normalized_schema_bundle_text(schema_dir: Path) -> str:
"""Normalize the schema bundle before feeding it to the Python type generator."""
schema = json.loads(schema_bundle_path(schema_dir).read_text())
definitions = schema.get("definitions", {})
if isinstance(definitions, dict):
for definition in definitions.values():
@@ -500,16 +570,17 @@ def _normalized_schema_bundle_text() -> str:
return json.dumps(schema, indent=2, sort_keys=True) + "\n"
def generate_v2_all() -> None:
out_path = sdk_root() / "src" / "codex_app_server" / "generated" / "v2_all.py"
def generate_v2_all(schema_dir: Path) -> None:
"""Regenerate the Pydantic v2 protocol model module from runtime schemas."""
out_path = sdk_root() / "src" / "openai_codex" / "generated" / "v2_all.py"
out_dir = out_path.parent
old_package_dir = out_dir / "v2_all"
if old_package_dir.exists():
shutil.rmtree(old_package_dir)
out_dir.mkdir(parents=True, exist_ok=True)
with tempfile.TemporaryDirectory() as td:
normalized_bundle = Path(td) / schema_bundle_path().name
normalized_bundle.write_text(_normalized_schema_bundle_text())
normalized_bundle = Path(td) / schema_bundle_path(schema_dir).name
normalized_bundle.write_text(_normalized_schema_bundle_text(schema_dir))
run_python_module(
"datamodel_code_generator",
[
@@ -544,15 +615,60 @@ def generate_v2_all() -> None:
cwd=sdk_root(),
)
_normalize_generated_timestamps(out_path)
_add_ask_for_approval_aliases(out_path)
def _notification_specs() -> list[tuple[str, str]]:
def _add_ask_for_approval_aliases(out_path: Path) -> None:
"""Add ergonomic approval policy constants to the generated RootModel class."""
source = out_path.read_text()
source = source.replace(
"from typing import Annotated, Any, Literal",
"from typing import Annotated, Any, ClassVar, Literal",
)
if "AskForApproval.never =" in source:
out_path.write_text(source)
return
needle = """class AskForApproval(RootModel[AskForApprovalValue | GranularAskForApproval]):
model_config = ConfigDict(
populate_by_name=True,
)
root: AskForApprovalValue | GranularAskForApproval
"""
replacement = """class AskForApproval(RootModel[AskForApprovalValue | GranularAskForApproval]):
model_config = ConfigDict(
populate_by_name=True,
)
root: AskForApprovalValue | GranularAskForApproval
untrusted: ClassVar[AskForApproval]
on_failure: ClassVar[AskForApproval]
on_request: ClassVar[AskForApproval]
never: ClassVar[AskForApproval]
AskForApproval.untrusted = AskForApproval(root=AskForApprovalValue.untrusted)
AskForApproval.on_failure = AskForApproval(root=AskForApprovalValue.on_failure)
AskForApproval.on_request = AskForApproval(root=AskForApprovalValue.on_request)
AskForApproval.never = AskForApproval(root=AskForApprovalValue.never)
"""
updated, count = source.replace(needle, replacement, 1), source.count(needle)
if count != 1:
raise RuntimeError("Could not add AskForApproval aliases to generated types")
out_path.write_text(updated)
def _notification_specs(schema_dir: Path) -> list[tuple[str, str]]:
"""Map each server notification method to its generated payload model class."""
server_notifications = json.loads(
(schema_root_dir() / "ServerNotification.json").read_text()
(schema_dir / "ServerNotification.json").read_text()
)
one_of = server_notifications.get("oneOf", [])
generated_source = (
sdk_root() / "src" / "codex_app_server" / "generated" / "v2_all.py"
sdk_root() / "src" / "openai_codex" / "generated" / "v2_all.py"
).read_text()
specs: list[tuple[str, str]] = []
@@ -586,10 +702,12 @@ def _notification_specs() -> list[tuple[str, str]]:
def _notification_turn_id_specs(
schema_dir: Path,
specs: list[tuple[str, str]],
) -> tuple[list[str], list[str]]:
"""Classify notification payloads by where their turn id is carried."""
server_notifications = json.loads(
(schema_root_dir() / "ServerNotification.json").read_text()
(schema_dir / "ServerNotification.json").read_text()
)
definitions = server_notifications.get("definitions", {})
if not isinstance(definitions, dict):
@@ -615,6 +733,7 @@ def _notification_turn_id_specs(
def _type_tuple_source(class_names: list[str]) -> str:
"""Render a generated tuple literal for notification payload classes."""
if not class_names:
return "()"
if len(class_names) == 1:
@@ -622,17 +741,21 @@ def _type_tuple_source(class_names: list[str]) -> str:
return "(\n" + "".join(f" {class_name},\n" for class_name in class_names) + ")"
def generate_notification_registry() -> None:
def generate_notification_registry(schema_dir: Path) -> None:
"""Regenerate notification dispatch metadata from the runtime notification schema."""
out = (
sdk_root()
/ "src"
/ "codex_app_server"
/ "openai_codex"
/ "generated"
/ "notification_registry.py"
)
specs = _notification_specs()
specs = _notification_specs(schema_dir)
class_names = sorted({class_name for _, class_name in specs})
direct_turn_id_types, nested_turn_types = _notification_turn_id_specs(specs)
direct_turn_id_types, nested_turn_types = _notification_turn_id_specs(
schema_dir,
specs,
)
lines = [
"# Auto-generated by scripts/update_sdk_artifacts.py",
@@ -666,6 +789,7 @@ def generate_notification_registry() -> None:
"",
"",
"def notification_turn_id(payload: BaseModel) -> str | None:",
' """Return the turn id carried by generated notification payload metadata."""',
" if isinstance(payload, DIRECT_TURN_ID_NOTIFICATION_TYPES):",
" return payload.turn_id if isinstance(payload.turn_id, str) else None",
" if isinstance(payload, NESTED_TURN_NOTIFICATION_TYPES):",
@@ -752,8 +876,12 @@ def _camel_to_snake(name: str) -> str:
def _load_public_fields(
module_name: str, class_name: str, *, exclude: set[str] | None = None
) -> list[PublicFieldSpec]:
"""Load generated model fields used to render the ergonomic public methods."""
exclude = exclude or set()
module = importlib.import_module(module_name)
if module_name == "openai_codex.generated.v2_all":
module = _load_generated_v2_all_module()
else:
module = importlib.import_module(module_name)
model = getattr(module, class_name)
fields: list[PublicFieldSpec] = []
for name, field in model.model_fields.items():
@@ -775,6 +903,20 @@ def _load_public_fields(
return fields
def _load_generated_v2_all_module() -> types.ModuleType:
"""Import the freshly generated v2_all module without importing package init."""
module_name = "_openai_codex_generated_v2_all_for_artifacts"
sys.modules.pop(module_name, None)
module_path = sdk_root() / "src" / "openai_codex" / "generated" / "v2_all.py"
spec = importlib.util.spec_from_file_location(module_name, module_path)
if spec is None or spec.loader is None:
raise RuntimeError(f"Failed to load generated module from {module_path}")
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
spec.loader.exec_module(module)
return module
def _kw_signature_lines(fields: list[PublicFieldSpec]) -> list[str]:
lines: list[str] = []
for field in fields:
@@ -786,7 +928,15 @@ def _kw_signature_lines(fields: list[PublicFieldSpec]) -> list[str]:
def _model_arg_lines(
fields: list[PublicFieldSpec], *, indent: str = " "
) -> list[str]:
return [f"{indent}{field.wire_name}={field.py_name}," for field in fields]
lines: list[str] = []
for field in fields:
value = field.py_name
if field.py_name == "approval_policy":
# TODO: Add a public approval callback API that lets callers return
# typed approval results, then honor caller-supplied policies.
value = "_approval_policy_never(approval_policy)"
lines.append(f"{indent}{field.wire_name}={value},")
return lines
def _replace_generated_block(source: str, block_name: str, body: str) -> str:
@@ -984,8 +1134,9 @@ def _render_async_thread_block(
def generate_public_api_flat_methods() -> None:
"""Regenerate the public convenience methods from generated protocol models."""
src_dir = sdk_root() / "src"
public_api_path = src_dir / "codex_app_server" / "api.py"
public_api_path = src_dir / "openai_codex" / "api.py"
if not public_api_path.exists():
# PR2 can run codegen before the ergonomic public API layer is added.
return
@@ -994,25 +1145,25 @@ def generate_public_api_flat_methods() -> None:
sys.path.insert(0, src_dir_str)
thread_start_fields = _load_public_fields(
"codex_app_server.generated.v2_all",
"openai_codex.generated.v2_all",
"ThreadStartParams",
)
thread_list_fields = _load_public_fields(
"codex_app_server.generated.v2_all",
"openai_codex.generated.v2_all",
"ThreadListParams",
)
thread_resume_fields = _load_public_fields(
"codex_app_server.generated.v2_all",
"openai_codex.generated.v2_all",
"ThreadResumeParams",
exclude={"thread_id"},
)
thread_fork_fields = _load_public_fields(
"codex_app_server.generated.v2_all",
"openai_codex.generated.v2_all",
"ThreadForkParams",
exclude={"thread_id"},
)
turn_start_fields = _load_public_fields(
"codex_app_server.generated.v2_all",
"openai_codex.generated.v2_all",
"TurnStartParams",
exclude={"thread_id", "input"},
)
@@ -1049,13 +1200,22 @@ def generate_public_api_flat_methods() -> None:
_render_async_thread_block(turn_start_fields),
)
public_api_path.write_text(source)
run_python_module("ruff", ["format", str(public_api_path)], cwd=sdk_root())
def generate_types_from_schema_dir(schema_dir: Path) -> None:
"""Regenerate every SDK artifact derived from an existing schema directory."""
# v2_all is the authoritative generated surface.
generate_v2_all(schema_dir)
generate_notification_registry(schema_dir)
generate_public_api_flat_methods()
def generate_types() -> None:
# v2_all is the authoritative generated surface.
generate_v2_all()
generate_notification_registry()
generate_public_api_flat_methods()
"""Generate schemas from the pinned runtime and then refresh SDK artifacts."""
with tempfile.TemporaryDirectory(prefix="codex-python-schema-") as td:
schema_dir = generate_schema_from_pinned_runtime(Path(td) / "schema")
generate_types_from_schema_dir(schema_dir)
def build_parser() -> argparse.ArgumentParser:
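
`_add_ask_for_approval_aliases` above patches the generated `RootModel` subclass so callers can write `AskForApproval.never` instead of `AskForApproval.model_validate("never")`. A self-contained sketch of the same trick on a toy model (names illustrative, not the generated classes):

```python
from __future__ import annotations

from enum import Enum
from typing import ClassVar

from pydantic import RootModel


class ApprovalValue(str, Enum):
    # Toy stand-in for the generated AskForApprovalValue enum.
    never = "never"
    on_request = "on-request"


class Approval(RootModel[ApprovalValue]):
    never: ClassVar[Approval]
    on_request: ClassVar[Approval]


# Assign after the class body, exactly as the generator's patch does.
Approval.never = Approval(root=ApprovalValue.never)
Approval.on_request = Approval(root=ApprovalValue.on_request)

assert Approval.never == Approval.model_validate("never")
```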

View File

@@ -1,5 +1,4 @@
from .async_client import AsyncAppServerClient
from .client import AppServerClient, AppServerConfig
from .client import AppServerConfig
from .errors import (
AppServerError,
AppServerRpcError,
@@ -14,29 +13,6 @@ from .errors import (
TransportClosedError,
is_retryable_error,
)
from .generated.v2_all import (
AskForApproval,
Personality,
PlanType,
ReasoningEffort,
ReasoningSummary,
SandboxMode,
SandboxPolicy,
ServiceTier,
ThreadItem,
ThreadForkParams,
ThreadListParams,
ThreadResumeParams,
ThreadSortKey,
ThreadSourceKind,
ThreadStartParams,
ThreadTokenUsageUpdatedNotification,
TurnCompletedNotification,
TurnStartParams,
TurnStatus,
TurnSteerParams,
)
from .models import InitializeResponse
from .api import (
AsyncCodex,
AsyncThread,
@@ -58,8 +34,6 @@ from ._version import __version__
__all__ = [
"__version__",
"AppServerClient",
"AsyncAppServerClient",
"AppServerConfig",
"Codex",
"AsyncCodex",
@@ -67,7 +41,6 @@ __all__ = [
"AsyncThread",
"TurnHandle",
"AsyncTurnHandle",
"InitializeResponse",
"RunResult",
"Input",
"InputItem",
@@ -76,26 +49,6 @@ __all__ = [
"LocalImageInput",
"SkillInput",
"MentionInput",
"ThreadItem",
"ThreadTokenUsageUpdatedNotification",
"TurnCompletedNotification",
"AskForApproval",
"Personality",
"PlanType",
"ReasoningEffort",
"ReasoningSummary",
"SandboxMode",
"SandboxPolicy",
"ServiceTier",
"ThreadStartParams",
"ThreadResumeParams",
"ThreadListParams",
"ThreadSortKey",
"ThreadSourceKind",
"ThreadForkParams",
"TurnStatus",
"TurnStartParams",
"TurnSteerParams",
"retry_on_overload",
"AppServerError",
"TransportClosedError",

View File

@@ -22,6 +22,7 @@ class MessageRouter:
"""
def __init__(self) -> None:
"""Create empty response, turn, and global notification queues."""
self._lock = threading.Lock()
self._response_waiters: dict[str, queue.Queue[ResponseQueueItem]] = {}
self._turn_notifications: dict[str, queue.Queue[NotificationQueueItem]] = {}
@@ -144,6 +145,7 @@ class MessageRouter:
self._global_notifications.put(exc)
def _notification_turn_id(self, notification: Notification) -> str | None:
"""Extract routing ids from known generated payloads or raw unknown payloads."""
payload = notification.payload
if isinstance(payload, UnknownNotification):
raw_turn_id = payload.params.get("turnId")

View File

@@ -5,7 +5,7 @@ from importlib.metadata import PackageNotFoundError
from importlib.metadata import version as distribution_version
from pathlib import Path
DISTRIBUTION_NAME = "openai-codex-app-server-sdk"
DISTRIBUTION_NAME = "openai-codex"
UNKNOWN_VERSION = "0+unknown"

View File

@@ -15,7 +15,6 @@ from .generated.v2_all import (
ReasoningSummary,
SandboxMode,
SandboxPolicy,
ServiceTier,
SortDirection,
ThreadArchiveResponse,
ThreadCompactStartResponse,
@@ -27,6 +26,7 @@ from .generated.v2_all import (
ThreadResumeParams,
ThreadSetNameResponse,
ThreadSortKey,
ThreadSource,
ThreadSourceKind,
ThreadStartSource,
ThreadStartParams,
@@ -69,6 +69,12 @@ def _split_user_agent(user_agent: str) -> tuple[str | None, str | None]:
return raw, None
def _approval_policy_never(_approval_policy: AskForApproval | None) -> AskForApproval:
# TODO: Add a public approval callback API that lets callers return typed
# approval results, then honor caller-supplied policies.
return AskForApproval.never
class Codex:
"""Minimal typed SDK surface for app-server v2."""
@@ -152,11 +158,12 @@ class Codex:
personality: Personality | None = None,
sandbox: SandboxMode | None = None,
service_name: str | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
session_start_source: ThreadStartSource | None = None,
thread_source: ThreadSource | None = None,
) -> Thread:
params = ThreadStartParams(
approval_policy=approval_policy,
approval_policy=_approval_policy_never(approval_policy),
approvals_reviewer=approvals_reviewer,
base_instructions=base_instructions,
config=config,
@@ -170,6 +177,7 @@ class Codex:
service_name=service_name,
service_tier=service_tier,
session_start_source=session_start_source,
thread_source=thread_source,
)
started = self._client.thread_start(params)
return Thread(self._client, started.thread.id)
@@ -216,11 +224,11 @@ class Codex:
model_provider: str | None = None,
personality: Personality | None = None,
sandbox: SandboxMode | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
) -> Thread:
params = ThreadResumeParams(
thread_id=thread_id,
approval_policy=approval_policy,
approval_policy=_approval_policy_never(approval_policy),
approvals_reviewer=approvals_reviewer,
base_instructions=base_instructions,
config=config,
@@ -249,11 +257,12 @@ class Codex:
model: str | None = None,
model_provider: str | None = None,
sandbox: SandboxMode | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
thread_source: ThreadSource | None = None,
) -> Thread:
params = ThreadForkParams(
thread_id=thread_id,
approval_policy=approval_policy,
approval_policy=_approval_policy_never(approval_policy),
approvals_reviewer=approvals_reviewer,
base_instructions=base_instructions,
config=config,
@@ -264,6 +273,7 @@ class Codex:
model_provider=model_provider,
sandbox=sandbox,
service_tier=service_tier,
thread_source=thread_source,
)
forked = self._client.thread_fork(thread_id, params)
return Thread(self._client, forked.thread.id)
@@ -349,12 +359,13 @@ class AsyncCodex:
personality: Personality | None = None,
sandbox: SandboxMode | None = None,
service_name: str | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
session_start_source: ThreadStartSource | None = None,
thread_source: ThreadSource | None = None,
) -> AsyncThread:
await self._ensure_initialized()
params = ThreadStartParams(
approval_policy=approval_policy,
approval_policy=_approval_policy_never(approval_policy),
approvals_reviewer=approvals_reviewer,
base_instructions=base_instructions,
config=config,
@@ -368,6 +379,7 @@ class AsyncCodex:
service_name=service_name,
service_tier=service_tier,
session_start_source=session_start_source,
thread_source=thread_source,
)
started = await self._client.thread_start(params)
return AsyncThread(self, started.thread.id)
@@ -415,12 +427,12 @@ class AsyncCodex:
model_provider: str | None = None,
personality: Personality | None = None,
sandbox: SandboxMode | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
) -> AsyncThread:
await self._ensure_initialized()
params = ThreadResumeParams(
thread_id=thread_id,
approval_policy=approval_policy,
approval_policy=_approval_policy_never(approval_policy),
approvals_reviewer=approvals_reviewer,
base_instructions=base_instructions,
config=config,
@@ -449,12 +461,13 @@ class AsyncCodex:
model: str | None = None,
model_provider: str | None = None,
sandbox: SandboxMode | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
thread_source: ThreadSource | None = None,
) -> AsyncThread:
await self._ensure_initialized()
params = ThreadForkParams(
thread_id=thread_id,
approval_policy=approval_policy,
approval_policy=_approval_policy_never(approval_policy),
approvals_reviewer=approvals_reviewer,
base_instructions=base_instructions,
config=config,
@@ -465,6 +478,7 @@ class AsyncCodex:
model_provider=model_provider,
sandbox=sandbox,
service_tier=service_tier,
thread_source=thread_source,
)
forked = await self._client.thread_fork(thread_id, params)
return AsyncThread(self, forked.thread.id)
@@ -502,12 +516,12 @@ class Thread:
output_schema: JsonObject | None = None,
personality: Personality | None = None,
sandbox_policy: SandboxPolicy | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
summary: ReasoningSummary | None = None,
) -> RunResult:
turn = self.turn(
_normalize_run_input(input),
approval_policy=approval_policy,
approval_policy=_approval_policy_never(approval_policy),
approvals_reviewer=approvals_reviewer,
cwd=cwd,
effort=effort,
@@ -537,14 +551,14 @@ class Thread:
output_schema: JsonObject | None = None,
personality: Personality | None = None,
sandbox_policy: SandboxPolicy | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
summary: ReasoningSummary | None = None,
) -> TurnHandle:
wire_input = _to_wire_input(input)
params = TurnStartParams(
thread_id=self.id,
input=wire_input,
approval_policy=approval_policy,
approval_policy=_approval_policy_never(approval_policy),
approvals_reviewer=approvals_reviewer,
cwd=cwd,
effort=effort,
@@ -587,12 +601,12 @@ class AsyncThread:
output_schema: JsonObject | None = None,
personality: Personality | None = None,
sandbox_policy: SandboxPolicy | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
summary: ReasoningSummary | None = None,
) -> RunResult:
turn = await self.turn(
_normalize_run_input(input),
approval_policy=approval_policy,
approval_policy=_approval_policy_never(approval_policy),
approvals_reviewer=approvals_reviewer,
cwd=cwd,
effort=effort,
@@ -622,7 +636,7 @@ class AsyncThread:
output_schema: JsonObject | None = None,
personality: Personality | None = None,
sandbox_policy: SandboxPolicy | None = None,
service_tier: ServiceTier | None = None,
service_tier: str | None = None,
summary: ReasoningSummary | None = None,
) -> AsyncTurnHandle:
await self._codex._ensure_initialized()
@@ -630,7 +644,7 @@ class AsyncThread:
params = TurnStartParams(
thread_id=self.id,
input=wire_input,
approval_policy=approval_policy,
approval_policy=_approval_policy_never(approval_policy),
approvals_reviewer=approvals_reviewer,
cwd=cwd,
effort=effort,
@@ -678,6 +692,7 @@ class TurnHandle:
return self._client.turn_interrupt(self.thread_id, self.id)
def stream(self) -> Iterator[Notification]:
"""Yield only notifications routed to this turn handle."""
self._client.register_turn_notifications(self.id)
try:
while True:
@@ -730,6 +745,7 @@ class AsyncTurnHandle:
return await self._codex._client.turn_interrupt(self.thread_id, self.id)
async def stream(self) -> AsyncIterator[Notification]:
"""Yield only notifications routed to this async turn handle."""
await self._codex._ensure_initialized()
self._codex._client.register_turn_notifications(self.id)
try:

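Until the TODO above lands a public approval callback API, _approval_policy_never makes every start/resume/fork/turn path send "never" on the wire, regardless of what the caller asks for (the tests later in this diff pin that behavior down). An illustrative usage sketch, assuming a working runtime install:

from openai_codex import Codex
from openai_codex.types import AskForApproval

with Codex() as codex:
    # The caller asks for on-request approvals...
    thread = codex.thread_start(approval_policy=AskForApproval.on_request)
    # ...but the wire params carry approvalPolicy == "never" for now.
    print(thread.id)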

@@ -40,13 +40,16 @@ class AsyncAppServerClient:
"""Async wrapper around AppServerClient using thread offloading."""
def __init__(self, config: AppServerConfig | None = None) -> None:
"""Create the wrapped sync client that owns the transport process."""
self._sync = AppServerClient(config=config)
async def __aenter__(self) -> "AsyncAppServerClient":
"""Start the app-server process when entering an async context."""
await self.start()
return self
async def __aexit__(self, _exc_type, _exc, _tb) -> None:
"""Close the app-server process when leaving an async context."""
await self.close()
async def _call_sync(
@@ -56,30 +59,37 @@ class AsyncAppServerClient:
*args: ParamsT.args,
**kwargs: ParamsT.kwargs,
) -> ReturnT:
"""Run a blocking sync-client operation without blocking the event loop."""
return await asyncio.to_thread(fn, *args, **kwargs)
@staticmethod
def _next_from_iterator(
iterator: Iterator[AgentMessageDeltaNotification],
) -> tuple[bool, AgentMessageDeltaNotification | None]:
"""Convert StopIteration into a value that can cross asyncio.to_thread."""
try:
return True, next(iterator)
except StopIteration:
return False, None
async def start(self) -> None:
"""Start the wrapped sync client in a worker thread."""
await self._call_sync(self._sync.start)
async def close(self) -> None:
"""Close the wrapped sync client in a worker thread."""
await self._call_sync(self._sync.close)
async def initialize(self) -> InitializeResponse:
"""Initialize the app-server session."""
return await self._call_sync(self._sync.initialize)
def register_turn_notifications(self, turn_id: str) -> None:
"""Register a turn notification queue on the wrapped sync client."""
self._sync.register_turn_notifications(turn_id)
def unregister_turn_notifications(self, turn_id: str) -> None:
"""Unregister a turn notification queue on the wrapped sync client."""
self._sync.unregister_turn_notifications(turn_id)
async def request(
@@ -89,6 +99,7 @@ class AsyncAppServerClient:
*,
response_model: type[ModelT],
) -> ModelT:
"""Send a typed JSON-RPC request through the wrapped sync client."""
return await self._call_sync(
self._sync.request,
method,
@@ -99,6 +110,7 @@ class AsyncAppServerClient:
async def thread_start(
self, params: V2ThreadStartParams | JsonObject | None = None
) -> ThreadStartResponse:
"""Start a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_start, params)
async def thread_resume(
@@ -106,16 +118,19 @@ class AsyncAppServerClient:
thread_id: str,
params: V2ThreadResumeParams | JsonObject | None = None,
) -> ThreadResumeResponse:
"""Resume a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_resume, thread_id, params)
async def thread_list(
self, params: V2ThreadListParams | JsonObject | None = None
) -> ThreadListResponse:
"""List threads using the wrapped sync client."""
return await self._call_sync(self._sync.thread_list, params)
async def thread_read(
self, thread_id: str, include_turns: bool = False
) -> ThreadReadResponse:
"""Read a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_read, thread_id, include_turns)
async def thread_fork(
@@ -123,18 +138,23 @@ class AsyncAppServerClient:
thread_id: str,
params: V2ThreadForkParams | JsonObject | None = None,
) -> ThreadForkResponse:
"""Fork a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_fork, thread_id, params)
async def thread_archive(self, thread_id: str) -> ThreadArchiveResponse:
"""Archive a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_archive, thread_id)
async def thread_unarchive(self, thread_id: str) -> ThreadUnarchiveResponse:
"""Unarchive a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_unarchive, thread_id)
async def thread_set_name(self, thread_id: str, name: str) -> ThreadSetNameResponse:
"""Rename a thread using the wrapped sync client."""
return await self._call_sync(self._sync.thread_set_name, thread_id, name)
async def thread_compact(self, thread_id: str) -> ThreadCompactStartResponse:
"""Start thread compaction using the wrapped sync client."""
return await self._call_sync(self._sync.thread_compact, thread_id)
async def turn_start(
@@ -143,6 +163,7 @@ class AsyncAppServerClient:
input_items: list[JsonObject] | JsonObject | str,
params: V2TurnStartParams | JsonObject | None = None,
) -> TurnStartResponse:
"""Start a turn using the wrapped sync client."""
return await self._call_sync(
self._sync.turn_start, thread_id, input_items, params
)
@@ -150,6 +171,7 @@ class AsyncAppServerClient:
async def turn_interrupt(
self, thread_id: str, turn_id: str
) -> TurnInterruptResponse:
"""Interrupt a turn using the wrapped sync client."""
return await self._call_sync(self._sync.turn_interrupt, thread_id, turn_id)
async def turn_steer(
@@ -158,6 +180,7 @@ class AsyncAppServerClient:
expected_turn_id: str,
input_items: list[JsonObject] | JsonObject | str,
) -> TurnSteerResponse:
"""Send steering input to a turn using the wrapped sync client."""
return await self._call_sync(
self._sync.turn_steer,
thread_id,
@@ -166,6 +189,7 @@ class AsyncAppServerClient:
)
async def model_list(self, include_hidden: bool = False) -> ModelListResponse:
"""List models using the wrapped sync client."""
return await self._call_sync(self._sync.model_list, include_hidden)
async def request_with_retry_on_overload(
@@ -178,6 +202,7 @@ class AsyncAppServerClient:
initial_delay_s: float = 0.25,
max_delay_s: float = 2.0,
) -> ModelT:
"""Send a typed request with the sync client's overload retry policy."""
return await self._call_sync(
self._sync.request_with_retry_on_overload,
method,
@@ -189,12 +214,15 @@ class AsyncAppServerClient:
)
async def next_notification(self) -> Notification:
"""Wait for the next global notification without blocking the event loop."""
return await self._call_sync(self._sync.next_notification)
async def next_turn_notification(self, turn_id: str) -> Notification:
"""Wait for the next notification routed to one turn."""
return await self._call_sync(self._sync.next_turn_notification, turn_id)
async def wait_for_turn_completed(self, turn_id: str) -> TurnCompletedNotification:
"""Wait for the completion notification routed to one turn."""
return await self._call_sync(self._sync.wait_for_turn_completed, turn_id)
async def stream_text(
@@ -203,6 +231,7 @@ class AsyncAppServerClient:
text: str,
params: V2TurnStartParams | JsonObject | None = None,
) -> AsyncIterator[AgentMessageDeltaNotification]:
"""Stream text deltas from one turn without monopolizing the event loop."""
iterator = self._sync.stream_text(thread_id, text, params)
while True:
has_value, chunk = await asyncio.to_thread(

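_next_from_iterator exists because a StopIteration raised inside asyncio.to_thread cannot cleanly terminate an async for; the helper converts exhaustion into a (has_value, item) tuple instead. A self-contained sketch of the same bridging pattern with plain strings:

import asyncio
from collections.abc import AsyncIterator, Iterator

def _next_or_done(it: Iterator[str]) -> tuple[bool, str | None]:
    try:
        return True, next(it)
    except StopIteration:
        # StopIteration must not escape across the await boundary.
        return False, None

async def drain(it: Iterator[str]) -> AsyncIterator[str]:
    while True:
        has_value, item = await asyncio.to_thread(_next_or_done, it)
        if not has_value or item is None:
            return
        yield item

async def main() -> None:
    async for chunk in drain(iter(["first", "second", "third"])):
        print(chunk)

asyncio.run(main())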

@@ -243,6 +243,7 @@ class AppServerClient:
return response_model.model_validate(result)
def _request_raw(self, method: str, params: JsonObject | None = None) -> JsonValue:
"""Send a JSON-RPC request and wait for the reader thread to route its response."""
request_id = str(uuid.uuid4())
waiter = self._router.create_response_waiter(request_id)
@@ -260,18 +261,23 @@ class AppServerClient:
return item
def notify(self, method: str, params: JsonObject | None = None) -> None:
"""Send a JSON-RPC notification without waiting for a response."""
self._write_message({"method": method, "params": params or {}})
def next_notification(self) -> Notification:
"""Return the next notification that is not scoped to an active turn."""
return self._router.next_global_notification()
def register_turn_notifications(self, turn_id: str) -> None:
"""Start routing notifications for one turn into its dedicated queue."""
self._router.register_turn(turn_id)
def unregister_turn_notifications(self, turn_id: str) -> None:
"""Stop routing notifications for one turn into its dedicated queue."""
self._router.unregister_turn(turn_id)
def next_turn_notification(self, turn_id: str) -> Notification:
"""Return the next routed notification for the requested turn id."""
return self._router.next_turn_notification(turn_id)
def thread_start(
@@ -349,6 +355,7 @@ class AppServerClient:
input_items: list[JsonObject] | JsonObject | str,
params: V2TurnStartParams | JsonObject | None = None,
) -> TurnStartResponse:
"""Start a turn and register its notification queue as early as possible."""
payload = {
**_params_dict(params),
"threadId": thread_id,
@@ -406,6 +413,7 @@ class AppServerClient:
)
def wait_for_turn_completed(self, turn_id: str) -> TurnCompletedNotification:
"""Block on the routed turn stream until the matching completion arrives."""
self.register_turn_notifications(turn_id)
try:
while True:
@@ -425,6 +433,7 @@ class AppServerClient:
text: str,
params: V2TurnStartParams | JsonObject | None = None,
) -> Iterator[AgentMessageDeltaNotification]:
"""Start a text turn and yield only its agent-message delta payloads."""
started = self.turn_start(thread_id, text, params=params)
turn_id = started.turn.id
self.register_turn_notifications(turn_id)
@@ -477,6 +486,7 @@ class AppServerClient:
def _default_approval_handler(
self, method: str, params: JsonObject | None
) -> JsonObject:
"""Accept approval requests when the caller did not provide a handler."""
if method == "item/commandExecution/requestApproval":
return {"decision": "accept"}
if method == "item/fileChange/requestApproval":
@@ -498,6 +508,7 @@ class AppServerClient:
self._stderr_thread.start()
def _start_reader_thread(self) -> None:
"""Start the sole stdout reader that fans messages into router queues."""
if self._proc is None or self._proc.stdout is None:
return
@@ -505,6 +516,7 @@ class AppServerClient:
self._reader_thread.start()
def _reader_loop(self) -> None:
"""Continuously classify transport messages into requests, responses, and events."""
try:
while True:
msg = self._read_message()

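_request_raw correlates each request with its response through a per-request waiter queue keyed by a fresh UUID; the single reader thread resolves the waiter when the matching id comes back. A hypothetical sketch of that correlation, without the transport:

import queue
import uuid

class ResponseWaiters:
    """Toy version of the router's request-id -> waiter-queue table."""

    def __init__(self) -> None:
        self._waiters: dict[str, queue.Queue] = {}

    def create(self, request_id: str) -> queue.Queue:
        waiter: queue.Queue = queue.Queue(maxsize=1)
        self._waiters[request_id] = waiter
        return waiter

    def resolve(self, request_id: str, result: object) -> None:
        # In the SDK this happens on the reader thread.
        self._waiters.pop(request_id).put(result)

waiters = ResponseWaiters()
request_id = str(uuid.uuid4())
waiter = waiters.create(request_id)
waiters.resolve(request_id, {"ok": True})
print(waiter.get(timeout=1.0))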

@@ -35,6 +35,8 @@ from .v2_all import McpToolCallProgressNotification
from .v2_all import ModelReroutedNotification
from .v2_all import ModelVerificationNotification
from .v2_all import PlanDeltaNotification
from .v2_all import ProcessExitedNotification
from .v2_all import ProcessOutputDeltaNotification
from .v2_all import ReasoningSummaryPartAddedNotification
from .v2_all import ReasoningSummaryTextDeltaNotification
from .v2_all import ReasoningTextDeltaNotification
@@ -101,6 +103,8 @@ NOTIFICATION_MODELS: dict[str, type[BaseModel]] = {
"mcpServer/startupStatus/updated": McpServerStatusUpdatedNotification,
"model/rerouted": ModelReroutedNotification,
"model/verification": ModelVerificationNotification,
"process/exited": ProcessExitedNotification,
"process/outputDelta": ProcessOutputDeltaNotification,
"remoteControl/status/changed": RemoteControlStatusChangedNotification,
"serverRequest/resolved": ServerRequestResolvedNotification,
"skills/changed": SkillsChangedNotification,
@@ -165,6 +169,7 @@ NESTED_TURN_NOTIFICATION_TYPES: tuple[type[BaseModel], ...] = (
def notification_turn_id(payload: BaseModel) -> str | None:
"""Return the turn id carried by generated notification payload metadata."""
if isinstance(payload, DIRECT_TURN_ID_NOTIFICATION_TYPES):
return payload.turn_id if isinstance(payload.turn_id, str) else None
if isinstance(payload, NESTED_TURN_NOTIFICATION_TYPES):

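NOTIFICATION_MODELS maps JSON-RPC method names to generated Pydantic models so the reader loop can validate payloads by lookup and keep a raw form for unknown methods. A minimal sketch of that dispatch with a stand-in model:

from pydantic import BaseModel

class PlanDelta(BaseModel):
    """Stand-in for a generated notification model."""

    turnId: str
    delta: str

REGISTRY: dict[str, type[BaseModel]] = {"plan/delta": PlanDelta}

def coerce(method: str, params: dict) -> BaseModel | dict:
    model = REGISTRY.get(method)
    # Unknown methods keep their raw params, mirroring UnknownNotification.
    return model.model_validate(params) if model else params

print(coerce("plan/delta", {"turnId": "turn-1", "delta": "step one"}))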

@@ -0,0 +1,69 @@
"""Public generated app-server model exports for type annotations and matching."""
from __future__ import annotations
from .generated.v2_all import (
ApprovalsReviewer,
AskForApproval,
ModelListResponse,
Personality,
PlanType,
ReasoningEffort,
ReasoningSummary,
SandboxMode,
SandboxPolicy,
SortDirection,
ThreadArchiveResponse,
ThreadCompactStartResponse,
ThreadItem,
ThreadListCwdFilter,
ThreadListResponse,
ThreadReadResponse,
ThreadSetNameResponse,
ThreadSortKey,
ThreadSource,
ThreadSourceKind,
ThreadStartSource,
ThreadTokenUsage,
ThreadTokenUsageUpdatedNotification,
Turn,
TurnCompletedNotification,
TurnInterruptResponse,
TurnStatus,
TurnSteerResponse,
)
from .models import InitializeResponse, JsonObject, Notification
__all__ = [
"ApprovalsReviewer",
"AskForApproval",
"InitializeResponse",
"JsonObject",
"ModelListResponse",
"Notification",
"Personality",
"PlanType",
"ReasoningEffort",
"ReasoningSummary",
"SandboxMode",
"SandboxPolicy",
"SortDirection",
"ThreadArchiveResponse",
"ThreadCompactStartResponse",
"ThreadItem",
"ThreadListCwdFilter",
"ThreadListResponse",
"ThreadReadResponse",
"ThreadSetNameResponse",
"ThreadSortKey",
"ThreadSource",
"ThreadSourceKind",
"ThreadStartSource",
"ThreadTokenUsage",
"ThreadTokenUsageUpdatedNotification",
"Turn",
"TurnCompletedNotification",
"TurnInterruptResponse",
"TurnStatus",
"TurnSteerResponse",
]

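This new types module is the curated import surface for generated app-server models. A short usage sketch matching the exports above:

from openai_codex.types import Notification, TurnCompletedNotification

def is_turn_completed(notification: Notification) -> bool:
    """Match on the typed payload rather than the raw method string."""
    return isinstance(notification.payload, TurnCompletedNotification)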

@@ -12,5 +12,5 @@ if src_str in sys.path:
sys.path.insert(0, src_str)
for module_name in list(sys.modules):
if module_name == "codex_app_server" or module_name.startswith("codex_app_server."):
if module_name == "openai_codex" or module_name.startswith("openai_codex."):
sys.modules.pop(module_name)


@@ -16,6 +16,7 @@ ROOT = Path(__file__).resolve().parents[1]
def _load_update_script_module():
"""Load the maintenance script as a module so tests exercise real helpers."""
script_path = ROOT / "scripts" / "update_sdk_artifacts.py"
spec = importlib.util.spec_from_file_location("update_sdk_artifacts", script_path)
if spec is None or spec.loader is None:
@@ -27,6 +28,7 @@ def _load_update_script_module():
def _load_runtime_setup_module():
"""Load runtime setup without importing the SDK package under test."""
runtime_setup_path = ROOT / "_runtime_setup.py"
spec = importlib.util.spec_from_file_location("_runtime_setup", runtime_setup_path)
if spec is None or spec.loader is None:
@@ -40,11 +42,13 @@ def _load_runtime_setup_module():
def test_generation_has_single_maintenance_entrypoint_script() -> None:
"""Keep artifact workflows routed through one script instead of side entrypoints."""
scripts = sorted(p.name for p in (ROOT / "scripts").glob("*.py"))
assert scripts == ["update_sdk_artifacts.py"]
def test_generate_types_wires_all_generation_steps() -> None:
"""The type generation command should refresh every schema-derived artifact."""
source = (ROOT / "scripts" / "update_sdk_artifacts.py").read_text()
tree = ast.parse(source)
@@ -52,7 +56,8 @@ def test_generate_types_wires_all_generation_steps() -> None:
(
node
for node in tree.body
if isinstance(node, ast.FunctionDef) and node.name == "generate_types"
if isinstance(node, ast.FunctionDef)
and node.name == "generate_types_from_schema_dir"
),
None,
)
@@ -72,19 +77,19 @@ def test_generate_types_wires_all_generation_steps() -> None:
]
def test_schema_normalization_only_flattens_string_literal_oneofs() -> None:
def _load_runtime_schema_bundle(tmp_path: Path) -> dict:
"""Ask the pinned runtime package for a real schema bundle used by tests."""
script = _load_update_script_module()
schema = json.loads(
(
ROOT.parent.parent
/ "codex-rs"
/ "app-server-protocol"
/ "schema"
/ "json"
/ "codex_app_server_protocol.v2.schemas.json"
).read_text()
)
schema_dir = script.generate_schema_from_pinned_runtime(tmp_path / "schema")
return json.loads(script.schema_bundle_path(schema_dir).read_text())
def test_schema_normalization_only_flattens_string_literal_oneofs(
tmp_path: Path,
) -> None:
"""Schema normalization should only flatten the enum-shaped oneOf variants."""
script = _load_update_script_module()
schema = _load_runtime_schema_bundle(tmp_path)
definitions = schema["definitions"]
flattened = [
name
@@ -94,27 +99,23 @@ def test_schema_normalization_only_flattens_string_literal_oneofs() -> None:
]
assert flattened == [
"AuthMode",
"CommandExecOutputStream",
"ExperimentalFeatureStage",
"InputModality",
"MessagePhase",
"TurnItemsView",
"PluginAvailability",
"AuthMode",
"InputModality",
"ExperimentalFeatureStage",
"CommandExecOutputStream",
"ProcessOutputStream",
]
def test_python_codegen_schema_annotation_adds_stable_variant_titles() -> None:
def test_python_codegen_schema_annotation_adds_stable_variant_titles(
tmp_path: Path,
) -> None:
"""Schema annotations should give generated protocol classes stable names."""
script = _load_update_script_module()
schema = json.loads(
(
ROOT.parent.parent
/ "codex-rs"
/ "app-server-protocol"
/ "schema"
/ "json"
/ "codex_app_server_protocol.v2.schemas.json"
).read_text()
)
schema = _load_runtime_schema_bundle(tmp_path)
script._annotate_schema(schema)
definitions = schema["definitions"]
@@ -163,19 +164,20 @@ def test_runtime_package_template_has_no_checked_in_binaries() -> None:
def test_examples_readme_points_to_runtime_version_source_of_truth() -> None:
"""Document that examples should point at the dependency pin, not release lore."""
readme = (ROOT / "examples" / "README.md").read_text()
assert "The pinned runtime version comes from the SDK package version." in readme
assert "The pinned runtime version comes from the SDK package dependency." in readme
def test_runtime_distribution_name_is_consistent() -> None:
script = _load_update_script_module()
runtime_setup = _load_runtime_setup_module()
from codex_app_server import client as client_module
from codex_app_server import _version
from openai_codex import client as client_module
from openai_codex import _version
assert script.SDK_DISTRIBUTION_NAME == "openai-codex-app-server-sdk"
assert runtime_setup.SDK_PACKAGE_NAME == "openai-codex-app-server-sdk"
assert _version.DISTRIBUTION_NAME == "openai-codex-app-server-sdk"
assert script.SDK_DISTRIBUTION_NAME == "openai-codex"
assert runtime_setup.SDK_PACKAGE_NAME == "openai-codex"
assert _version.DISTRIBUTION_NAME == "openai-codex"
assert script.RUNTIME_DISTRIBUTION_NAME == "openai-codex-cli-bin"
assert runtime_setup.PACKAGE_NAME == "openai-codex-cli-bin"
assert client_module.RUNTIME_PKG_NAME == "openai-codex-cli-bin"
@@ -185,6 +187,25 @@ def test_runtime_distribution_name_is_consistent() -> None:
)
def test_source_sdk_package_pins_published_runtime() -> None:
"""The source package metadata should pin the runtime wheel that ships schemas."""
script = _load_update_script_module()
pyproject = tomllib.loads((ROOT / "pyproject.toml").read_text())
assert {
"sdk_version": pyproject["project"]["version"],
"runtime_pin": script.pinned_runtime_version(),
"dependencies": pyproject["project"]["dependencies"],
} == {
"sdk_version": "0.131.0a4",
"runtime_pin": "0.131.0a4",
"dependencies": [
"pydantic>=2.12",
"openai-codex-cli-bin==0.131.0a4",
],
}
def test_release_metadata_retries_without_invalid_auth(
monkeypatch: pytest.MonkeyPatch,
) -> None:
@@ -212,11 +233,16 @@ def test_release_metadata_retries_without_invalid_auth(
def test_runtime_setup_uses_pep440_package_version_and_codex_release_tags() -> None:
"""The SDK uses PEP 440 package pins and converts only when fetching releases."""
runtime_setup = _load_runtime_setup_module()
pyproject = tomllib.loads((ROOT / "pyproject.toml").read_text())
assert runtime_setup.PACKAGE_NAME == "openai-codex-cli-bin"
assert runtime_setup.pinned_runtime_version() == pyproject["project"]["version"]
assert (
f"{runtime_setup.PACKAGE_NAME}=={pyproject['project']['version']}"
in pyproject["project"]["dependencies"]
)
assert (
runtime_setup._normalized_package_version("rust-v0.116.0-alpha.1")
== "0.116.0a1"
@@ -352,6 +378,7 @@ def test_stage_runtime_release_can_pin_wheel_platform_tag(tmp_path: Path) -> None:
def test_stage_runtime_release_copies_resource_binaries(tmp_path: Path) -> None:
"""Runtime staging should copy every helper binary into the wheel bin dir."""
script = _load_update_script_module()
fake_binary = tmp_path / script.runtime_binary_name()
helper = tmp_path / "helper"
@@ -382,6 +409,7 @@ def test_stage_runtime_release_copies_resource_binaries(tmp_path: Path) -> None:
def test_runtime_resource_binaries_are_included_by_wheel_config(
tmp_path: Path,
) -> None:
"""The runtime wheel config should include helper binaries beside Codex."""
script = _load_update_script_module()
fake_binary = tmp_path / script.runtime_binary_name()
helper = tmp_path / "helper"
@@ -398,9 +426,7 @@ def test_runtime_resource_binaries_are_included_by_wheel_config(
pyproject = tomllib.loads((staged / "pyproject.toml").read_text())
assert {
"include": pyproject["tool"]["hatch"]["build"]["targets"]["wheel"]["include"],
"helper": (
staged / "src" / "codex_cli_bin" / "bin" / "helper"
).read_text(),
"helper": (staged / "src" / "codex_cli_bin" / "bin" / "helper").read_text(),
} == {
"include": ["src/codex_cli_bin/bin/**"],
"helper": "fake helper\n",
@@ -415,18 +441,18 @@ def test_stage_sdk_release_injects_exact_runtime_pin(tmp_path: Path) -> None:
)
pyproject = (staged / "pyproject.toml").read_text()
assert 'name = "openai-codex-app-server-sdk"' in pyproject
assert 'name = "openai-codex"' in pyproject
assert 'version = "0.116.0a1"' in pyproject
assert '"openai-codex-cli-bin==0.116.0a1"' in pyproject
assert (
'__version__ = "0.116.0a1"'
not in (staged / "src" / "codex_app_server" / "__init__.py").read_text()
not in (staged / "src" / "openai_codex" / "__init__.py").read_text()
)
assert (
'client_version: str = "0.116.0a1"'
not in (staged / "src" / "codex_app_server" / "client.py").read_text()
not in (staged / "src" / "openai_codex" / "client.py").read_text()
)
assert not any((staged / "src" / "codex_app_server").glob("bin/**"))
assert not any((staged / "src" / "openai_codex").glob("bin/**"))
def test_stage_sdk_release_replaces_existing_staging_dir(tmp_path: Path) -> None:
@@ -595,7 +621,7 @@ def test_stage_runtime_stages_binary_without_type_generation(tmp_path: Path) -> None:
def test_default_runtime_is_resolved_from_installed_runtime_package(
tmp_path: Path,
) -> None:
from codex_app_server import client as client_module
from openai_codex import client as client_module
fake_binary = tmp_path / ("codex.exe" if client_module.os.name == "nt" else "codex")
fake_binary.write_text("")
@@ -610,7 +636,7 @@ def test_default_runtime_is_resolved_from_installed_runtime_package(
def test_explicit_codex_bin_override_takes_priority(tmp_path: Path) -> None:
from codex_app_server import client as client_module
from openai_codex import client as client_module
explicit_binary = tmp_path / (
"custom-codex.exe" if client_module.os.name == "nt" else "custom-codex"
@@ -628,7 +654,7 @@ def test_explicit_codex_bin_override_takes_priority(tmp_path: Path) -> None:
def test_missing_runtime_package_requires_explicit_codex_bin() -> None:
from codex_app_server import client as client_module
from openai_codex import client as client_module
ops = client_module.CodexBinResolverOps(
installed_codex_path=lambda: (_ for _ in ()).throw(
@@ -642,7 +668,7 @@ def test_missing_runtime_package_requires_explicit_codex_bin() -> None:
def test_broken_runtime_package_does_not_fall_back() -> None:
from codex_app_server import client as client_module
from openai_codex import client as client_module
ops = client_module.CodexBinResolverOps(
installed_codex_path=lambda: (_ for _ in ()).throw(

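These tests assert that the runtime pin is read from the SDK's declared dependency rather than inferred from the SDK version. A hedged sketch of how such a pinned_runtime_version helper could resolve the pin from pyproject.toml (the real helper's implementation is not shown in this diff):

import tomllib
from pathlib import Path

RUNTIME_DISTRIBUTION_NAME = "openai-codex-cli-bin"

def pinned_runtime_version(pyproject_path: Path) -> str:
    """Return the exact version pinned for the runtime wheel dependency."""
    project = tomllib.loads(pyproject_path.read_text())["project"]
    for dep in project["dependencies"]:
        name, _, pinned = dep.partition("==")
        if name == RUNTIME_DISTRIBUTION_NAME:
            return pinned
    raise LookupError(f"{RUNTIME_DISTRIBUTION_NAME} is not pinned")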

@@ -4,21 +4,24 @@ import asyncio
import time
from types import SimpleNamespace
from codex_app_server.async_client import AsyncAppServerClient
from codex_app_server.generated.v2_all import (
from openai_codex.async_client import AsyncAppServerClient
from openai_codex.generated.v2_all import (
AgentMessageDeltaNotification,
TurnCompletedNotification,
)
from codex_app_server.models import Notification, UnknownNotification
from openai_codex.models import Notification, UnknownNotification
def test_async_client_allows_concurrent_transport_calls() -> None:
"""Async wrappers should offload sync calls so concurrent awaits can overlap."""
async def scenario() -> int:
"""Run two blocking sync calls and report peak overlap."""
client = AsyncAppServerClient()
active = 0
max_active = 0
def fake_model_list(include_hidden: bool = False) -> bool:
"""Simulate a blocking sync transport call."""
nonlocal active, max_active
active += 1
max_active = max(max_active, active)
@@ -34,16 +37,20 @@ def test_async_client_allows_concurrent_transport_calls() -> None:
def test_async_stream_text_is_incremental_without_blocking_parallel_calls() -> None:
"""Async text streaming should yield incrementally without blocking other calls."""
async def scenario() -> tuple[str, list[str], bool]:
"""Start a stream, then prove another async client call can finish."""
client = AsyncAppServerClient()
def fake_stream_text(thread_id: str, text: str, params=None): # type: ignore[no-untyped-def]
"""Yield one item before sleeping so the async wrapper can interleave."""
yield "first"
time.sleep(0.03)
yield "second"
yield "third"
def fake_model_list(include_hidden: bool = False) -> str:
"""Return immediately to prove the event loop was not monopolized."""
return "done"
client._sync.stream_text = fake_stream_text # type: ignore[method-assign]
@@ -70,7 +77,9 @@ def test_async_stream_text_is_incremental_without_blocking_parallel_calls() -> None:
def test_async_client_turn_notification_methods_delegate_to_sync_client() -> None:
"""Async turn routing methods should preserve sync-client registration semantics."""
async def scenario() -> tuple[list[tuple[str, str]], Notification, str]:
"""Record the sync-client calls made by async turn notification wrappers."""
client = AsyncAppServerClient()
event = Notification(
method="unknown/direct",
@@ -85,16 +94,20 @@ def test_async_client_turn_notification_methods_delegate_to_sync_client() -> None:
calls: list[tuple[str, str]] = []
def fake_register(turn_id: str) -> None:
"""Record turn registration through the wrapped sync client."""
calls.append(("register", turn_id))
def fake_unregister(turn_id: str) -> None:
"""Record turn unregistration through the wrapped sync client."""
calls.append(("unregister", turn_id))
def fake_next(turn_id: str) -> Notification:
"""Return one routed notification through the wrapped sync client."""
calls.append(("next", turn_id))
return event
def fake_wait(turn_id: str) -> TurnCompletedNotification:
"""Return one completion through the wrapped sync client."""
calls.append(("wait", turn_id))
return completed
@@ -132,7 +145,9 @@ def test_async_client_turn_notification_methods_delegate_to_sync_client() -> None:
def test_async_stream_text_uses_sync_turn_routing() -> None:
"""Async text streaming should consume the same per-turn routing path as sync."""
async def scenario() -> tuple[list[tuple[str, str]], list[str]]:
"""Record routing calls while streaming two deltas and one completion."""
client = AsyncAppServerClient()
notifications = [
Notification(
@@ -170,17 +185,21 @@ def test_async_stream_text_uses_sync_turn_routing() -> None:
calls: list[tuple[str, str]] = []
def fake_turn_start(thread_id: str, text: str, *, params=None): # type: ignore[no-untyped-def]
"""Return a started turn id while recording the request thread."""
calls.append(("turn_start", thread_id))
return SimpleNamespace(turn=SimpleNamespace(id="turn-1"))
def fake_register(turn_id: str) -> None:
"""Record stream registration for the started turn."""
calls.append(("register", turn_id))
def fake_next(turn_id: str) -> Notification:
"""Return the next queued turn notification."""
calls.append(("next", turn_id))
return notifications.pop(0)
def fake_unregister(turn_id: str) -> None:
"""Record stream cleanup for the started turn."""
calls.append(("unregister", turn_id))
client._sync.turn_start = fake_turn_start # type: ignore[method-assign]


@@ -3,9 +3,9 @@ from __future__ import annotations
from pathlib import Path
from typing import Any
from codex_app_server.client import AppServerClient, _params_dict
from codex_app_server.generated.notification_registry import notification_turn_id
from codex_app_server.generated.v2_all import (
from openai_codex.client import AppServerClient, _params_dict
from openai_codex.generated.notification_registry import notification_turn_id
from openai_codex.generated.v2_all import (
AgentMessageDeltaNotification,
ApprovalsReviewer,
ThreadListParams,
@@ -14,7 +14,7 @@ from codex_app_server.generated.v2_all import (
TurnCompletedNotification,
WarningNotification,
)
from codex_app_server.models import Notification, UnknownNotification
from openai_codex.models import Notification, UnknownNotification
ROOT = Path(__file__).resolve().parents[1]
@@ -45,11 +45,12 @@ def test_generated_params_models_are_snake_case_and_dump_by_alias() -> None:
def test_generated_v2_bundle_has_single_shared_plan_type_definition() -> None:
source = (ROOT / "src" / "codex_app_server" / "generated" / "v2_all.py").read_text()
source = (ROOT / "src" / "openai_codex" / "generated" / "v2_all.py").read_text()
assert source.count("class PlanType(") == 1
def test_thread_resume_response_accepts_auto_review_reviewer() -> None:
"""Generated response models should keep accepting the auto review enum value."""
response = ThreadResumeResponse.model_validate(
{
"approvalPolicy": "on-request",
@@ -66,6 +67,8 @@ def test_thread_resume_response_accepts_auto_review_reviewer() -> None:
"id": "thread-1",
"modelProvider": "openai",
"preview": "",
# The pinned runtime schema requires the session id on threads.
"sessionId": "session-1",
"source": "cli",
"status": {"type": "idle"},
"turns": [],
@@ -135,6 +138,7 @@ def test_invalid_notification_payload_falls_back_to_unknown() -> None:
def test_generated_notification_turn_id_handles_known_payload_shapes() -> None:
"""Generated routing metadata should cover direct, nested, and unscoped payloads."""
direct = AgentMessageDeltaNotification.model_validate(
{
"delta": "hello",
@@ -159,6 +163,7 @@ def test_generated_notification_turn_id_handles_known_payload_shapes() -> None:
def test_turn_notification_router_demuxes_registered_turns() -> None:
"""The router should deliver out-of-order turn events to the matching queues."""
client = AppServerClient()
client.register_turn_notifications("turn-1")
client.register_turn_notifications("turn-2")
@@ -201,6 +206,7 @@ def test_turn_notification_router_demuxes_registered_turns() -> None:
def test_client_reader_routes_interleaved_turn_notifications_by_turn_id() -> None:
"""Reader-loop routing should preserve order within each interleaved turn stream."""
client = AppServerClient()
client.register_turn_notifications("turn-1")
client.register_turn_notifications("turn-2")
@@ -245,6 +251,7 @@ def test_client_reader_routes_interleaved_turn_notifications_by_turn_id() -> None:
]
def fake_read_message() -> dict[str, object]:
"""Feed the reader loop a realistic interleaved stdout sequence."""
if messages:
return messages.pop(0)
raise EOFError
@@ -278,6 +285,7 @@ def test_client_reader_routes_interleaved_turn_notifications_by_turn_id() -> None:
def test_turn_notification_router_buffers_events_before_registration() -> None:
"""Early turn events should be replayed once their TurnHandle registers."""
client = AppServerClient()
client._router.route_notification(
client._coerce_notification(
@@ -302,6 +310,7 @@ def test_turn_notification_router_buffers_events_before_registration() -> None:
def test_turn_notification_router_clears_unregistered_turn_when_completed() -> None:
"""A completed unregistered turn should not leave a pending queue behind."""
client = AppServerClient()
client._router.route_notification(
client._coerce_notification(
@@ -328,6 +337,7 @@ def test_turn_notification_router_clears_unregistered_turn_when_completed() -> None:
def test_turn_notification_router_routes_unknown_turn_notifications() -> None:
"""Unknown notifications should still route when their raw params carry a turn id."""
client = AppServerClient()
client.register_turn_notifications("turn-1")
client.register_turn_notifications("turn-2")


@@ -1,5 +1,6 @@
from __future__ import annotations
import importlib.metadata
import os
import subprocess
import sys
@@ -7,13 +8,14 @@ from pathlib import Path
ROOT = Path(__file__).resolve().parents[1]
GENERATED_TARGETS = [
Path("src/codex_app_server/generated/notification_registry.py"),
Path("src/codex_app_server/generated/v2_all.py"),
Path("src/codex_app_server/api.py"),
Path("src/openai_codex/generated/notification_registry.py"),
Path("src/openai_codex/generated/v2_all.py"),
Path("src/openai_codex/api.py"),
]
def _snapshot_target(root: Path, rel_path: Path) -> dict[str, bytes] | bytes | None:
"""Capture one generated artifact so regeneration drift is easy to compare."""
target = root / rel_path
if not target.exists():
return None
@@ -28,16 +30,22 @@ def _snapshot_target(root: Path, rel_path: Path) -> dict[str, bytes] | bytes | N
def _snapshot_targets(root: Path) -> dict[str, dict[str, bytes] | bytes | None]:
"""Capture all checked-in generated artifacts before and after regeneration."""
return {
str(rel_path): _snapshot_target(root, rel_path) for rel_path in GENERATED_TARGETS
str(rel_path): _snapshot_target(root, rel_path)
for rel_path in GENERATED_TARGETS
}
def test_generated_files_are_up_to_date():
"""Regenerating from the pinned runtime package should leave artifacts unchanged."""
before = _snapshot_targets(ROOT)
# Regenerate contract artifacts via single maintenance entrypoint.
# Regenerate contract artifacts via the pinned runtime package, not a local
# app-server binary from the checkout or CI environment.
assert importlib.metadata.version("openai-codex-cli-bin") == "0.131.0a4"
env = os.environ.copy()
env.pop("CODEX_EXEC_PATH", None)
python_bin = str(Path(sys.executable).parent)
env["PATH"] = f"{python_bin}{os.pathsep}{env.get('PATH', '')}"

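The test strips CODEX_EXEC_PATH and puts the interpreter's bin directory first on PATH so regeneration can only resolve the pinned runtime wheel. A sketch of that sanitization around a subprocess call; the exact arguments passed to the maintenance script are elided in this diff and assumed here:

import os
import subprocess
import sys
from pathlib import Path

env = os.environ.copy()
env.pop("CODEX_EXEC_PATH", None)  # never pick up a local app-server binary
python_bin = str(Path(sys.executable).parent)
env["PATH"] = f"{python_bin}{os.pathsep}{env.get('PATH', '')}"

# Hypothetical invocation; the checked-in test's exact command is not shown.
subprocess.run(
    [sys.executable, "scripts/update_sdk_artifacts.py"],
    env=env,
    check=True,
)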

@@ -4,12 +4,13 @@ import asyncio
from collections import deque
from pathlib import Path
from types import SimpleNamespace
from typing import Any
import pytest
import codex_app_server.api as public_api_module
from codex_app_server.client import AppServerClient
from codex_app_server.generated.v2_all import (
import openai_codex.api as public_api_module
from openai_codex.client import AppServerClient
from openai_codex.generated.v2_all import (
AgentMessageDeltaNotification,
ItemCompletedNotification,
MessagePhase,
@@ -17,20 +18,30 @@ from codex_app_server.generated.v2_all import (
TurnCompletedNotification,
TurnStatus,
)
from codex_app_server.models import InitializeResponse, Notification
from codex_app_server.api import (
from openai_codex.models import InitializeResponse, Notification
from openai_codex.api import (
AsyncCodex,
AsyncThread,
AsyncTurnHandle,
Codex,
RunResult,
TextInput,
Thread,
TurnHandle,
)
from openai_codex.types import AskForApproval
ROOT = Path(__file__).resolve().parents[1]
def _approval_policy_values(params: list[Any]) -> list[object]:
"""Return serialized approval policies from captured Pydantic params."""
return [
param.model_dump(by_alias=True, mode="json").get("approvalPolicy")
for param in params
]
def _delta_notification(
*,
thread_id: str = "thread-1",
@@ -82,6 +93,7 @@ def _item_completed_notification(
text: str = "final text",
phase: MessagePhase | None = None,
) -> Notification:
"""Build a realistic completed-item notification accepted by generated models."""
item: dict[str, object] = {
"id": "item-1",
"text": text,
@@ -93,6 +105,8 @@ def _item_completed_notification(
method="item/completed",
payload=ItemCompletedNotification.model_validate(
{
# The pinned runtime schema requires completion timestamps.
"completedAtMs": 1,
"item": item,
"threadId": thread_id,
"turnId": turn_id,
@@ -226,7 +240,132 @@ def test_async_codex_initializes_only_once_under_concurrency() -> None:
asyncio.run(scenario())
def test_ask_for_approval_exposes_simple_policy_constants() -> None:
"""AskForApproval should expose enum-like aliases for simple policies."""
assert {
"untrusted": AskForApproval.untrusted.model_dump(mode="json"),
"on_failure": AskForApproval.on_failure.model_dump(mode="json"),
"on_request": AskForApproval.on_request.model_dump(mode="json"),
"never": AskForApproval.never.model_dump(mode="json"),
} == {
"untrusted": "untrusted",
"on_failure": "on-failure",
"on_request": "on-request",
"never": "never",
}
def test_sync_api_forces_approval_policy_never_for_started_work() -> None:
"""Sync start methods should send never until approval handling exists."""
captured: list[Any] = []
class FakeClient:
def thread_start(self, params: object) -> SimpleNamespace:
captured.append(params)
return SimpleNamespace(thread=SimpleNamespace(id="thread-started"))
def thread_resume(
self,
_thread_id: str,
params: object,
) -> SimpleNamespace:
captured.append(params)
return SimpleNamespace(thread=SimpleNamespace(id="thread-resumed"))
def thread_fork(
self,
_thread_id: str,
params: object,
) -> SimpleNamespace:
captured.append(params)
return SimpleNamespace(thread=SimpleNamespace(id="thread-forked"))
def turn_start(
self,
_thread_id: str,
_wire_input: object,
*,
params: object | None = None,
) -> SimpleNamespace:
captured.append(params)
return SimpleNamespace(turn=SimpleNamespace(id="turn-1"))
client = FakeClient()
codex = object.__new__(Codex)
codex._client = client
codex.thread_start(approval_policy=AskForApproval.on_request)
codex.thread_resume("thread-1", approval_policy=AskForApproval.on_request)
codex.thread_fork("thread-1", approval_policy=AskForApproval.on_request)
Thread(client, "thread-1").turn(
TextInput("hello"),
approval_policy=AskForApproval.on_request,
)
assert _approval_policy_values(captured) == ["never", "never", "never", "never"]
def test_async_api_forces_approval_policy_never_for_started_work() -> None:
"""Async start methods should send never until approval handling exists."""
async def scenario() -> None:
"""Exercise the async wrappers without spawning a real app server."""
captured: list[Any] = []
class FakeAsyncClient:
async def thread_start(self, params: object) -> SimpleNamespace:
captured.append(params)
return SimpleNamespace(thread=SimpleNamespace(id="thread-started"))
async def thread_resume(
self,
_thread_id: str,
params: object,
) -> SimpleNamespace:
captured.append(params)
return SimpleNamespace(thread=SimpleNamespace(id="thread-resumed"))
async def thread_fork(
self,
_thread_id: str,
params: object,
) -> SimpleNamespace:
captured.append(params)
return SimpleNamespace(thread=SimpleNamespace(id="thread-forked"))
async def turn_start(
self,
_thread_id: str,
_wire_input: object,
*,
params: object | None = None,
) -> SimpleNamespace:
captured.append(params)
return SimpleNamespace(turn=SimpleNamespace(id="turn-1"))
codex = AsyncCodex()
codex._client = FakeAsyncClient()
codex._initialized = True
await codex.thread_start(approval_policy=AskForApproval.on_request)
await codex.thread_resume("thread-1", approval_policy=AskForApproval.on_request)
await codex.thread_fork("thread-1", approval_policy=AskForApproval.on_request)
await AsyncThread(codex, "thread-1").turn(
TextInput("hello"),
approval_policy=AskForApproval.on_request,
)
assert _approval_policy_values(captured) == [
"never",
"never",
"never",
"never",
]
asyncio.run(scenario())
def test_turn_streams_can_consume_multiple_turns_on_one_client() -> None:
"""Two sync TurnHandle streams should advance independently on one client."""
client = AppServerClient()
notifications: dict[str, deque[Notification]] = {
"turn-1": deque(
@@ -257,10 +396,13 @@ def test_turn_streams_can_consume_multiple_turns_on_one_client() -> None:
def test_async_turn_streams_can_consume_multiple_turns_on_one_client() -> None:
"""Two async TurnHandle streams should advance independently on one client."""
async def scenario() -> None:
"""Interleave two async streams backed by separate per-turn queues."""
codex = AsyncCodex()
async def fake_ensure_initialized() -> None:
"""Avoid starting a real app-server process for this stream test."""
return None
notifications: dict[str, deque[Notification]] = {
@@ -279,6 +421,7 @@ def test_async_turn_streams_can_consume_multiple_turns_on_one_client() -> None:
}
async def fake_next_notification(turn_id: str) -> Notification:
"""Return the next notification from the requested per-turn queue."""
return notifications[turn_id].popleft()
codex._ensure_initialized = fake_ensure_initialized # type: ignore[method-assign]
@@ -468,6 +611,7 @@ def test_thread_run_raises_on_failed_turn() -> None:
def test_stream_text_registers_and_consumes_turn_notifications() -> None:
"""stream_text should register, consume, and unregister one turn queue."""
client = AppServerClient()
notifications: deque[Notification] = deque(
[
@@ -482,13 +626,16 @@ def test_stream_text_registers_and_consumes_turn_notifications() -> None:
)
def fake_register(turn_id: str) -> None:
"""Record registration for the turn created by stream_text."""
calls.append(("register", turn_id))
def fake_next(turn_id: str) -> Notification:
"""Return the next queued notification for stream_text."""
calls.append(("next", turn_id))
return notifications.popleft()
def fake_unregister(turn_id: str) -> None:
"""Record cleanup for the turn created by stream_text."""
calls.append(("unregister", turn_id))
client.register_turn_notifications = fake_register # type: ignore[method-assign]
@@ -510,10 +657,13 @@ def test_stream_text_registers_and_consumes_turn_notifications() -> None:
def test_async_thread_run_accepts_string_input_and_returns_run_result() -> None:
"""Async Thread.run should normalize string input and collect routed results."""
async def scenario() -> None:
"""Feed item, usage, and completion events through the async turn stream."""
codex = AsyncCodex()
async def fake_ensure_initialized() -> None:
"""Avoid starting a real app-server process for this run test."""
return None
item_notification = _item_completed_notification(text="Hello async.")
@@ -528,12 +678,14 @@ def test_async_thread_run_accepts_string_input_and_returns_run_result() -> None:
seen: dict[str, object] = {}
async def fake_turn_start(thread_id: str, wire_input: object, *, params=None): # noqa: ANN001,ANN202
"""Capture normalized input and return a synthetic turn id."""
seen["thread_id"] = thread_id
seen["wire_input"] = wire_input
seen["params"] = params
return SimpleNamespace(turn=SimpleNamespace(id="turn-1"))
async def fake_next_notification(_turn_id: str) -> Notification:
"""Return the next queued notification for the synthetic turn."""
return notifications.popleft()
codex._ensure_initialized = fake_ensure_initialized # type: ignore[method-assign]
@@ -556,10 +708,13 @@ def test_async_thread_run_accepts_string_input_and_returns_run_result() -> None:
def test_async_thread_run_uses_last_completed_assistant_message_as_final_response() -> (
None
):
"""Async run should use the last final assistant message as the response text."""
async def scenario() -> None:
"""Feed two completed agent messages through the async per-turn stream."""
codex = AsyncCodex()
async def fake_ensure_initialized() -> None:
"""Avoid starting a real app-server process for this run test."""
return None
first_item_notification = _item_completed_notification(
@@ -577,9 +732,11 @@ def test_async_thread_run_uses_last_completed_assistant_message_as_final_response() -> (
)
async def fake_turn_start(thread_id: str, wire_input: object, *, params=None): # noqa: ANN001,ANN202,ARG001
"""Return a synthetic turn id after AsyncThread.run builds input."""
return SimpleNamespace(turn=SimpleNamespace(id="turn-1"))
async def fake_next_notification(_turn_id: str) -> Notification:
"""Return the next queued notification for that synthetic turn."""
return notifications.popleft()
codex._ensure_initialized = fake_ensure_initialized # type: ignore[method-assign]
@@ -598,10 +755,13 @@ def test_async_thread_run_uses_last_completed_assistant_message_as_final_response() -> (
def test_async_thread_run_returns_none_when_only_commentary_messages_complete() -> None:
"""Async Thread.run should ignore commentary-only messages for final text."""
async def scenario() -> None:
"""Feed a commentary item and completion through the async turn stream."""
codex = AsyncCodex()
async def fake_ensure_initialized() -> None:
"""Avoid starting a real app-server process for this run test."""
return None
commentary_notification = _item_completed_notification(
@@ -616,9 +776,11 @@ def test_async_thread_run_returns_none_when_only_commentary_messages_complete()
)
async def fake_turn_start(thread_id: str, wire_input: object, *, params=None): # noqa: ANN001,ANN202,ARG001
"""Return a synthetic turn id for commentary-only output."""
return SimpleNamespace(turn=SimpleNamespace(id="turn-1"))
async def fake_next_notification(_turn_id: str) -> Notification:
"""Return the next queued commentary/completion notification."""
return notifications.popleft()
codex._ensure_initialized = fake_ensure_initialized # type: ignore[method-assign]


@@ -6,13 +6,87 @@ import tomllib
from pathlib import Path
from typing import Any
import codex_app_server
from codex_app_server import AppServerConfig, RunResult
from codex_app_server.models import InitializeResponse
from codex_app_server.api import AsyncCodex, AsyncThread, Codex, Thread
import openai_codex
import openai_codex.types as public_types
from openai_codex import (
AppServerConfig,
AsyncCodex,
AsyncThread,
Codex,
RunResult,
Thread,
)
from openai_codex.types import InitializeResponse
EXPECTED_ROOT_EXPORTS = [
"__version__",
"AppServerConfig",
"Codex",
"AsyncCodex",
"Thread",
"AsyncThread",
"TurnHandle",
"AsyncTurnHandle",
"RunResult",
"Input",
"InputItem",
"TextInput",
"ImageInput",
"LocalImageInput",
"SkillInput",
"MentionInput",
"retry_on_overload",
"AppServerError",
"TransportClosedError",
"JsonRpcError",
"AppServerRpcError",
"ParseError",
"InvalidRequestError",
"MethodNotFoundError",
"InvalidParamsError",
"InternalRpcError",
"ServerBusyError",
"RetryLimitExceededError",
"is_retryable_error",
]
EXPECTED_TYPES_EXPORTS = [
"ApprovalsReviewer",
"AskForApproval",
"InitializeResponse",
"JsonObject",
"ModelListResponse",
"Notification",
"Personality",
"PlanType",
"ReasoningEffort",
"ReasoningSummary",
"SandboxMode",
"SandboxPolicy",
"SortDirection",
"ThreadArchiveResponse",
"ThreadCompactStartResponse",
"ThreadItem",
"ThreadListCwdFilter",
"ThreadListResponse",
"ThreadReadResponse",
"ThreadSetNameResponse",
"ThreadSortKey",
"ThreadSource",
"ThreadSourceKind",
"ThreadStartSource",
"ThreadTokenUsage",
"ThreadTokenUsageUpdatedNotification",
"Turn",
"TurnCompletedNotification",
"TurnInterruptResponse",
"TurnStatus",
"TurnSteerResponse",
]
def _keyword_only_names(fn: object) -> list[str]:
"""Return only user-facing keyword-only parameter names for a public method."""
signature = inspect.signature(fn)
return [
param.name
@@ -22,6 +96,7 @@ def _keyword_only_names(fn: object) -> list[str]:
def _assert_no_any_annotations(fn: object) -> None:
"""Reject loose annotations on public wrapper methods."""
signature = inspect.signature(fn)
for param in signature.parameters.values():
if param.annotation is Any:
@@ -33,27 +108,106 @@ def _assert_no_any_annotations(fn: object) -> None:
def test_root_exports_app_server_config() -> None:
"""The root package should expose the process configuration object."""
assert AppServerConfig.__name__ == "AppServerConfig"
def test_root_exports_run_result() -> None:
"""The root package should expose the common-case run result wrapper."""
assert RunResult.__name__ == "RunResult"
def test_package_and_default_client_versions_follow_project_version() -> None:
"""The importable package version should stay aligned with pyproject metadata."""
pyproject_path = Path(__file__).resolve().parents[1] / "pyproject.toml"
pyproject = tomllib.loads(pyproject_path.read_text())
assert codex_app_server.__version__ == pyproject["project"]["version"]
assert AppServerConfig().client_version == codex_app_server.__version__
assert openai_codex.__version__ == pyproject["project"]["version"]
assert AppServerConfig().client_version == openai_codex.__version__
def test_package_includes_py_typed_marker() -> None:
marker = resources.files("codex_app_server").joinpath("py.typed")
"""The wheel should advertise that inline type information is available."""
marker = resources.files("openai_codex").joinpath("py.typed")
assert marker.is_file()
def test_package_root_exports_only_public_api() -> None:
"""The package root should expose the supported SDK surface, not internals."""
assert openai_codex.__all__ == EXPECTED_ROOT_EXPORTS
assert {
name: hasattr(openai_codex, name) for name in EXPECTED_ROOT_EXPORTS
} == {name: True for name in EXPECTED_ROOT_EXPORTS}
assert {
"AppServerClient": hasattr(openai_codex, "AppServerClient"),
"AsyncAppServerClient": hasattr(openai_codex, "AsyncAppServerClient"),
"InitializeResponse": hasattr(openai_codex, "InitializeResponse"),
"ThreadStartParams": hasattr(openai_codex, "ThreadStartParams"),
"TurnStartParams": hasattr(openai_codex, "TurnStartParams"),
"TurnCompletedNotification": hasattr(
openai_codex, "TurnCompletedNotification"
),
"TurnStatus": hasattr(openai_codex, "TurnStatus"),
} == {
"AppServerClient": False,
"AsyncAppServerClient": False,
"InitializeResponse": False,
"ThreadStartParams": False,
"TurnStartParams": False,
"TurnCompletedNotification": False,
"TurnStatus": False,
}
def test_package_star_import_matches_public_api() -> None:
"""Star imports should follow the same explicit public API list."""
namespace: dict[str, object] = {}
exec("from openai_codex import *", namespace)
exported = set(namespace) - {"__builtins__"}
assert exported == set(EXPECTED_ROOT_EXPORTS)
def test_types_module_exports_curated_public_types() -> None:
"""The public type module should be the supported place for app-server models."""
assert public_types.__all__ == EXPECTED_TYPES_EXPORTS
assert {name: hasattr(public_types, name) for name in EXPECTED_TYPES_EXPORTS} == {
name: True for name in EXPECTED_TYPES_EXPORTS
}
def test_types_star_import_matches_public_types() -> None:
"""Star imports from the type module should match its explicit export list."""
namespace: dict[str, object] = {}
exec("from openai_codex.types import *", namespace)
exported = set(namespace) - {"__builtins__"}
assert exported == set(EXPECTED_TYPES_EXPORTS)
def test_examples_use_public_import_surfaces() -> None:
"""Examples should teach users the public root and type-module imports only."""
examples_root = Path(__file__).resolve().parents[1] / "examples"
private_import_markers = [
"openai_codex.api",
"openai_codex.client",
"openai_codex.generated",
"openai_codex.models",
"openai_codex.retry",
]
offenders = {
str(path.relative_to(examples_root)): marker
for path in examples_root.rglob("*.py")
for marker in private_import_markers
if marker in path.read_text()
}
assert offenders == {}
def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"""Generated convenience methods should expose typed Pythonic keyword names."""
expected = {
Codex.thread_start: [
"approval_policy",
@@ -70,6 +224,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"service_name",
"service_tier",
"session_start_source",
"thread_source",
],
Codex.thread_list: [
"archived",
@@ -108,6 +263,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"model_provider",
"sandbox",
"service_tier",
"thread_source",
],
Thread.turn: [
"approval_policy",
@@ -148,6 +304,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"service_name",
"service_tier",
"session_start_source",
"thread_source",
],
AsyncCodex.thread_list: [
"archived",
@@ -186,6 +343,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
"model_provider",
"sandbox",
"service_tier",
"thread_source",
],
AsyncThread.turn: [
"approval_policy",
@@ -223,6 +381,7 @@ def test_generated_public_signatures_are_snake_case_and_typed() -> None:
def test_lifecycle_methods_are_codex_scoped() -> None:
"""Lifecycle operations should hang off the client rather than thread objects."""
assert hasattr(Codex, "thread_resume")
assert hasattr(Codex, "thread_fork")
assert hasattr(Codex, "thread_archive")
@@ -253,6 +412,7 @@ def test_lifecycle_methods_are_codex_scoped() -> None:
def test_initialize_metadata_parses_user_agent_shape() -> None:
"""Initialize metadata should accept the legacy user-agent-only payload shape."""
payload = InitializeResponse.model_validate({"userAgent": "codex-cli/1.2.3"})
parsed = Codex._validate_initialize(payload)
assert parsed is payload
@@ -263,6 +423,7 @@ def test_initialize_metadata_parses_user_agent_shape() -> None:
def test_initialize_metadata_requires_non_empty_information() -> None:
"""Initialize metadata should fail when the runtime gives no identity signal."""
try:
Codex._validate_initialize(InitializeResponse.model_validate({}))
except RuntimeError as exc:


@@ -205,7 +205,7 @@ def test_real_initialize_and_model_list(runtime_env: PreparedRuntimeEnv) -> None
textwrap.dedent(
"""
import json
-from codex_app_server import Codex
+from openai_codex import Codex
with Codex() as codex:
models = codex.models(include_hidden=True)
@@ -234,7 +234,7 @@ def test_real_thread_and_turn_start_smoke(runtime_env: PreparedRuntimeEnv) -> None
textwrap.dedent(
"""
import json
-from codex_app_server import Codex, TextInput
+from openai_codex import Codex, TextInput
with Codex() as codex:
thread = codex.thread_start(
@@ -271,7 +271,7 @@ def test_real_thread_run_convenience_smoke(runtime_env: PreparedRuntimeEnv) -> None
textwrap.dedent(
"""
import json
-from codex_app_server import Codex
+from openai_codex import Codex
with Codex() as codex:
thread = codex.thread_start(
@@ -304,7 +304,7 @@ def test_real_async_thread_turn_usage_and_ids_smoke(
"""
import asyncio
import json
-from codex_app_server import AsyncCodex, TextInput
+from openai_codex import AsyncCodex, TextInput
async def main():
async with AsyncCodex() as codex:
@@ -347,7 +347,7 @@ def test_real_async_thread_run_convenience_smoke(
"""
import asyncio
import json
-from codex_app_server import AsyncCodex
+from openai_codex import AsyncCodex
async def main():
async with AsyncCodex() as codex:
@@ -436,7 +436,7 @@ def test_real_streaming_smoke_turn_completed(runtime_env: PreparedRuntimeEnv) -> None
textwrap.dedent(
"""
import json
-from codex_app_server import Codex, TextInput
+from openai_codex import Codex, TextInput
with Codex() as codex:
thread = codex.thread_start(
@@ -469,7 +469,7 @@ def test_real_turn_interrupt_smoke(runtime_env: PreparedRuntimeEnv) -> None:
textwrap.dedent(
"""
import json
-from codex_app_server import Codex, TextInput
+from openai_codex import Codex, TextInput
with Codex() as codex:
thread = codex.thread_start(

sdk/python/uv.lock (generated)

@@ -3,9 +3,12 @@ revision = 3
requires-python = ">=3.10"
[options]
exclude-newer = "2026-04-20T18:19:27.620299Z"
exclude-newer = "2026-05-02T06:28:46.47929Z"
exclude-newer-span = "P7D"
[options.exclude-newer-package]
openai-codex-cli-bin = "2026-05-10T00:00:00Z"
[[package]]
name = "annotated-types"
version = "0.7.0"
@@ -278,10 +281,11 @@ wheels = [
]
[[package]]
name = "openai-codex-app-server-sdk"
version = "0.116.0a1"
name = "openai-codex"
version = "0.131.0a4"
source = { editable = "." }
dependencies = [
{ name = "openai-codex-cli-bin" },
{ name = "pydantic" },
]
@@ -295,12 +299,26 @@ dev = [
[package.metadata]
requires-dist = [
{ name = "datamodel-code-generator", marker = "extra == 'dev'", specifier = "==0.31.2" },
{ name = "openai-codex-cli-bin", specifier = "==0.131.0a4" },
{ name = "pydantic", specifier = ">=2.12" },
{ name = "pytest", marker = "extra == 'dev'", specifier = ">=8.0" },
{ name = "ruff", marker = "extra == 'dev'", specifier = ">=0.11" },
]
provides-extras = ["dev"]
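# Note: the runtime pin (==0.131.0a4) matches the SDK's own version exactly, rather than allowing a range.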
+[[package]]
+name = "openai-codex-cli-bin"
+version = "0.131.0a4"
+source = { registry = "https://pypi.org/simple" }
+wheels = [
+{ url = "https://files.pythonhosted.org/packages/6b/9f/f9fc4bb1b2b7a20d4d65143ebb4c4dcd2301a718183b539ecb5b1c0ac3ec/openai_codex_cli_bin-0.131.0a4-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:db0f3cb7dda310641ac04fbaf3f128693a3817ab83ae59b67a3c9c74bd53f8b8", size = 88367585, upload-time = "2026-05-09T06:14:09.453Z" },
+{ url = "https://files.pythonhosted.org/packages/dc/39/eb95ed0e8156669e895a192dec760be07dabe891c3c6340f7c6487b9a976/openai_codex_cli_bin-0.131.0a4-py3-none-macosx_11_0_arm64.whl", hash = "sha256:6cae5af6edca7f6d3f0bcbbd93cfc8a6dc3e33fb5955af21ae492b6d5d0dcb72", size = 79245567, upload-time = "2026-05-09T06:14:13.581Z" },
+{ url = "https://files.pythonhosted.org/packages/0c/92/ade176fa78d746d5ff7a6e371d64740c0d95ab299b0dd58a5404b89b3915/openai_codex_cli_bin-0.131.0a4-py3-none-musllinux_1_1_aarch64.whl", hash = "sha256:5728f9887baf62d7e72f4f242093b3ff81e26c81d80d346fe1eef7eda6838aa8", size = 77758628, upload-time = "2026-05-09T06:14:18.374Z" },
+{ url = "https://files.pythonhosted.org/packages/28/e6/bfe6c65f8e3e5499f71b24c3b6e8d07e4d426543d25e429b9b141b544e5f/openai_codex_cli_bin-0.131.0a4-py3-none-musllinux_1_1_x86_64.whl", hash = "sha256:d7a47fd3667fbcc216593839c202deffa056e9b3d46c6933e72594d461f4fea0", size = 84535509, upload-time = "2026-05-09T06:14:22.851Z" },
+{ url = "https://files.pythonhosted.org/packages/bd/b7/53dc094a691ab6f2ca079e8e865b122843809ac4fad51cac4d59021e599d/openai_codex_cli_bin-0.131.0a4-py3-none-win_amd64.whl", hash = "sha256:c61bcf029672494c4c7fdc8567dbaa659a48bb75641d91c2ade27c1e46803434", size = 88185543, upload-time = "2026-05-09T06:14:27.282Z" },
+{ url = "https://files.pythonhosted.org/packages/82/99/e0852ffcf9b4d2794fef83e0c3a267b3c773a776f136e9f7ce19f0c8df42/openai_codex_cli_bin-0.131.0a4-py3-none-win_arm64.whl", hash = "sha256:bbde750186861f102e346ac066f4e9608f515f7b71b16a6e8b7ef1ddc02a97a5", size = 81196380, upload-time = "2026-05-09T06:14:32.103Z" },
+]
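# One binary wheel per supported platform (macOS x86_64/arm64, musl Linux aarch64/x86_64,
# Windows amd64/arm64); the installer selects the wheel whose platform tag matches.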
[[package]]
name = "packaging"
version = "26.1"