Compare commits


2 Commits

Author SHA1 Message Date
Eric Traut
de9c5c0226 Fix Windows doctor npm root probe (#22967)
## Why
On Windows, npm-managed installs expose the working shim as `npm.cmd`.
`codex doctor` probed bare `npm`, which could incorrectly report that
npm global-root inspection was unavailable even when the install was
healthy.

Fixes #22964.

## What changed
- Use `npm.cmd` for the doctor npm-root probe on Windows.
- Keep the existing `npm` probe on non-Windows platforms.
2026-05-16 00:39:27 -07:00
Ahmed Ibrahim
326e31ab65 [codex] Refine Python SDK user-facing docs (#22941)
## Summary
- Remove maintainer and release-process wording from the Python SDK
README and docs.
- Rewrite SDK-facing comments/docstrings so they read as standalone
product documentation.
- Add a real app-server integration smoke test that follows the public
quickstart-style `Codex() -> thread_start() -> run()` path.

## Integration coverage
- Add `test_real_quickstart_style_flow_smoke` to the real app-server
integration suite.

## Validation
- Local tests were not run, per repo guidance. CI should validate this
branch once the PR is opened.
2026-05-15 19:55:05 -07:00
7 changed files with 43 additions and 69 deletions

View File

@@ -105,6 +105,10 @@ const COLOR_ENV_VARS: &[&str] = &[
 const TERMINAL_DIMENSION_ENV_VARS: &[&str] = &["COLUMNS", "LINES"];
 const TERMINFO_ENV_VARS: &[&str] = &["TERMINFO", "TERMINFO_DIRS"];
 const LOCALE_ENV_VARS: &[&str] = &["LC_ALL", "LC_CTYPE", "LANG"];
+#[cfg(windows)]
+const NPM_COMMAND: &str = "npm.cmd";
+#[cfg(not(windows))]
+const NPM_COMMAND: &str = "npm";
 const REMOTE_TERMINAL_ENV_VARS: &[&str] = &[
     "SSH_TTY",
     "SSH_CONNECTION",
@@ -884,7 +888,7 @@ fn npm_global_root_check() -> NpmRootCheck {
         return NpmRootCheck::MissingPackageRoot;
     };
-    let output = match run_command("npm", ["root", "-g"]) {
+    let output = match run_command(NPM_COMMAND, ["root", "-g"]) {
         Ok(output) => output,
         Err(err) => return NpmRootCheck::NpmUnavailable(err),
     };

View File

@@ -17,10 +17,8 @@ source .venv/bin/activate
 ```
 Published SDK builds pin an exact `openai-codex-cli-bin` runtime dependency
-with the same version as the SDK. For local repo development, either pass
-`AppServerConfig(codex_bin=...)` to point at a local build explicitly, or use
-the repo examples/notebook bootstrap which installs the pinned runtime package
-automatically.
+with the same version as the SDK. Pass `AppServerConfig(codex_bin=...)` only
+when you intentionally want to run against a specific local app-server binary.
 ## Quickstart
@@ -55,49 +53,12 @@ python examples/01_quickstart_constructor/sync.py
 python examples/01_quickstart_constructor/async.py
 ```
-## Runtime packaging
-The repo no longer checks `codex` binaries into `sdk/python`.
+## Runtime
+Published SDK builds are pinned to an exact `openai-codex-cli-bin` package
+version, and that runtime package carries the platform-specific binary for the
+target wheel. The SDK package version and runtime package version must match.
+For local repo development, the checked-in `sdk/python-runtime` package is only
+a template for staged release artifacts. Editable installs should use an
+explicit `codex_bin` override for manual SDK usage; the repo examples and
+notebook bootstrap the pinned runtime package automatically.
-## Maintainer workflow
-```bash
-cd sdk/python
-uv sync
-python scripts/update_sdk_artifacts.py generate-types
-python scripts/update_sdk_artifacts.py \
-    stage-sdk \
-    /tmp/codex-python-release/openai-codex \
-    --codex-version <codex-release-tag-or-pep440-version>
-python scripts/update_sdk_artifacts.py \
-    stage-runtime \
-    /tmp/codex-python-release/openai-codex-cli-bin \
-    /path/to/codex \
-    --codex-version <codex-release-tag-or-pep440-version>
-```
-Pass `--platform-tag ...` to `stage-runtime` when the wheel should be tagged for
-a Rust target that differs from the Python build host. The intended one-off
-matrix is `macosx_11_0_arm64`, `macosx_10_9_x86_64`,
-`musllinux_1_1_aarch64`, `musllinux_1_1_x86_64`, `win_arm64`, and
-`win_amd64`.
-This supports the CI release flow:
-- run `generate-types` before packaging
-- stage `openai-codex` once with an exact `openai-codex-cli-bin==...` dependency
-- stage `openai-codex-cli-bin` on each supported platform runner with the same pinned runtime version
-- build and publish `openai-codex-cli-bin` as platform wheels only through PyPI trusted publishing; do not publish an sdist
 ## Compatibility and versioning
 - Package: `openai-codex`
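The pinning rule in this README diff (the SDK version must equal the `openai-codex-cli-bin` runtime version exactly) can be sketched as a trivial guard. This is an illustration only, not SDK code, and the version strings below are made up:

```python
def runtime_pin_satisfied(sdk_version: str, runtime_version: str) -> bool:
    # A published SDK build declares openai-codex-cli-bin == <sdk_version>,
    # so anything other than an exact string match means a broken install.
    return sdk_version == runtime_version


# Hypothetical versions, purely for illustration.
print(runtime_pin_satisfied("1.2.3", "1.2.3"))
print(runtime_pin_satisfied("1.2.3", "1.2.4"))
```

An exact `==` pin (rather than a compatible-release range) is what lets the runtime wheel carry a binary built from the same source tree as the SDK.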

View File

@@ -59,29 +59,6 @@ Common causes:
 - local auth/session is missing
 - incompatible/old app-server
-Maintainers stage releases by building the SDK once and the runtime once per
-platform with the same pinned runtime version. Publish `openai-codex-cli-bin`
-as platform wheels only; do not publish an sdist:
-```bash
-cd sdk/python
-python scripts/update_sdk_artifacts.py generate-types
-python scripts/update_sdk_artifacts.py \
-    stage-sdk \
-    /tmp/codex-python-release/openai-codex \
-    --codex-version <codex-release-tag-or-pep440-version>
-python scripts/update_sdk_artifacts.py \
-    stage-runtime \
-    /tmp/codex-python-release/openai-codex-cli-bin \
-    /path/to/codex \
-    --codex-version <codex-release-tag-or-pep440-version>
-```
-If you are packaging a binary for a different target than the Python build
-host, pass `--platform-tag ...` to `stage-runtime`. The intended one-off matrix
-is `macosx_11_0_arm64`, `macosx_10_9_x86_64`, `musllinux_1_1_aarch64`,
-`musllinux_1_1_x86_64`, `win_arm64`, and `win_amd64`.
 ## Why does a turn "hang"?
 A turn is complete only when `turn/completed` arrives for that turn ID.
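The completion rule stated in that troubleshooting doc can be sketched with a toy event loop. Hedged: the dict-shaped events and the `item/agent_message` event name here are illustrative stand-ins, not the app-server wire format; only `turn/completed` comes from the doc itself.

```python
def wait_for_turn_completed(events, turn_id: str) -> list[dict]:
    """Collect events until `turn/completed` arrives for this turn ID.

    Completions belonging to other turns must not end the wait, which
    is why the unrelated "t0" completion below does not unblock "t1".
    """
    collected = []
    for event in events:
        collected.append(event)
        if event.get("type") == "turn/completed" and event.get("turn_id") == turn_id:
            return collected
    raise TimeoutError(f"stream ended without turn/completed for {turn_id!r}")


events = [
    {"type": "item/agent_message", "turn_id": "t1"},
    {"type": "turn/completed", "turn_id": "t0"},  # a different turn: keep waiting
    {"type": "turn/completed", "turn_id": "t1"},
]
print(len(wait_for_turn_completed(iter(events), "t1")))
```

A client that treats any `turn/completed` (or any pause in output) as the end of its own turn will appear to "hang" or truncate results; matching on the turn ID is the fix.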

View File

@@ -2,7 +2,7 @@
 This is the fastest path from install to a multi-turn thread using the public SDK surface.
-The SDK is experimental. Treat the API, bundled runtime strategy, and packaging details as unstable until the first public release.
+The SDK is experimental, so the public API and runtime requirements may keep evolving before the first public release.
 ## 1) Install

View File

@@ -113,7 +113,7 @@ def _approval_mode_override_settings(
 class Codex:
-    """Minimal typed SDK surface for app-server v2."""
+    """Typed Python client for app-server v2 workflows."""
 
     def __init__(self, config: AppServerConfig | None = None) -> None:
         self._client = AppServerClient(config=config)

View File

@@ -1,4 +1,4 @@
"""Public generated app-server model exports for type annotations and matching."""
"""Public app-server model exports for type annotations and matching."""
from __future__ import annotations

View File

@@ -295,6 +295,38 @@ def test_real_thread_run_convenience_smoke(runtime_env: PreparedRuntimeEnv) -> N
     assert isinstance(data["has_usage"], bool)
+
+
+def test_real_quickstart_style_flow_smoke(runtime_env: PreparedRuntimeEnv) -> None:
+    data = _run_json_python(
+        runtime_env,
+        textwrap.dedent(
+            """
+            import json
+            from openai_codex import Codex
+            with Codex() as codex:
+                thread = codex.thread_start()
+                result = thread.run("Say hello in one sentence.")
+                print(json.dumps({
+                    "thread_id": thread.id,
+                    "final_response": result.final_response,
+                    "items_count": len(result.items),
+                }))
+            """
+        ),
+    )
+    assert {
+        "thread_id_is_text": isinstance(data["thread_id"], str) and bool(data["thread_id"].strip()),
+        "final_response_is_text": isinstance(data["final_response"], str)
+        and bool(data["final_response"].strip()),
+        "items_count_is_int": isinstance(data["items_count"], int),
+    } == {
+        "thread_id_is_text": True,
+        "final_response_is_text": True,
+        "items_count_is_int": True,
+    }
+
+
 def test_real_async_thread_turn_usage_and_ids_smoke(
     runtime_env: PreparedRuntimeEnv,
 ) -> None: