## Why

Cloud-hosted sessions need a way for the service that starts or manages a thread to provide session-owned config without treating all config as if it came from the same user/project/workspace TOML stack. The important boundary is ownership: some values should be controlled by the session/orchestrator, some by the authenticated user, and later some may come from the executor. The earlier broad config-store shape made that boundary too fuzzy and overlapped heavily with the existing filesystem-backed config loader.

This PR starts with the smaller piece we need now: a typed session config loader that can feed the existing config layer stack while preserving the normal precedence and merge behavior.

## What Changed

- Added `ThreadConfigLoader` and related typed payloads in `codex-config`.
  - `SessionThreadConfig` currently supports `model_provider`, `model_providers`, and feature flags.
  - `UserThreadConfig` is present as an ownership boundary, but does not yet add TOML-backed fields.
  - `NoopThreadConfigLoader` preserves existing behavior when no external loader is configured.
  - `StaticThreadConfigLoader` supports tests and simple callers.
- Taught thread config sources to produce ordinary `ConfigLayerEntry` values so the existing `ConfigLayerStack` remains the place where precedence and merging happen.
- Wired the loader through `ConfigBuilder`, the config loader, and app-server startup paths so app-server can provide session-owned config before deriving a thread config.
- Added coverage for:
  - translating typed thread config into config layers,
  - inserting thread config layers into the stack at the right precedence,
  - applying session-provided model provider and feature settings when app-server derives config from thread params.

## Follow-Ups

This intentionally stops short of adding the remote/service transport. The next pieces are expected to be:

1. Define the proto/API shape for this interface.
2. Add a client implementation that can source session config from the service side.

## Verification

- Added unit coverage in `codex-config` for the loader and layer conversion.
- Added `codex-core` config loader coverage for thread config layer precedence.
- Added app-server coverage that verifies session thread config wins over request-provided config for model provider and feature settings.
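The layering behavior described above can be sketched with a toy merge. This is a simplified stand-in, not the real `codex-config` API: the actual `SessionThreadConfig` and `ConfigLayerEntry` types are typed per field, and `ConfigLayerStack` does the real merging. The sketch only shows the precedence property the tests assert: a session-owned layer inserted later in the stack wins over request-provided values.

```rust
use std::collections::HashMap;

// Illustrative stand-in for a config layer; the real `ConfigLayerEntry`
// in `codex-config` carries typed fields rather than a string map.
struct ConfigLayer {
    values: HashMap<&'static str, String>,
}

// Later layers win, mirroring how a session-owned layer inserted at the
// right precedence overrides request-provided values.
fn merge_layers(layers: &[ConfigLayer]) -> HashMap<&'static str, String> {
    let mut merged = HashMap::new();
    for layer in layers {
        for (k, v) in &layer.values {
            merged.insert(*k, v.clone());
        }
    }
    merged
}

fn main() {
    let request_layer = ConfigLayer {
        values: HashMap::from([("model_provider", "request-provider".to_string())]),
    };
    let session_layer = ConfigLayer {
        values: HashMap::from([("model_provider", "session-provider".to_string())]),
    };
    // The session layer sits later in the stack, so it wins.
    let merged = merge_layers(&[request_layer, session_layer]);
    assert_eq!(merged["model_provider"], "session-provider");
}
```

The design choice this models is that the thread config loader only *produces* layers; precedence stays centralized in the existing stack rather than being re-implemented per source.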
# codex-app-server-client

Shared in-process app-server client used by conversational CLI surfaces:

- `codex-exec`
- `codex-tui`
## Purpose

This crate centralizes startup and lifecycle management for an in-process
`codex-app-server` runtime, so CLI clients do not need to duplicate:
- app-server bootstrap and initialize handshake
- in-memory request/event transport wiring
- lifecycle orchestration around caller-provided startup identity
- graceful shutdown behavior
## Startup identity

Callers pass both the app-server `SessionSource` and the initialize
`client_info.name` explicitly when starting the facade. That keeps thread
metadata (for example in `thread/list` and `thread/read`) aligned with the
originating runtime without baking TUI/exec-specific policy into the shared
client layer.
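A hypothetical sketch of that contract (the real facade signature is not shown here; `StartupIdentity` and `start_facade` are illustrative names, and only the `SessionSource`/`client_info.name` pairing comes from the crate's description):

```rust
// Illustrative: the caller supplies both identity values explicitly,
// rather than the shared client layer guessing them per surface.
#[derive(Clone, Copy, Debug)]
enum SessionSource {
    Tui,
    Exec,
}

struct StartupIdentity {
    session_source: SessionSource,
    client_name: String, // becomes the initialize `client_info.name`
}

// Stand-in for facade startup: thread metadata stays tagged with the
// originating runtime, not with a policy baked into the shared crate.
fn start_facade(identity: StartupIdentity) -> String {
    format!("{:?}:{}", identity.session_source, identity.client_name)
}

fn main() {
    let tag = start_facade(StartupIdentity {
        session_source: SessionSource::Exec,
        client_name: "codex-exec".to_string(),
    });
    assert_eq!(tag, "Exec:codex-exec");
}
```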
## Transport model

The in-process path uses typed channels:

- client -> server: `ClientRequest` / `ClientNotification`
- server -> client: `InProcessServerEvent`, `ServerRequest`, `ServerNotification`, `LegacyNotification`

JSON serialization is still used at external transport boundaries
(stdio/websocket), but the in-process hot path is typed.

Typed requests still receive app-server responses through the JSON-RPC result
envelope internally. That is intentional: the in-process path is meant to
preserve app-server semantics while removing the process boundary, not to
introduce a second response contract.
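The typed-channel shape can be illustrated with standard `mpsc` channels. The enums below are minimal stand-ins for the real `ClientRequest` / `InProcessServerEvent` types (which are much richer), and the synchronous loop stands in for the async worker:

```rust
use std::sync::mpsc;

// Illustrative request/event types; the real enums carry full payloads.
#[derive(Debug, PartialEq)]
enum ClientRequest {
    ThreadStart { id: u64 },
}

#[derive(Debug, PartialEq)]
enum InProcessServerEvent {
    Response { id: u64, ok: bool },
}

fn main() {
    // client -> server and server -> client are separate typed channels,
    // so nothing is serialized to JSON on the in-process hot path.
    let (req_tx, req_rx) = mpsc::channel::<ClientRequest>();
    let (evt_tx, evt_rx) = mpsc::channel::<InProcessServerEvent>();

    req_tx.send(ClientRequest::ThreadStart { id: 1 }).unwrap();

    // A server task would consume typed requests and emit typed events.
    match req_rx.recv().unwrap() {
        ClientRequest::ThreadStart { id } => {
            evt_tx
                .send(InProcessServerEvent::Response { id, ok: true })
                .unwrap();
        }
    }

    assert_eq!(
        evt_rx.recv().unwrap(),
        InProcessServerEvent::Response { id: 1, ok: true }
    );
}
```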
## Bootstrap behavior

The client facade starts an already-initialized in-process runtime, but thread
bootstrap still follows the normal app-server flow:

- the caller sends `thread/start` or `thread/resume`
- app-server returns the immediate typed response
- richer session metadata may arrive later as a `SessionConfigured` legacy event

Surfaces such as TUI and exec may therefore need a short bootstrap phase where
they reconcile startup response data with later events.
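One way a surface might implement that reconciliation, sketched with illustrative names (`ThreadState` and its methods are not part of the crate's API): state is seeded from the immediate typed response, and the later event only fills in what is still missing.

```rust
// Illustrative bootstrap state for a TUI/exec surface.
#[derive(Default, Debug)]
struct ThreadState {
    thread_id: Option<String>,
    model: Option<String>,
}

impl ThreadState {
    // Seed from the immediate typed thread/start response.
    fn apply_start_response(&mut self, thread_id: &str) {
        self.thread_id = Some(thread_id.to_string());
    }

    // A later SessionConfigured-style event fills gaps without
    // clobbering anything the response already established.
    fn apply_session_configured(&mut self, model: &str) {
        if self.model.is_none() {
            self.model = Some(model.to_string());
        }
    }

    fn ready(&self) -> bool {
        self.thread_id.is_some() && self.model.is_some()
    }
}

fn main() {
    let mut state = ThreadState::default();
    state.apply_start_response("thread-123");
    assert!(!state.ready()); // still waiting on session metadata
    state.apply_session_configured("some-model");
    assert!(state.ready());
}
```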
## Backpressure and shutdown

- Queues are bounded and use `DEFAULT_IN_PROCESS_CHANNEL_CAPACITY` by default.
- Full queues return explicit overload behavior instead of unbounded growth.
- `shutdown()` performs a bounded graceful shutdown and then aborts if the
  timeout is exceeded.

If the client falls behind on event consumption, the worker emits
`InProcessServerEvent::Lagged` and may reject pending server requests so
approval flows do not hang indefinitely behind a saturated queue.
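A minimal sketch of the full-queue behavior using a bounded `sync_channel`. The capacity, the `Event` enum, and the lag bookkeeping here are illustrative, not the crate's actual worker logic; they only show the pattern of returning overload explicitly and surfacing a lag marker to the consumer instead of blocking or growing the queue:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

// Illustrative capacity; stands in for DEFAULT_IN_PROCESS_CHANNEL_CAPACITY.
const CHANNEL_CAPACITY: usize = 2;

#[derive(Debug, PartialEq)]
enum Event {
    Normal(u32),
    Lagged, // stands in for InProcessServerEvent::Lagged
}

fn main() {
    let (tx, rx) = sync_channel::<Event>(CHANNEL_CAPACITY);
    let mut lagged = false;

    for i in 0..4 {
        // try_send returns immediately when the bounded queue is full,
        // instead of blocking the worker or growing without bound.
        if let Err(TrySendError::Full(_)) = tx.try_send(Event::Normal(i)) {
            lagged = true; // remember to surface a Lagged event later
        }
    }
    assert!(lagged);

    // The consumer drains what fit, then learns it fell behind.
    assert_eq!(rx.try_recv().unwrap(), Event::Normal(0));
    assert_eq!(rx.try_recv().unwrap(), Event::Normal(1));
    if lagged {
        tx.try_send(Event::Lagged).unwrap();
    }
    assert_eq!(rx.try_recv().unwrap(), Event::Lagged);
}
```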