fix(app-server): replay token usage after resume and fork (#18023)

## Problem

When a user resumed or forked a session, the TUI could render the
restored thread history immediately, but it did not receive token usage
until a later model turn emitted a fresh usage event. That left the
context/status UI blank or stale during the exact window where the user
expects resumed state to look complete. Core already reconstructed token
usage from the rollout; the missing behavior was app-server lifecycle
replay to the client that just attached.

## Mental model

Token usage has two representations. The rollout is the durable source
of historical `TokenCount` events, and the core session cache is the
in-memory snapshot reconstructed from that rollout on resume or fork.
App-server v2 clients do not read core state directly; they learn about
usage through `thread/tokenUsage/updated`. The fix keeps those roles
separate: core exposes the restored `TokenUsageInfo`, and app-server
sends one targeted notification after a successful `thread/resume` or
`thread/fork` response when that restored snapshot exists.

This notification is not a new model event. It is a replay of
already-persisted state for the client that just attached. That
distinction matters because using the normal core event path here would
risk duplicating `TokenCount` entries in the rollout and making future
resumes count historical usage twice.

## Non-goals

This change does not add a new protocol method or payload shape. It
reuses the existing v2 `thread/tokenUsage/updated` notification and the
TUI’s existing handler for that notification.

This change does not alter how token usage is computed, accumulated,
compacted, or written during turns. It only exposes the token usage that
resume and fork reconstruction already restored.

This change does not broadcast historical usage replay to every
subscribed client. The replay is intentionally scoped to the connection
that requested resume or fork so already-attached clients are not
surprised by an old usage update while they may be rendering live
activity.

## Tradeoffs

Sending the usage notification after the JSON-RPC response preserves a
clear lifecycle order: the client first receives the thread object, then
receives restored usage for that thread. The tradeoff is that usage is
still a notification rather than part of the `thread/resume` or
`thread/fork` response. That keeps the protocol shape stable and avoids
duplicating usage fields across response types, but clients must
continue listening for notifications after receiving the response.
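On the wire, that ordering looks roughly like the following (a hedged sketch: the payloads are illustrative, the ids are made up, and the exact field casing depends on the protocol's serialization, assumed camelCase here):

```json
{"id": 7, "result": {"thread": {"id": "thr_123", "turns": ["..."]}}}
{"method": "thread/tokenUsage/updated", "params": {"threadId": "thr_123", "turnId": "turn_2", "tokenUsage": {"...": "..."}}}
```

A client that stops reading after the `thread/resume` result will therefore miss the restored usage.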

The helper selects the latest non-in-progress turn id for the replayed
usage notification. This is conservative because restored usage belongs
to completed persisted accounting, not to newly attached in-flight work.
The fallback to the last turn preserves a stable wire payload for
unusual histories, but histories with no meaningful completed turn still
have a weak attribution story.
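The selection rule can be exercised in isolation. This is a toy model, not the real code: `Turn` and `TurnStatus` here are simplified stand-ins for the protocol types, but the scan is the same shape as the helper's.

```rust
// Simplified stand-ins for the protocol's turn types.
enum TurnStatus {
    InProgress,
    Completed,
    Failed,
}

struct Turn {
    id: &'static str,
    status: TurnStatus,
}

// Latest completed-or-failed turn wins; otherwise fall back to the last turn,
// and to an empty id for a thread with no turns at all.
fn fallback_turn_id(turns: &[Turn]) -> &'static str {
    turns
        .iter()
        .rev()
        .find(|t| matches!(t.status, TurnStatus::Completed | TurnStatus::Failed))
        .or_else(|| turns.last())
        .map(|t| t.id)
        .unwrap_or("")
}

fn main() {
    let turns = [
        Turn { id: "t1", status: TurnStatus::Completed },
        Turn { id: "t2", status: TurnStatus::InProgress },
    ];
    // Restored usage is attributed to the completed "t1", not the in-flight "t2".
    println!("{}", fallback_turn_id(&turns)); // → t1
}
```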

## Architecture

Core already seeds `Session` token state from the last persisted rollout
`TokenCount` during `InitialHistory::Resumed` and
`InitialHistory::Forked`. The new core accessor exposes the complete
`TokenUsageInfo` through `CodexThread` without giving app-server direct
session mutation authority.

App-server calls that accessor from three lifecycle paths: cold
`thread/resume`, running-thread resume/rejoin, and `thread/fork`. In
each path, the server sends the normal response first, then calls a
shared helper that converts core usage into
`ThreadTokenUsageUpdatedNotification` and sends it only to the
requesting connection.
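The delivery rule those paths share can be sketched as a toy model (nothing here is the real app-server API; the strings and the `deliver` function are invented for illustration):

```rust
use std::collections::HashMap;

// Toy model of the replay delivery rule: the JSON-RPC response goes out first,
// then the restored-usage notification, and only the requesting connection
// receives either. Panics if the requester is not a known connection.
fn deliver(requester: &str, connections: &[&str]) -> HashMap<String, Vec<&'static str>> {
    let mut inbox: HashMap<String, Vec<&'static str>> = HashMap::new();
    for c in connections {
        inbox.insert((*c).to_string(), Vec::new());
    }
    // Response first preserves the lifecycle ordering the client expects.
    inbox.get_mut(requester).unwrap().push("thread/resume response");
    // Replay is connection-scoped: other subscribers receive nothing.
    inbox.get_mut(requester).unwrap().push("thread/tokenUsage/updated");
    inbox
}

fn main() {
    let inbox = deliver("conn-a", &["conn-a", "conn-b"]);
    println!("{:?}", inbox["conn-a"]); // response, then notification
    println!("{:?}", inbox["conn-b"]); // empty: replay is not broadcast
}
```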

The tests build fake rollouts with a user turn plus a persisted token
usage event. They then exercise `thread/resume` and `thread/fork`
without starting another model turn, proving that restored usage arrives
before any next-turn token event could be produced.

## Observability

The primary debug path is the app-server JSON-RPC stream. After
`thread/resume` or `thread/fork`, a client should see the response
followed by `thread/tokenUsage/updated` when the source rollout includes
token usage. If the notification is absent, check whether the rollout
contains an `event_msg` payload of type `token_count`, whether core
reconstruction seeded `Session::token_usage_info`, and whether the
connection stayed attached long enough to receive the targeted
notification.
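The first of those checks can be run against the rollout file directly. A minimal sketch, assuming the rollout is JSONL and using a fabricated path and line shaped like the description above:

```shell
# Hypothetical path; real rollouts live wherever the session store keeps them.
rollout=/tmp/sample-rollout.jsonl
# Fake one rollout line shaped like the description: an event_msg of type token_count.
printf '%s\n' '{"type":"event_msg","payload":{"type":"token_count"}}' > "$rollout"
# If this prints 0, replay has nothing to restore and no notification is expected.
grep -c '"type":"token_count"' "$rollout"   # → 1
```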

The notification is sent through the existing
`OutgoingMessageSender::send_server_notification_to_connections` path,
so existing app-server tracing around server notifications still
applies. Because this is a replay, not a model turn event, debugging
should start at the resume/fork handlers rather than the turn event
translation in `bespoke_event_handling`.

## Tests

The focused regression coverage is `cargo test -p codex-app-server
emits_restored_token_usage`, which covers both resume and fork. The core
reconstruction guard is `cargo test -p codex-core
record_initial_history_seeds_token_info_from_rollout`.

Formatting and lint/fix passes were run with `just fmt`, `just fix -p
codex-core`, and `just fix -p codex-app-server`. Full crate test runs
surfaced pre-existing, unrelated failures in command execution and
plugin marketplace tests; the new token usage tests passed in focused
runs, and the app-server suite passed them before hitting the unrelated
command execution failure.
Author: Felipe Coury
Date: 2026-04-16 17:29:34 -03:00
Committed-by: GitHub
Parent: ea34c6ed8d
Commit: ec8d4bfc77

10 changed files with 622 additions and 4 deletions

@@ -0,0 +1,133 @@
//! Replays persisted token usage snapshots when a client attaches to an existing thread.
//!
//! The message processor decides when replay is allowed and preserves JSON-RPC response
//! ordering. This module owns notification construction and the attribution rules that
//! map the latest persisted `TokenCount` back to a v2 turn id.
//!
//! Rollout histories can contain explicit turn ids or generated turn ids. When explicit
//! ids do not match the rebuilt thread, replay falls back to the active turn position at
//! the time the `TokenCount` was persisted so the notification still targets the
//! corresponding rebuilt turn.
use std::path::Path;
use std::sync::Arc;

use codex_app_server_protocol::ServerNotification;
use codex_app_server_protocol::Thread;
use codex_app_server_protocol::ThreadHistoryBuilder;
use codex_app_server_protocol::ThreadTokenUsage;
use codex_app_server_protocol::ThreadTokenUsageUpdatedNotification;
use codex_app_server_protocol::TurnStatus;
use codex_core::CodexThread;
use codex_protocol::ThreadId;
use codex_protocol::protocol::EventMsg;
use codex_protocol::protocol::RolloutItem;

use crate::codex_message_processor::read_rollout_items_from_rollout;
use crate::outgoing_message::ConnectionId;
use crate::outgoing_message::OutgoingMessageSender;

/// Sends a restored token usage update to the connection that attached to a thread.
///
/// This is lifecycle replay rather than a model event: the rollout already contains
/// the original `TokenCount`, and emitting through `send_event` here would duplicate
/// persisted usage records. Keeping this helper connection-scoped also avoids
/// surprising other subscribers with a historical usage update while they may be
/// rendering live turn events.
pub(super) async fn send_thread_token_usage_update_to_connection(
    outgoing: &Arc<OutgoingMessageSender>,
    connection_id: ConnectionId,
    thread_id: ThreadId,
    thread: &Thread,
    conversation: &CodexThread,
    token_usage_turn_id: Option<String>,
) {
    let Some(info) = conversation.token_usage_info().await else {
        return;
    };
    let notification = ThreadTokenUsageUpdatedNotification {
        thread_id: thread_id.to_string(),
        turn_id: token_usage_turn_id.unwrap_or_else(|| latest_token_usage_turn_id(thread)),
        token_usage: ThreadTokenUsage::from(info),
    };
    outgoing
        .send_server_notification_to_connections(
            &[connection_id],
            ServerNotification::ThreadTokenUsageUpdated(notification),
        )
        .await;
}

pub(super) async fn latest_token_usage_turn_id_for_thread_path(thread: &Thread) -> Option<String> {
    let rollout_path = thread.path.as_deref()?;
    latest_token_usage_turn_id_from_rollout_path(rollout_path, thread).await
}

pub(super) async fn latest_token_usage_turn_id_from_rollout_path(
    rollout_path: &Path,
    thread: &Thread,
) -> Option<String> {
    let rollout_items = read_rollout_items_from_rollout(rollout_path).await.ok()?;
    latest_token_usage_turn_id_from_rollout_items(&rollout_items, thread)
}

/// Identifies the turn that was active when a `TokenCount` record appeared.
///
/// The id is preferred when it still appears in the rebuilt thread. The position is a
/// fallback for histories whose implicit turn ids are regenerated during reconstruction.
struct TokenUsageTurnOwner {
    id: String,
    position: Option<usize>,
}

pub(super) fn latest_token_usage_turn_id_from_rollout_items(
    rollout_items: &[RolloutItem],
    thread: &Thread,
) -> Option<String> {
    let owner = latest_token_usage_turn_owner_from_rollout_items(rollout_items)?;
    if thread.turns.iter().any(|turn| turn.id == owner.id) {
        return Some(owner.id);
    }
    owner
        .position
        .and_then(|position| thread.turns.get(position))
        .map(|turn| turn.id.clone())
}

fn latest_token_usage_turn_owner_from_rollout_items(
    rollout_items: &[RolloutItem],
) -> Option<TokenUsageTurnOwner> {
    let mut builder = ThreadHistoryBuilder::new();
    let mut token_usage_turn_owner = None;
    for item in rollout_items {
        if matches!(item, RolloutItem::EventMsg(EventMsg::TokenCount(_))) {
            token_usage_turn_owner = builder
                .active_turn_snapshot()
                .map(|turn| TokenUsageTurnOwner {
                    id: turn.id,
                    position: builder.active_turn_position(),
                });
        }
        builder.handle_rollout_item(item);
    }
    token_usage_turn_owner
}

/// Chooses a fallback turn id that should own a replayed token usage update.
///
/// Normal replay derives the owner from the rollout position of the latest
/// `TokenCount` event. This fallback only preserves a stable wire shape for
/// unusual histories where that rollout information cannot be read.
fn latest_token_usage_turn_id(thread: &Thread) -> String {
    thread
        .turns
        .iter()
        .rev()
        .find(|turn| matches!(turn.status, TurnStatus::Completed | TurnStatus::Failed))
        .or_else(|| thread.turns.last())
        .map(|turn| turn.id.clone())
        .unwrap_or_default()
}