Compare commits


28 Commits

Author SHA1 Message Date
Dylan
780d04b4cd Add custom apply patch toggle 2025-09-14 20:05:26 -07:00
Ahmed Ibrahim
2ad6a37192 Don't show the model for apikey (#3607) 2025-09-15 01:32:18 +00:00
Eric Traut
e5dd7f0934 Fix get_auth_status response when using custom provider (#3581)
This PR addresses an edge-case bug that appears in the VS Code extension
in the following situation:
1. Log in using ChatGPT (using either the CLI or extension). This will
create an `auth.json` file.
2. Manually modify `config.toml` to specify a custom provider.
3. Start a fresh copy of the VS Code extension.

The profile menu in the VS Code extension will indicate that you are
logged in using ChatGPT even though you're not.

This is caused by the `get_auth_status` method returning an
`auth_method: 'chatgpt'` when a custom provider is configured and it
doesn't use OpenAI auth (i.e. `requires_openai_auth` is false). The
method should always return `auth_method: None` if
`requires_openai_auth` is false.

The same bug also causes the NUX (new user experience) screen to be
displayed in the VS Code extension in this situation.
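The guard described above can be sketched as follows. This is an illustrative stand-in, not the actual codex-rs API: the `AuthMethod`, `Provider`, and `auth_method_for` names and shapes are assumptions.

```rust
// Hypothetical sketch of the get_auth_status fix; names are illustrative.
#[derive(Debug, PartialEq)]
enum AuthMethod {
    ChatGpt,
    ApiKey,
}

struct Provider {
    requires_openai_auth: bool,
}

// Before the fix, a cached ChatGPT login leaked through even when the
// configured custom provider did not use OpenAI auth at all.
fn auth_method_for(provider: &Provider, cached_login: Option<AuthMethod>) -> Option<AuthMethod> {
    if !provider.requires_openai_auth {
        // Custom provider with its own auth: always report no OpenAI auth method.
        return None;
    }
    cached_login
}

fn main() {
    let custom = Provider { requires_openai_auth: false };
    // A stale auth.json must not make the extension claim a ChatGPT login.
    assert_eq!(auth_method_for(&custom, Some(AuthMethod::ChatGpt)), None);

    let openai = Provider { requires_openai_auth: true };
    assert_eq!(
        auth_method_for(&openai, Some(AuthMethod::ChatGpt)),
        Some(AuthMethod::ChatGpt)
    );
    println!("ok");
}
```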
2025-09-14 18:27:02 -07:00
Dylan
b6673838e8 fix: model family and apply_patch consistency (#3603)
## Summary
Resolves a merge conflict between #3597 and #3560, and adds tests to
double check our apply_patch configuration.

## Testing
- [x] Added unit tests

---------

Co-authored-by: dedrisian-oai <dedrisian@openai.com>
2025-09-14 18:20:37 -07:00
Fouad Matin
1823906215 fix(tui): update full-auto to default preset (#3608)
Update `--full-auto` to use the default preset
2025-09-14 18:14:11 -07:00
Fouad Matin
5185d69f13 fix(core): flaky test completed_commands_do_not_persist_sessions (#3596)
Fix flaky test:
```
        FAIL [   2.641s] codex-core unified_exec::tests::completed_commands_do_not_persist_sessions
  stdout ───

    running 1 test
    test unified_exec::tests::completed_commands_do_not_persist_sessions ... FAILED

    failures:

    failures:
        unified_exec::tests::completed_commands_do_not_persist_sessions

    test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 235 filtered out; finished in 2.63s
    
  stderr ───

    thread 'unified_exec::tests::completed_commands_do_not_persist_sessions' panicked at core/src/unified_exec/mod.rs:582:9:
    assertion failed: result.output.contains("codex")
```
2025-09-14 18:04:05 -07:00
pakrym-oai
4dffa496ac Skip frames files in codespell (#3606)
Fixes CI
2025-09-14 18:00:23 -07:00
Ahmed Ibrahim
ce984b2c71 Add session header to chat widget (#3592)
<img width="570" height="332" alt="image"
src="https://github.com/user-attachments/assets/ca6dfcb0-f3a1-4b3e-978d-4f844ba77527"
/>
2025-09-14 17:53:50 -07:00
pakrym-oai
c47febf221 Append full raw reasoning event text (#3605)
We don't emit correct delta events and only get the full reasoning back,
so append it to history.
2025-09-14 17:50:06 -07:00
jimmyfraiture2
76c37c5493 feat: UI animation (#3590)
Add NUX animation

---------

Co-authored-by: Thibault Sottiaux <tibo@openai.com>
2025-09-14 17:42:17 -07:00
dedrisian-oai
2aa84b8891 Fix EventMsg Optional (#3604) 2025-09-15 00:34:33 +00:00
pakrym-oai
9177bdae5e Only one branch for swiftfox (#3601)
Make each model family have a single branch.
2025-09-14 16:56:22 -07:00
Ahmed Ibrahim
a30e5e40ee enable-resume (#3537)
Adds the ability to resume conversations.
We have one verb: `resume`.

Behavior:

`tui`:
`codex resume`: opens session picker
`codex resume --last`: continue the last conversation
`codex resume <session id>`: continue conversation with `session id`

`exec`:
`codex resume --last`: continue last conversation
`codex resume <session id>`: continue conversation with `session id`

Implementation:
- Added a function to find the session path in `~/.codex/sessions/` by
`UUID`. This is used when resuming with a session id.
- Added the above-mentioned flags
- Added lots of tests
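The lookup described above can be sketched with the standard library alone. This is a minimal, assumption-laden sketch: the real helper's name, signature, and file layout under `~/.codex/sessions/` are not shown here, so `find_session_path` and the `rollout-<uuid>.jsonl` naming are hypothetical.

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Hypothetical helper: recursively scan a sessions root for a file whose
// name contains the session UUID, returning the first match.
fn find_session_path(root: &Path, uuid: &str) -> Option<PathBuf> {
    for entry in fs::read_dir(root).ok()? {
        let path = entry.ok()?.path();
        if path.is_dir() {
            if let Some(found) = find_session_path(&path, uuid) {
                return Some(found);
            }
        } else if path.file_name()?.to_string_lossy().contains(uuid) {
            return Some(path);
        }
    }
    None
}

fn main() {
    // Build a throwaway sessions-like layout under the OS temp dir.
    let root = std::env::temp_dir().join("codex-sessions-demo");
    let day = root.join("2025").join("09").join("14");
    fs::create_dir_all(&day).expect("create dirs");

    let uuid = "123e4567-e89b-12d3-a456-426614174000";
    let file = day.join(format!("rollout-{uuid}.jsonl"));
    fs::write(&file, "{}").expect("write file");

    assert_eq!(find_session_path(&root, uuid), Some(file));
    println!("found");
}
```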
2025-09-14 19:33:19 -04:00
jimmyfraiture2
99e1d33bd1 feat: update model save (#3589)
Change model save so it writes globally or to the profile by default,
depending on the session.
2025-09-14 16:25:43 -07:00
dedrisian-oai
b2f6fc3b9a Fix flaky windows test (#3564)
There are exactly 4 types of flaky tests in Windows x86 right now:

1. `review_input_isolated_from_parent_history` => Times out waiting for
closing events
2. `review_does_not_emit_agent_message_on_structured_output` => Times
out waiting for closing events
3. `auto_compact_runs_after_token_limit_hit` => Times out waiting for
closing events
4. `auto_compact_runs_after_token_limit_hit` => Also has a problem where
auto compact should add a third request, but receives 4 requests.

1, 2, and 3 seem to be solved by increasing the thread count on the
Windows runner from 2 to 4.

We don't yet know why #4 is happening, but it is probably also caused by
WireMock issues on Windows leading to races.
2025-09-14 23:20:25 +00:00
pakrym-oai
51f88fd04a Fix swiftfox model selector (#3598)
The model shouldn't be saved with a suffix. The effort is a separate
field.
2025-09-14 23:12:21 +00:00
pakrym-oai
916fdc2a37 Add per-model-family prompts (#3597)
Allows more flexibility in defining prompts.
2025-09-14 22:45:15 +00:00
pakrym-oai
863d9c237e Include command output when sending timeout to model (#3576)
Being able to see the output helps the model decide how to handle the
timeout.
2025-09-14 14:38:26 -07:00
Ahmed Ibrahim
7e1543f5d8 Align user history message prefix width (#3467)
<img width="798" height="340" alt="image"
src="https://github.com/user-attachments/assets/fdd63f40-9c94-4e3a-bce5-2d2f333a384f"
/>
2025-09-14 20:51:08 +00:00
Ahmed Ibrahim
d701eb32d7 Gate model upgrade prompt behind ChatGPT auth (#3586)
- Refresh the login_state after onboarding.
- The upgrade prompt should only be offered when logged in with ChatGPT.
2025-09-14 13:08:24 -07:00
Michael Bolin
9baae77533 chore: update output_lines() to take a struct instead of a sequence of bools (#3591)
I found the boolean literals hard to follow.
2025-09-14 13:07:38 -07:00
Ahmed Ibrahim
e932722292 Add spacing before queued status indicator messages (#3474)
<img width="687" height="174" alt="image"
src="https://github.com/user-attachments/assets/e68f5a29-cb2d-4aa6-9cbd-f492878d8d0a"
/>
2025-09-14 15:37:28 -04:00
Ahmed Ibrahim
bbea6bbf7e Handle resuming/forking after compact (#3533)
We need to construct the history differently when a compaction happens.
For this, we only consider the history after the compaction and convert
the compaction into a response item.

This should change to use `build_compact_history` once #3446 is
merged.
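The replay idea above can be sketched as follows. The enum and the rebuild logic are simplified stand-ins, assuming a compaction replaces everything recorded so far with its summary; the real code also re-adds initial context and user messages.

```rust
// Simplified stand-in for replaying rollout items into a history.
#[derive(Clone, Debug, PartialEq)]
enum RolloutItem {
    Response(String),
    // Summary message produced by a compaction.
    Compacted(String),
}

// Replay items in order; when a compaction is hit, collapse the history
// recorded so far into a single summary item and keep appending after it.
fn reconstruct_history(items: &[RolloutItem]) -> Vec<String> {
    let mut history = Vec::new();
    for item in items {
        match item {
            RolloutItem::Response(text) => history.push(text.clone()),
            RolloutItem::Compacted(summary) => {
                history = vec![format!("[compacted] {summary}")];
            }
        }
    }
    history
}

fn main() {
    let items = vec![
        RolloutItem::Response("hello".into()),
        RolloutItem::Response("world".into()),
        RolloutItem::Compacted("two greetings".into()),
        RolloutItem::Response("after".into()),
    ];
    assert_eq!(
        reconstruct_history(&items),
        vec!["[compacted] two greetings".to_string(), "after".to_string()]
    );
    println!("ok");
}
```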
2025-09-14 13:23:31 +00:00
Jeremy Rose
4891ee29c5 refactor transcript view to handle HistoryCells (#3538)
No (intended) functional change.

This refactors the transcript view to hold a list of HistoryCells
instead of a list of Lines. This simplifies much of the logic and makes
it more robust, as well as laying the groundwork for future changes, e.g.
live-updating history cells in the transcript.

Similar to #2879 in goal. Fixes #2755.
2025-09-13 19:23:14 -07:00
Thibault Sottiaux
bac8a427f3 chore: default swiftfox models to experimental reasoning summaries (#3560) 2025-09-13 23:40:54 +00:00
Thibault Sottiaux
14ab1063a7 chore: rename 2025-09-12 23:17:41 -07:00
Thibault Sottiaux
a77364bbaa chore: remove descriptions 2025-09-12 22:55:40 -07:00
Thibault Sottiaux
19b4ed3c96 w 2025-09-12 22:44:05 -07:00
367 changed files with 11132 additions and 1293 deletions


@@ -25,3 +25,4 @@ jobs:
uses: codespell-project/actions-codespell@406322ec52dd7b488e48c1c4b82e2a8b3a1bf630 # v2
with:
ignore_words_file: .codespellignore
skip: frame*.txt

codex-rs/Cargo.lock generated

@@ -666,6 +666,7 @@ dependencies = [
"bytes",
"chrono",
"codex-apply-patch",
"codex-file-search",
"codex-mcp-client",
"codex-protocol",
"core_test_support",
@@ -733,6 +734,8 @@ dependencies = [
"tokio",
"tracing",
"tracing-subscriber",
"uuid",
"walkdir",
"wiremock",
]


@@ -73,6 +73,9 @@ enum Subcommand {
#[clap(visible_alias = "a")]
Apply(ApplyCommand),
/// Resume a previous interactive session (picker by default; use --last to continue the most recent).
Resume(ResumeCommand),
/// Internal: generate TypeScript protocol bindings.
#[clap(hide = true)]
GenerateTs(GenerateTsCommand),
@@ -85,6 +88,18 @@ struct CompletionCommand {
shell: Shell,
}
#[derive(Debug, Parser)]
struct ResumeCommand {
/// Conversation/session id (UUID). When provided, resumes this session.
/// If omitted, use --last to pick the most recent recorded session.
#[arg(value_name = "SESSION_ID")]
session_id: Option<String>,
/// Continue the most recent session without showing the picker.
#[arg(long = "last", default_value_t = false, conflicts_with = "session_id")]
last: bool,
}
#[derive(Debug, Parser)]
struct DebugArgs {
#[command(subcommand)]
@@ -143,26 +158,54 @@ fn main() -> anyhow::Result<()> {
}
async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
let cli = MultitoolCli::parse();
let MultitoolCli {
config_overrides: root_config_overrides,
mut interactive,
subcommand,
} = MultitoolCli::parse();
match cli.subcommand {
match subcommand {
None => {
let mut tui_cli = cli.interactive;
prepend_config_flags(&mut tui_cli.config_overrides, cli.config_overrides);
let usage = codex_tui::run_main(tui_cli, codex_linux_sandbox_exe).await?;
prepend_config_flags(
&mut interactive.config_overrides,
root_config_overrides.clone(),
);
let usage = codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
if !usage.is_zero() {
println!("{}", codex_core::protocol::FinalOutput::from(usage));
}
}
Some(Subcommand::Exec(mut exec_cli)) => {
prepend_config_flags(&mut exec_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut exec_cli.config_overrides,
root_config_overrides.clone(),
);
codex_exec::run_main(exec_cli, codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Mcp) => {
codex_mcp_server::run_main(codex_linux_sandbox_exe, cli.config_overrides).await?;
codex_mcp_server::run_main(codex_linux_sandbox_exe, root_config_overrides.clone())
.await?;
}
Some(Subcommand::Resume(ResumeCommand { session_id, last })) => {
// Start with the parsed interactive CLI so resume shares the same
// configuration surface area as `codex` without additional flags.
let resume_session_id = session_id;
interactive.resume_picker = resume_session_id.is_none() && !last;
interactive.resume_last = last;
interactive.resume_session_id = resume_session_id;
// Propagate any root-level config overrides (e.g. `-c key=value`).
prepend_config_flags(
&mut interactive.config_overrides,
root_config_overrides.clone(),
);
codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Login(mut login_cli)) => {
prepend_config_flags(&mut login_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut login_cli.config_overrides,
root_config_overrides.clone(),
);
match login_cli.action {
Some(LoginSubcommand::Status) => {
run_login_status(login_cli.config_overrides).await;
@@ -177,11 +220,17 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
}
}
Some(Subcommand::Logout(mut logout_cli)) => {
prepend_config_flags(&mut logout_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut logout_cli.config_overrides,
root_config_overrides.clone(),
);
run_logout(logout_cli.config_overrides).await;
}
Some(Subcommand::Proto(mut proto_cli)) => {
prepend_config_flags(&mut proto_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut proto_cli.config_overrides,
root_config_overrides.clone(),
);
proto::run_main(proto_cli).await?;
}
Some(Subcommand::Completion(completion_cli)) => {
@@ -189,7 +238,10 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
}
Some(Subcommand::Debug(debug_args)) => match debug_args.cmd {
DebugCommand::Seatbelt(mut seatbelt_cli) => {
prepend_config_flags(&mut seatbelt_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut seatbelt_cli.config_overrides,
root_config_overrides.clone(),
);
codex_cli::debug_sandbox::run_command_under_seatbelt(
seatbelt_cli,
codex_linux_sandbox_exe,
@@ -197,7 +249,10 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
.await?;
}
DebugCommand::Landlock(mut landlock_cli) => {
prepend_config_flags(&mut landlock_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut landlock_cli.config_overrides,
root_config_overrides.clone(),
);
codex_cli::debug_sandbox::run_command_under_landlock(
landlock_cli,
codex_linux_sandbox_exe,
@@ -206,7 +261,10 @@ async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()
}
},
Some(Subcommand::Apply(mut apply_cli)) => {
prepend_config_flags(&mut apply_cli.config_overrides, cli.config_overrides);
prepend_config_flags(
&mut apply_cli.config_overrides,
root_config_overrides.clone(),
);
run_apply_command(apply_cli, None).await?;
}
Some(Subcommand::GenerateTs(gen_cli)) => {


@@ -1,4 +1,6 @@
use codex_core::config::SWIFTFOX_MEDIUM_MODEL;
use codex_core::protocol_config_types::ReasoningEffort;
use codex_protocol::mcp_protocol::AuthMode;
/// A simple preset pairing a model slug with a reasoning effort.
#[derive(Debug, Clone, Copy)]
@@ -15,47 +17,65 @@ pub struct ModelPreset {
pub effort: Option<ReasoningEffort>,
}
/// Built-in list of model presets that pair a model with a reasoning effort.
///
/// Keep this UI-agnostic so it can be reused by both TUI and MCP server.
pub fn builtin_model_presets() -> &'static [ModelPreset] {
// Order reflects effort from minimal to high.
const PRESETS: &[ModelPreset] = &[
ModelPreset {
id: "gpt-5-minimal",
label: "gpt-5 minimal",
description: "— fastest responses with limited reasoning; ideal for coding, instructions, or lightweight tasks",
model: "gpt-5",
effort: Some(ReasoningEffort::Minimal),
},
ModelPreset {
id: "gpt-5-low",
label: "gpt-5 low",
description: "— balances speed with some reasoning; useful for straightforward queries and short explanations",
model: "gpt-5",
effort: Some(ReasoningEffort::Low),
},
ModelPreset {
id: "gpt-5-medium",
label: "gpt-5 medium",
description: "— default setting; provides a solid balance of reasoning depth and latency for general-purpose tasks",
model: "gpt-5",
effort: Some(ReasoningEffort::Medium),
},
ModelPreset {
id: "gpt-5-high",
label: "gpt-5 high",
description: "— maximizes reasoning depth for complex or ambiguous problems",
model: "gpt-5",
effort: Some(ReasoningEffort::High),
},
ModelPreset {
id: "gpt-5-high-new",
label: "gpt-5 high new",
description: "— our latest release tuned to rely on the model's built-in reasoning defaults",
model: "gpt-5-high-new",
effort: None,
},
];
PRESETS
const PRESETS: &[ModelPreset] = &[
ModelPreset {
id: "swiftfox-low",
label: "swiftfox low",
description: "",
model: "swiftfox",
effort: Some(ReasoningEffort::Low),
},
ModelPreset {
id: "swiftfox-medium",
label: "swiftfox medium",
description: "",
model: "swiftfox",
effort: None,
},
ModelPreset {
id: "swiftfox-high",
label: "swiftfox high",
description: "",
model: "swiftfox",
effort: Some(ReasoningEffort::High),
},
ModelPreset {
id: "gpt-5-minimal",
label: "gpt-5 minimal",
description: "— fastest responses with limited reasoning; ideal for coding, instructions, or lightweight tasks",
model: "gpt-5",
effort: Some(ReasoningEffort::Minimal),
},
ModelPreset {
id: "gpt-5-low",
label: "gpt-5 low",
description: "— balances speed with some reasoning; useful for straightforward queries and short explanations",
model: "gpt-5",
effort: Some(ReasoningEffort::Low),
},
ModelPreset {
id: "gpt-5-medium",
label: "gpt-5 medium",
description: "— default setting; provides a solid balance of reasoning depth and latency for general-purpose tasks",
model: "gpt-5",
effort: Some(ReasoningEffort::Medium),
},
ModelPreset {
id: "gpt-5-high",
label: "gpt-5 high",
description: "— maximizes reasoning depth for complex or ambiguous problems",
model: "gpt-5",
effort: Some(ReasoningEffort::High),
},
];
pub fn builtin_model_presets(auth_mode: Option<AuthMode>) -> Vec<ModelPreset> {
match auth_mode {
Some(AuthMode::ApiKey) => PRESETS
.iter()
.copied()
.filter(|p| !p.model.contains(SWIFTFOX_MEDIUM_MODEL))
.collect(),
_ => PRESETS.to_vec(),
}
}


@@ -19,6 +19,7 @@ base64 = "0.22"
bytes = "1.10.1"
chrono = { version = "0.4", features = ["serde"] }
codex-apply-patch = { path = "../apply-patch" }
codex-file-search = { path = "../file-search" }
codex-mcp-client = { path = "../mcp-client" }
codex-protocol = { path = "../protocol" }
dirs = "6"


@@ -10,15 +10,12 @@ use codex_protocol::models::ResponseItem;
use futures::Stream;
use serde::Serialize;
use std::borrow::Cow;
use std::ops::Deref;
use std::pin::Pin;
use std::task::Context;
use std::task::Poll;
use tokio::sync::mpsc;
/// The `instructions` field in the payload sent to a model should always start
/// with this content.
const BASE_INSTRUCTIONS: &str = include_str!("../prompt.md");
/// Review thread system prompt. Edit `core/src/review_prompt.md` to customize.
pub const REVIEW_PROMPT: &str = include_str!("../review_prompt.md");
@@ -41,11 +38,12 @@ impl Prompt {
let base = self
.base_instructions_override
.as_deref()
.unwrap_or(BASE_INSTRUCTIONS);
.unwrap_or(model.base_instructions.deref());
let mut sections: Vec<&str> = vec![base];
// When there are no custom instructions, add apply_patch_tool_instructions if either:
// - the model needs special instructions (4.1), or
// When there are no custom instructions, add apply_patch_tool_instructions if:
// - the model needs special instructions (4.1)
// AND
// - there is no apply_patch tool present
let is_apply_patch_tool_present = self.tools.iter().any(|tool| match tool {
OpenAiTool::Function(f) => f.name == "apply_patch",
@@ -53,7 +51,8 @@ impl Prompt {
_ => false,
});
if self.base_instructions_override.is_none()
&& (model.needs_special_apply_patch_instructions || !is_apply_patch_tool_present)
&& model.needs_special_apply_patch_instructions
&& !is_apply_patch_tool_present
{
sections.push(APPLY_PATCH_TOOL_INSTRUCTIONS);
}
@@ -177,18 +176,64 @@ impl Stream for ResponseStream {
#[cfg(test)]
mod tests {
use crate::model_family::find_family_for_model;
use pretty_assertions::assert_eq;
use super::*;
struct InstructionsTestCase {
pub slug: &'static str,
pub expects_apply_patch_instructions: bool,
}
#[test]
fn get_full_instructions_no_user_content() {
let prompt = Prompt {
..Default::default()
};
let expected = format!("{BASE_INSTRUCTIONS}\n{APPLY_PATCH_TOOL_INSTRUCTIONS}");
let model_family = find_family_for_model("gpt-4.1").expect("known model slug");
let full = prompt.get_full_instructions(&model_family);
assert_eq!(full, expected);
let test_cases = vec![
InstructionsTestCase {
slug: "gpt-3.5",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-4.1",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-4o",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-5",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "codex-mini-latest",
expects_apply_patch_instructions: true,
},
InstructionsTestCase {
slug: "gpt-oss:120b",
expects_apply_patch_instructions: false,
},
InstructionsTestCase {
slug: "swiftfox",
expects_apply_patch_instructions: false,
},
];
for test_case in test_cases {
let model_family = find_family_for_model(test_case.slug).expect("known model slug");
let expected = if test_case.expects_apply_patch_instructions {
format!(
"{}\n{}",
model_family.clone().base_instructions,
APPLY_PATCH_TOOL_INSTRUCTIONS
)
} else {
model_family.clone().base_instructions
};
let full = prompt.get_full_instructions(&model_family);
assert_eq!(full, expected);
}
}
#[test]


@@ -18,6 +18,7 @@ use codex_apply_patch::MaybeApplyPatchVerified;
use codex_apply_patch::maybe_parse_apply_patch_verified;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::protocol::ConversationPathResponseEvent;
use codex_protocol::protocol::ExitedReviewModeEvent;
use codex_protocol::protocol::ReviewRequest;
use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::TaskStartedEvent;
@@ -128,9 +129,10 @@ use codex_protocol::models::ResponseInputItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::models::ShellToolCallParams;
use codex_protocol::protocol::InitialHistory;
use uuid::Uuid;
mod compact;
use self::compact::build_compacted_history;
use self::compact::collect_user_messages;
// A convenience extension trait for acquiring mutex locks where poisoning is
// unrecoverable and should abort the program. This avoids scattered `.unwrap()`
@@ -206,7 +208,7 @@ impl Codex {
config.clone(),
auth_manager.clone(),
tx_event.clone(),
conversation_history.clone(),
conversation_history,
)
.await
.map_err(|e| {
@@ -565,9 +567,10 @@ impl Session {
let persist = matches!(conversation_history, InitialHistory::Forked(_));
// Always add response items to conversation history
let response_items = conversation_history.get_response_items();
if !response_items.is_empty() {
self.record_into_history(&response_items);
let reconstructed_history =
self.reconstruct_history_from_rollout(turn_context, &rollout_items);
if !reconstructed_history.is_empty() {
self.record_into_history(&reconstructed_history);
}
// If persisting, persist all rollout items as-is (recorder filters)
@@ -679,6 +682,33 @@ impl Session {
self.persist_rollout_response_items(items).await;
}
fn reconstruct_history_from_rollout(
&self,
turn_context: &TurnContext,
rollout_items: &[RolloutItem],
) -> Vec<ResponseItem> {
let mut history = ConversationHistory::new();
for item in rollout_items {
match item {
RolloutItem::ResponseItem(response_item) => {
history.record_items(std::iter::once(response_item));
}
RolloutItem::Compacted(compacted) => {
let snapshot = history.contents();
let user_messages = collect_user_messages(&snapshot);
let rebuilt = build_compacted_history(
self.build_initial_context(turn_context),
&user_messages,
&compacted.message,
);
history.replace(rebuilt);
}
_ => {}
}
}
history.contents()
}
/// Append ResponseItems to the in-memory conversation history only.
fn record_into_history(&self, items: &[ResponseItem]) {
self.state
@@ -771,7 +801,6 @@ impl Session {
command_for_display,
cwd,
apply_patch,
user_initiated_shell_command,
} = exec_command_context;
let msg = match apply_patch {
Some(ApplyPatchCommandContext {
@@ -794,7 +823,6 @@ impl Session {
.into_iter()
.map(Into::into)
.collect(),
user_initiated_shell_command,
}),
};
let event = Event {
@@ -818,6 +846,7 @@ impl Session {
aggregated_output,
duration,
exit_code,
timed_out: _,
} = output;
// Send full stdout/stderr to clients; do not truncate.
let stdout = stdout.text.clone();
@@ -893,6 +922,7 @@ impl Session {
let output_stderr;
let borrowed: &ExecToolCallOutput = match &result {
Ok(output) => output,
Err(CodexErr::Sandbox(SandboxErr::Timeout { output })) => output,
Err(e) => {
output_stderr = ExecToolCallOutput {
exit_code: -1,
@@ -900,6 +930,7 @@ impl Session {
stderr: StreamOutput::new(get_error_message_ui(e)),
aggregated_output: StreamOutput::new(get_error_message_ui(e)),
duration: Duration::default(),
timed_out: false,
};
&output_stderr
}
@@ -1032,7 +1063,6 @@ pub(crate) struct ExecCommandContext {
pub(crate) command_for_display: Vec<String>,
pub(crate) cwd: PathBuf,
pub(crate) apply_patch: Option<ApplyPatchCommandContext>,
pub(crate) user_initiated_shell_command: bool,
}
#[derive(Clone, Debug)]
@@ -1478,101 +1508,6 @@ async fn submission_loop(
};
sess.send_event(event).await;
}
Op::RunUserShellCommand { command } => {
// Spawn a cancellable one-off shell command task so we can process
// further Ops (e.g., Interrupt) while it runs.
let sess_clone = sess.clone();
let turn_context = Arc::clone(&turn_context);
let sub_id = sub.id.clone();
let handle = tokio::spawn(async move {
// Announce a running task so the UI can show a spinner and block input.
let event = Event {
id: sub_id.clone(),
msg: EventMsg::TaskStarted(TaskStartedEvent {
model_context_window: turn_context.client.get_model_context_window(),
}),
};
sess_clone.send_event(event).await;
// Build a shell invocation in the user's default shell.
let shell_invocation = sess_clone
.user_shell
// Why we pass a ["bash", "-lc", <script>] sentinel instead of the raw command:
// - The shell adapter (core/src/shell.rs) first calls `strip_bash_lc`. When it sees this
// exact shape it extracts <script> and then builds the correct argv for the user shell
// (e.g., `/bin/zsh -lc "source ~/.zshrc && (<script>)"`).
// - If we pass the whole command as a single string (e.g., ["cat Cargo.toml | wc -l"]) the
// adapter may quote it when joining/embedding, and shells can treat the entire value as a
// single program name or a single quoted token.
.format_default_shell_invocation(vec![
"bash".to_string(),
"-lc".to_string(),
command.clone(),
])
.unwrap_or_else(|| vec![command.clone()]);
let params = ExecParams {
command: shell_invocation.clone(),
cwd: turn_context.cwd.clone(),
timeout_ms: None,
env: create_env(&turn_context.shell_environment_policy),
with_escalated_permissions: None,
justification: None,
};
// Use a fresh diff tracker (no patch application expected for ! commands).
let mut turn_diff_tracker = TurnDiffTracker::new();
// Initiated by user, not by the model. Hence, we generate a new call_id.
let call_id = format!("call_{}", Uuid::new_v4());
let exec_ctx = ExecCommandContext {
sub_id: sub_id.clone(),
call_id: call_id.clone(),
command_for_display: shell_invocation,
cwd: params.cwd.clone(),
apply_patch: None,
user_initiated_shell_command: true,
};
// Run without sandboxing or approval — this is a user-initiated command.
// Output is not captured as it's sent to the TUI inside `run_exec_with_events`.
let _ = sess_clone
.run_exec_with_events(
&mut turn_diff_tracker,
exec_ctx,
ExecInvokeArgs {
params,
sandbox_type: SandboxType::None,
sandbox_policy: &turn_context.sandbox_policy,
codex_linux_sandbox_exe: &sess_clone.codex_linux_sandbox_exe,
stdout_stream: Some(StdoutStream {
sub_id: sub_id.clone(),
call_id: call_id.clone(),
tx_event: sess_clone.tx_event.clone(),
}),
},
)
.await;
// Signal completion so the UI regains control.
let complete = Event {
id: sub_id.clone(),
msg: EventMsg::TaskComplete(TaskCompleteEvent {
last_agent_message: None,
}),
};
sess_clone.send_event(complete).await;
})
.abort_handle();
// Track this as the current task so Interrupt can abort it.
sess.set_task(AgentTask {
sess: sess.clone(),
sub_id: sub.id,
handle,
kind: AgentTaskKind::Regular,
});
}
Op::Review { review_request } => {
spawn_review_thread(
sess.clone(),
@@ -2912,7 +2847,6 @@ async fn handle_container_exec_with_params(
changes: convert_apply_patch_to_protocol(&action),
},
),
user_initiated_shell_command: false,
};
let params = maybe_translate_shell_command(params, sess, turn_context);
@@ -2987,15 +2921,12 @@ async fn handle_sandbox_error(
let sub_id = exec_command_context.sub_id.clone();
let cwd = exec_command_context.cwd.clone();
// if the command timed out, we can simply return this failure to the model
if matches!(error, SandboxErr::Timeout) {
if let SandboxErr::Timeout { output } = &error {
let content = format_exec_output(output);
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!(
"command timed out after {} milliseconds",
params.timeout_duration().as_millis()
),
content,
success: Some(false),
},
};
@@ -3120,7 +3051,17 @@ fn format_exec_output_str(exec_output: &ExecToolCallOutput) -> String {
// Head+tail truncation for the model: show the beginning and end with an elision.
// Clients still receive full streams; only this formatted summary is capped.
let s = aggregated_output.text.as_str();
let mut s = &aggregated_output.text;
let prefixed_str: String;
if exec_output.timed_out {
prefixed_str = format!(
"command timed out after {} milliseconds\n",
exec_output.duration.as_millis()
) + s;
s = &prefixed_str;
}
let total_lines = s.lines().count();
if s.len() <= MODEL_FORMAT_MAX_BYTES && total_lines <= MODEL_FORMAT_MAX_LINES {
return s.to_string();
@@ -3163,6 +3104,7 @@ fn format_exec_output_str(exec_output: &ExecToolCallOutput) -> String {
// Build final string respecting byte budgets
let head_part = take_bytes_at_char_boundary(&head_lines_text, head_budget);
let mut result = String::with_capacity(MODEL_FORMAT_MAX_BYTES.min(s.len()));
result.push_str(head_part);
result.push_str(&marker);
@@ -3308,11 +3250,11 @@ fn convert_call_tool_result_to_function_call_output_payload(
async fn exit_review_mode(
session: Arc<Session>,
task_sub_id: String,
res: Option<ReviewOutputEvent>,
review_output: Option<ReviewOutputEvent>,
) {
let event = Event {
id: task_sub_id,
msg: EventMsg::ExitedReviewMode(res),
msg: EventMsg::ExitedReviewMode(ExitedReviewModeEvent { review_output }),
};
session.send_event(event).await;
}
@@ -3320,18 +3262,59 @@ async fn exit_review_mode(
#[cfg(test)]
mod tests {
use super::*;
use crate::config::ConfigOverrides;
use crate::config::ConfigToml;
use crate::protocol::CompactedItem;
use crate::protocol::InitialHistory;
use crate::protocol::ResumedHistory;
use codex_protocol::models::ContentItem;
use mcp_types::ContentBlock;
use mcp_types::TextContent;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration as StdDuration;
fn text_block(s: &str) -> ContentBlock {
ContentBlock::TextContent(TextContent {
annotations: None,
text: s.to_string(),
r#type: "text".to_string(),
})
#[test]
fn reconstruct_history_matches_live_compactions() {
let (session, turn_context) = make_session_and_context();
let (rollout_items, expected) = sample_rollout(&session, &turn_context);
let reconstructed = session.reconstruct_history_from_rollout(&turn_context, &rollout_items);
assert_eq!(expected, reconstructed);
}
#[test]
fn record_initial_history_reconstructs_resumed_transcript() {
let (session, turn_context) = make_session_and_context();
let (rollout_items, expected) = sample_rollout(&session, &turn_context);
tokio_test::block_on(session.record_initial_history(
&turn_context,
InitialHistory::Resumed(ResumedHistory {
conversation_id: ConversationId::default(),
history: rollout_items,
rollout_path: PathBuf::from("/tmp/resume.jsonl"),
}),
));
let actual = session.state.lock_unchecked().history.contents();
assert_eq!(expected, actual);
}
#[test]
fn record_initial_history_reconstructs_forked_transcript() {
let (session, turn_context) = make_session_and_context();
let (rollout_items, expected) = sample_rollout(&session, &turn_context);
tokio_test::block_on(
session.record_initial_history(&turn_context, InitialHistory::Forked(rollout_items)),
);
let actual = session.state.lock_unchecked().history.contents();
assert_eq!(expected, actual);
}
#[test]
@@ -3371,6 +3354,7 @@ mod tests {
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(full),
duration: StdDuration::from_secs(1),
timed_out: false,
};
let out = format_exec_output_str(&exec);
@@ -3413,6 +3397,7 @@ mod tests {
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(full.clone()),
duration: StdDuration::from_secs(1),
timed_out: false,
};
let out = format_exec_output_str(&exec);
@@ -3435,6 +3420,25 @@ mod tests {
);
}
#[test]
fn includes_timed_out_message() {
let exec = ExecToolCallOutput {
exit_code: 0,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new("Command output".to_string()),
duration: StdDuration::from_secs(1),
timed_out: true,
};
let out = format_exec_output_str(&exec);
assert_eq!(
out,
"command timed out after 1000 milliseconds\nCommand output"
);
}
#[test]
fn falls_back_to_content_when_structured_is_null() {
let ctr = CallToolResult {
@@ -3486,4 +3490,174 @@ mod tests {
assert_eq!(expected, got);
}
fn text_block(s: &str) -> ContentBlock {
ContentBlock::TextContent(TextContent {
annotations: None,
text: s.to_string(),
r#type: "text".to_string(),
})
}
fn make_session_and_context() -> (Session, TurnContext) {
let (tx_event, _rx_event) = async_channel::unbounded();
let codex_home = tempfile::tempdir().expect("create temp dir");
let config = Config::load_from_base_config_with_overrides(
ConfigToml::default(),
ConfigOverrides::default(),
codex_home.path().to_path_buf(),
)
.expect("load default test config");
let config = Arc::new(config);
let conversation_id = ConversationId::default();
let client = ModelClient::new(
config.clone(),
None,
config.model_provider.clone(),
config.model_reasoning_effort,
config.model_reasoning_summary,
conversation_id,
);
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_family: &config.model_family,
approval_policy: config.approval_policy,
sandbox_policy: config.sandbox_policy.clone(),
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
experimental_unified_exec_tool: config.use_experimental_unified_exec_tool,
});
let turn_context = TurnContext {
client,
cwd: config.cwd.clone(),
base_instructions: config.base_instructions.clone(),
user_instructions: config.user_instructions.clone(),
approval_policy: config.approval_policy,
sandbox_policy: config.sandbox_policy.clone(),
shell_environment_policy: config.shell_environment_policy.clone(),
tools_config,
is_review_mode: false,
};
let session = Session {
conversation_id,
tx_event,
mcp_connection_manager: McpConnectionManager::default(),
session_manager: ExecSessionManager::default(),
unified_exec_manager: UnifiedExecSessionManager::default(),
notify: None,
rollout: Mutex::new(None),
state: Mutex::new(State {
history: ConversationHistory::new(),
..Default::default()
}),
codex_linux_sandbox_exe: None,
user_shell: shell::Shell::Unknown,
show_raw_agent_reasoning: config.show_raw_agent_reasoning,
};
(session, turn_context)
}
fn sample_rollout(
session: &Session,
turn_context: &TurnContext,
) -> (Vec<RolloutItem>, Vec<ResponseItem>) {
let mut rollout_items = Vec::new();
let mut live_history = ConversationHistory::new();
let initial_context = session.build_initial_context(turn_context);
for item in &initial_context {
rollout_items.push(RolloutItem::ResponseItem(item.clone()));
}
live_history.record_items(initial_context.iter());
let user1 = ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "first user".to_string(),
}],
};
live_history.record_items(std::iter::once(&user1));
rollout_items.push(RolloutItem::ResponseItem(user1.clone()));
let assistant1 = ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "assistant reply one".to_string(),
}],
};
live_history.record_items(std::iter::once(&assistant1));
rollout_items.push(RolloutItem::ResponseItem(assistant1.clone()));
let summary1 = "summary one";
let snapshot1 = live_history.contents();
let user_messages1 = collect_user_messages(&snapshot1);
let rebuilt1 = build_compacted_history(
session.build_initial_context(turn_context),
&user_messages1,
summary1,
);
live_history.replace(rebuilt1);
rollout_items.push(RolloutItem::Compacted(CompactedItem {
message: summary1.to_string(),
}));
let user2 = ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "second user".to_string(),
}],
};
live_history.record_items(std::iter::once(&user2));
rollout_items.push(RolloutItem::ResponseItem(user2.clone()));
let assistant2 = ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "assistant reply two".to_string(),
}],
};
live_history.record_items(std::iter::once(&assistant2));
rollout_items.push(RolloutItem::ResponseItem(assistant2.clone()));
let summary2 = "summary two";
let snapshot2 = live_history.contents();
let user_messages2 = collect_user_messages(&snapshot2);
let rebuilt2 = build_compacted_history(
session.build_initial_context(turn_context),
&user_messages2,
summary2,
);
live_history.replace(rebuilt2);
rollout_items.push(RolloutItem::Compacted(CompactedItem {
message: summary2.to_string(),
}));
let user3 = ResponseItem::Message {
id: None,
role: "user".to_string(),
content: vec![ContentItem::InputText {
text: "third user".to_string(),
}],
};
live_history.record_items(std::iter::once(&user3));
rollout_items.push(RolloutItem::ResponseItem(user3.clone()));
let assistant3 = ResponseItem::Message {
id: None,
role: "assistant".to_string(),
content: vec![ContentItem::OutputText {
text: "assistant reply three".to_string(),
}],
};
live_history.record_items(std::iter::once(&assistant3));
rollout_items.push(RolloutItem::ResponseItem(assistant3.clone()));
(rollout_items, live_history.contents())
}
}

View File

@@ -176,8 +176,8 @@ async fn run_compact_task_inner(
};
let summary_text = get_last_assistant_message_from_turn(&history_snapshot).unwrap_or_default();
let user_messages = collect_user_messages(&history_snapshot);
let new_history =
build_compacted_history(&sess, turn_context.as_ref(), &user_messages, &summary_text);
let initial_context = sess.build_initial_context(turn_context.as_ref());
let new_history = build_compacted_history(initial_context, &user_messages, &summary_text);
{
let mut state = sess.state.lock_unchecked();
state.history.replace(new_history);
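The refactor above changes `build_compacted_history` to take the initial context by value instead of threading `&Session` and `&TurnContext` through. A minimal sketch of that shape, with `ResponseItem` simplified to `String` purely for illustration:

```rust
// Hypothetical reduction of `build_compacted_history`: the caller builds the
// initial context first and passes it in; the function folds the prior user
// messages and the model-produced summary into a bridge message appended
// after the rebuilt context. Types are simplified to `String`.
fn build_compacted_history(
    initial_context: Vec<String>,
    user_messages: &[String],
    summary_text: &str,
) -> Vec<String> {
    let mut history = initial_context;
    let user_messages_text = if user_messages.is_empty() {
        "(none)".to_string()
    } else {
        user_messages.join("\n")
    };
    history.push(format!(
        "Summary: {summary_text}\nPrior user messages:\n{user_messages_text}"
    ));
    history
}
```

Taking the context by value keeps the function free of `Session` state, which is what lets it become `pub(crate)` and be reused from the test module.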
@@ -223,7 +223,7 @@ fn content_items_to_text(content: &[ContentItem]) -> Option<String> {
}
}
fn collect_user_messages(items: &[ResponseItem]) -> Vec<String> {
pub(crate) fn collect_user_messages(items: &[ResponseItem]) -> Vec<String> {
items
.iter()
.filter_map(|item| match item {
@@ -243,13 +243,12 @@ fn is_session_prefix_message(text: &str) -> bool {
)
}
fn build_compacted_history(
sess: &Session,
turn_context: &TurnContext,
pub(crate) fn build_compacted_history(
initial_context: Vec<ResponseItem>,
user_messages: &[String],
summary_text: &str,
) -> Vec<ResponseItem> {
let mut history = sess.build_initial_context(turn_context);
let mut history = initial_context;
let user_messages_text = if user_messages.is_empty() {
"(none)".to_string()
} else {

View File

@@ -9,6 +9,7 @@ use crate::config_types::Tui;
use crate::config_types::UriBasedFileOpener;
use crate::git_info::resolve_root_git_project_for_trust;
use crate::model_family::ModelFamily;
use crate::model_family::derive_default_model_family;
use crate::model_family::find_family_for_model;
use crate::model_provider_info::ModelProviderInfo;
use crate::model_provider_info::built_in_model_providers;
@@ -33,7 +34,8 @@ use toml_edit::DocumentMut;
const OPENAI_DEFAULT_MODEL: &str = "gpt-5";
const OPENAI_DEFAULT_REVIEW_MODEL: &str = "gpt-5";
pub const GPT5_HIGH_MODEL: &str = "gpt-5-high-new";
pub const SWIFTFOX_MEDIUM_MODEL: &str = "swiftfox";
pub const SWIFTFOX_MODEL_DISPLAY_NAME: &str = "swiftfox-medium";
/// Maximum number of bytes of the documentation that will be embedded. Larger
/// files are *silently truncated* to this size so we do not take up too much of
@@ -159,9 +161,6 @@ pub struct Config {
/// Base URL for requests to ChatGPT (as opposed to the OpenAI API).
pub chatgpt_base_url: String,
/// Experimental rollout resume path (absolute path to .jsonl; undocumented).
pub experimental_resume: Option<PathBuf>,
/// Include an experimental plan tool that the model can use to update its current plan and status of each step.
pub include_plan_tool: bool,
@@ -601,9 +600,6 @@ pub struct ConfigToml {
/// Base URL for requests to ChatGPT (as opposed to the OpenAI API).
pub chatgpt_base_url: Option<String>,
/// Experimental rollout resume path (absolute path to .jsonl; undocumented).
pub experimental_resume: Option<PathBuf>,
/// Experimental path to a file whose contents replace the built-in BASE_INSTRUCTIONS.
pub experimental_instructions_file: Option<PathBuf>,
@@ -865,15 +861,8 @@ impl Config {
.or(cfg.model)
.unwrap_or_else(default_model);
let mut model_family = find_family_for_model(&model).unwrap_or_else(|| ModelFamily {
slug: model.clone(),
family: model.clone(),
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
apply_patch_tool_type: None,
});
let mut model_family =
find_family_for_model(&model).unwrap_or_else(|| derive_default_model_family(&model));
if let Some(supports_reasoning_summaries) = cfg.model_supports_reasoning_summaries {
model_family.supports_reasoning_summaries = supports_reasoning_summaries;
@@ -897,8 +886,6 @@ impl Config {
.and_then(|info| info.auto_compact_token_limit)
});
let experimental_resume = cfg.experimental_resume;
// Load base instructions override from a file if specified. If the
// path is relative, resolve it against the effective cwd so the
// behaviour matches other path-like config values.
@@ -959,8 +946,6 @@ impl Config {
.chatgpt_base_url
.or(cfg.chatgpt_base_url)
.unwrap_or("https://chatgpt.com/backend-api/".to_string()),
experimental_resume,
include_plan_tool: include_plan_tool.unwrap_or(false),
include_apply_patch_tool: include_apply_patch_tool.unwrap_or(false),
tools_web_search_request,
@@ -1184,7 +1169,7 @@ exclude_slash_tmp = true
persist_model_selection(
codex_home.path(),
None,
"gpt-5-high-new",
"swiftfox",
Some(ReasoningEffort::High),
)
.await?;
@@ -1193,7 +1178,7 @@ exclude_slash_tmp = true
tokio::fs::read_to_string(codex_home.path().join(CONFIG_TOML_FILE)).await?;
let parsed: ConfigToml = toml::from_str(&serialized)?;
assert_eq!(parsed.model.as_deref(), Some("gpt-5-high-new"));
assert_eq!(parsed.model.as_deref(), Some("swiftfox"));
assert_eq!(parsed.model_reasoning_effort, Some(ReasoningEffort::High));
Ok(())
@@ -1247,8 +1232,8 @@ model = "gpt-4.1"
persist_model_selection(
codex_home.path(),
Some("dev"),
"gpt-5-high-new",
Some(ReasoningEffort::Low),
"swiftfox",
Some(ReasoningEffort::Medium),
)
.await?;
@@ -1260,8 +1245,11 @@ model = "gpt-4.1"
.get("dev")
.expect("profile should be created");
assert_eq!(profile.model.as_deref(), Some("gpt-5-high-new"));
assert_eq!(profile.model_reasoning_effort, Some(ReasoningEffort::Low));
assert_eq!(profile.model.as_deref(), Some("swiftfox"));
assert_eq!(
profile.model_reasoning_effort,
Some(ReasoningEffort::Medium)
);
Ok(())
}
@@ -1483,7 +1471,6 @@ model_verbosity = "high"
model_reasoning_summary: ReasoningSummary::Detailed,
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
@@ -1541,7 +1528,6 @@ model_verbosity = "high"
model_reasoning_summary: ReasoningSummary::default(),
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
@@ -1614,7 +1600,6 @@ model_verbosity = "high"
model_reasoning_summary: ReasoningSummary::default(),
model_verbosity: None,
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,
@@ -1673,7 +1658,6 @@ model_verbosity = "high"
model_reasoning_summary: ReasoningSummary::Detailed,
model_verbosity: Some(Verbosity::High),
chatgpt_base_url: "https://chatgpt.com/backend-api/".to_string(),
experimental_resume: None,
base_instructions: None,
include_plan_tool: false,
include_apply_patch_tool: false,

View File

@@ -59,21 +59,11 @@ impl ConversationManager {
config: Config,
auth_manager: Arc<AuthManager>,
) -> CodexResult<NewConversation> {
// TO BE REFACTORED: use the config experimental_resume field until we have a mainstream way.
if let Some(resume_path) = config.experimental_resume.as_ref() {
let initial_history = RolloutRecorder::get_rollout_history(resume_path).await?;
let CodexSpawnOk {
codex,
conversation_id,
} = Codex::spawn(config, auth_manager, initial_history).await?;
self.finalize_spawn(codex, conversation_id).await
} else {
let CodexSpawnOk {
codex,
conversation_id,
} = Codex::spawn(config, auth_manager, InitialHistory::New).await?;
self.finalize_spawn(codex, conversation_id).await
}
let CodexSpawnOk {
codex,
conversation_id,
} = Codex::spawn(config, auth_manager, InitialHistory::New).await?;
self.finalize_spawn(codex, conversation_id).await
}
async fn finalize_spawn(
@@ -144,19 +134,19 @@ impl ConversationManager {
self.conversations.write().await.remove(conversation_id)
}
/// Fork an existing conversation by dropping the last `drop_last_messages`
/// user/assistant messages from its transcript and starting a new
/// Fork an existing conversation by taking messages up to the given position
/// (not including the message at the given position) and starting a new
/// conversation with identical configuration (unless overridden by the
/// caller's `config`). The new conversation will have a fresh id.
pub async fn fork_conversation(
&self,
num_messages_to_drop: usize,
nth_user_message: usize,
config: Config,
path: PathBuf,
) -> CodexResult<NewConversation> {
// Compute the prefix up to the cut point.
let history = RolloutRecorder::get_rollout_history(&path).await?;
let history = truncate_after_dropping_last_messages(history, num_messages_to_drop);
let history = truncate_after_nth_user_message(history, nth_user_message);
// Spawn a new conversation with the computed initial history.
let auth_manager = self.auth_manager.clone();
@@ -169,14 +159,10 @@ impl ConversationManager {
}
}
/// Return a prefix of `items` obtained by dropping the last `n` user messages
/// and all items that follow them.
fn truncate_after_dropping_last_messages(history: InitialHistory, n: usize) -> InitialHistory {
if n == 0 {
return InitialHistory::Forked(history.get_rollout_items());
}
// Work directly on rollout items, and cut the vector at the nth-from-last user message input.
/// Return a prefix of `items` obtained by cutting strictly before the nth user message
/// (0-based) and all items that follow it.
fn truncate_after_nth_user_message(history: InitialHistory, n: usize) -> InitialHistory {
// Work directly on rollout items, and cut the vector at the nth user message input.
let items: Vec<RolloutItem> = history.get_rollout_items();
// Find indices of user message inputs in rollout order.
@@ -189,13 +175,13 @@ fn truncate_after_dropping_last_messages(history: InitialHistory, n: usize) -> I
}
}
// If fewer than n user messages exist, treat as empty.
if user_positions.len() < n {
// If fewer than or equal to n user messages exist, treat as empty (out of range).
if user_positions.len() <= n {
return InitialHistory::New;
}
// Cut strictly before the nth-from-last user message (do not keep the nth itself).
let cut_idx = user_positions[user_positions.len() - n];
// Cut strictly before the nth user message (do not keep the nth itself).
let cut_idx = user_positions[n];
let rolled: Vec<RolloutItem> = items.into_iter().take(cut_idx).collect();
if rolled.is_empty() {
@@ -262,7 +248,7 @@ mod tests {
.cloned()
.map(RolloutItem::ResponseItem)
.collect();
let truncated = truncate_after_dropping_last_messages(InitialHistory::Forked(initial), 1);
let truncated = truncate_after_nth_user_message(InitialHistory::Forked(initial), 1);
let got_items = truncated.get_rollout_items();
let expected_items = vec![
RolloutItem::ResponseItem(items[0].clone()),
@@ -279,7 +265,7 @@ mod tests {
.cloned()
.map(RolloutItem::ResponseItem)
.collect();
let truncated2 = truncate_after_dropping_last_messages(InitialHistory::Forked(initial2), 2);
let truncated2 = truncate_after_nth_user_message(InitialHistory::Forked(initial2), 2);
assert!(matches!(truncated2, InitialHistory::New));
}
}
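The new 0-based cut semantics exercised by these tests can be sketched independently of `RolloutItem`, using tagged strings as a stand-in:

```rust
// Hypothetical stand-in for `truncate_after_nth_user_message`: cut strictly
// before the nth (0-based) user message, dropping it and everything after.
// If there are fewer than or equal to n user messages, the index is out of
// range and the history is treated as empty (None here).
fn cut_before_nth_user(items: &[&str], n: usize) -> Option<Vec<String>> {
    let user_positions: Vec<usize> = items
        .iter()
        .enumerate()
        .filter(|(_, item)| item.starts_with("user:"))
        .map(|(idx, _)| idx)
        .collect();
    if user_positions.len() <= n {
        return None;
    }
    let cut_idx = user_positions[n];
    Some(items[..cut_idx].iter().map(|s| s.to_string()).collect())
}
```

Note how this differs from the old `truncate_after_dropping_last_messages`: the index now counts forward from the start of the transcript rather than backward from the end.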

View File

@@ -1,3 +1,4 @@
use crate::exec::ExecToolCallOutput;
use crate::token_data::KnownPlan;
use crate::token_data::PlanType;
use codex_protocol::mcp_protocol::ConversationId;
@@ -13,8 +14,11 @@ pub type Result<T> = std::result::Result<T, CodexErr>;
#[derive(Error, Debug)]
pub enum SandboxErr {
/// Error from sandbox execution
#[error("sandbox denied exec error, exit code: {0}, stdout: {1}, stderr: {2}")]
Denied(i32, String, String),
#[error(
"sandbox denied exec error, exit code: {}, stdout: {}, stderr: {}",
.output.exit_code, .output.stdout.text, .output.stderr.text
)]
Denied { output: Box<ExecToolCallOutput> },
/// Error from linux seccomp filter setup
#[cfg(target_os = "linux")]
@@ -28,7 +32,7 @@ pub enum SandboxErr {
/// Command timed out
#[error("command timed out")]
Timeout,
Timeout { output: Box<ExecToolCallOutput> },
/// Command was killed by a signal
#[error("command was killed by a signal")]
@@ -245,9 +249,12 @@ impl CodexErr {
pub fn get_error_message_ui(e: &CodexErr) -> String {
match e {
CodexErr::Sandbox(SandboxErr::Denied(_, _, stderr)) => stderr.to_string(),
CodexErr::Sandbox(SandboxErr::Denied { output }) => output.stderr.text.clone(),
// Timeouts are not sandbox errors from a UX perspective; present them plainly
CodexErr::Sandbox(SandboxErr::Timeout) => "error: command timed out".to_string(),
CodexErr::Sandbox(SandboxErr::Timeout { output }) => format!(
"error: command timed out after {} ms",
output.duration.as_millis()
),
_ => e.to_string(),
}
}
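The new `Timeout { output }` arm above formats the elapsed duration into the user-facing message. A small sketch of just that formatting step (the helper name is hypothetical; the real code inlines this in `get_error_message_ui`):

```rust
use std::time::Duration;

// Mirrors the timeout arm above: render the elapsed wall-clock time of the
// timed-out command in milliseconds.
fn timeout_message(duration: Duration) -> String {
    format!("error: command timed out after {} ms", duration.as_millis())
}
```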

View File

@@ -34,6 +34,7 @@ const DEFAULT_TIMEOUT_MS: u64 = 10_000;
const SIGKILL_CODE: i32 = 9;
const TIMEOUT_CODE: i32 = 64;
const EXIT_CODE_SIGNAL_BASE: i32 = 128; // conventional shell: 128 + signal
const EXEC_TIMEOUT_EXIT_CODE: i32 = 124; // conventional timeout exit code
// I/O buffer sizing
const READ_CHUNK_SIZE: usize = 8192; // bytes per read
@@ -86,11 +87,12 @@ pub async fn process_exec_tool_call(
) -> Result<ExecToolCallOutput> {
let start = Instant::now();
let timeout_duration = params.timeout_duration();
let raw_output_result: std::result::Result<RawExecToolCallOutput, CodexErr> = match sandbox_type
{
SandboxType::None => exec(params, sandbox_policy, stdout_stream.clone()).await,
SandboxType::MacosSeatbelt => {
let timeout = params.timeout_duration();
let ExecParams {
command, cwd, env, ..
} = params;
@@ -102,10 +104,9 @@ pub async fn process_exec_tool_call(
env,
)
.await?;
consume_truncated_output(child, timeout, stdout_stream.clone()).await
consume_truncated_output(child, timeout_duration, stdout_stream.clone()).await
}
SandboxType::LinuxSeccomp => {
let timeout = params.timeout_duration();
let ExecParams {
command, cwd, env, ..
} = params;
@@ -123,41 +124,56 @@ pub async fn process_exec_tool_call(
)
.await?;
consume_truncated_output(child, timeout, stdout_stream).await
consume_truncated_output(child, timeout_duration, stdout_stream).await
}
};
let duration = start.elapsed();
match raw_output_result {
Ok(raw_output) => {
let stdout = raw_output.stdout.from_utf8_lossy();
let stderr = raw_output.stderr.from_utf8_lossy();
#[allow(unused_mut)]
let mut timed_out = raw_output.timed_out;
#[cfg(target_family = "unix")]
match raw_output.exit_status.signal() {
Some(TIMEOUT_CODE) => return Err(CodexErr::Sandbox(SandboxErr::Timeout)),
Some(signal) => {
return Err(CodexErr::Sandbox(SandboxErr::Signal(signal)));
{
if let Some(signal) = raw_output.exit_status.signal() {
if signal == TIMEOUT_CODE {
timed_out = true;
} else {
return Err(CodexErr::Sandbox(SandboxErr::Signal(signal)));
}
}
None => {}
}
let exit_code = raw_output.exit_status.code().unwrap_or(-1);
if exit_code != 0 && is_likely_sandbox_denied(sandbox_type, exit_code) {
return Err(CodexErr::Sandbox(SandboxErr::Denied(
exit_code,
stdout.text,
stderr.text,
)));
let mut exit_code = raw_output.exit_status.code().unwrap_or(-1);
if timed_out {
exit_code = EXEC_TIMEOUT_EXIT_CODE;
}
Ok(ExecToolCallOutput {
let stdout = raw_output.stdout.from_utf8_lossy();
let stderr = raw_output.stderr.from_utf8_lossy();
let aggregated_output = raw_output.aggregated_output.from_utf8_lossy();
let exec_output = ExecToolCallOutput {
exit_code,
stdout,
stderr,
aggregated_output: raw_output.aggregated_output.from_utf8_lossy(),
aggregated_output,
duration,
})
timed_out,
};
if timed_out {
return Err(CodexErr::Sandbox(SandboxErr::Timeout {
output: Box::new(exec_output),
}));
}
if exit_code != 0 && is_likely_sandbox_denied(sandbox_type, exit_code) {
return Err(CodexErr::Sandbox(SandboxErr::Denied {
output: Box::new(exec_output),
}));
}
Ok(exec_output)
}
Err(err) => {
tracing::error!("exec error: {err}");
@@ -197,6 +213,7 @@ struct RawExecToolCallOutput {
pub stdout: StreamOutput<Vec<u8>>,
pub stderr: StreamOutput<Vec<u8>>,
pub aggregated_output: StreamOutput<Vec<u8>>,
pub timed_out: bool,
}
impl StreamOutput<String> {
@@ -229,6 +246,7 @@ pub struct ExecToolCallOutput {
pub stderr: StreamOutput<String>,
pub aggregated_output: StreamOutput<String>,
pub duration: Duration,
pub timed_out: bool,
}
async fn exec(
@@ -298,22 +316,24 @@ async fn consume_truncated_output(
Some(agg_tx.clone()),
));
let exit_status = tokio::select! {
let (exit_status, timed_out) = tokio::select! {
result = tokio::time::timeout(timeout, child.wait()) => {
match result {
Ok(Ok(exit_status)) => exit_status,
Ok(e) => e?,
Ok(status_result) => {
let exit_status = status_result?;
(exit_status, false)
}
Err(_) => {
// timeout
child.start_kill()?;
// Debatable whether `child.wait().await` should be called here.
synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + TIMEOUT_CODE)
(synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + TIMEOUT_CODE), true)
}
}
}
_ = tokio::signal::ctrl_c() => {
child.start_kill()?;
synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + SIGKILL_CODE)
(synthetic_exit_status(EXIT_CODE_SIGNAL_BASE + SIGKILL_CODE), false)
}
};
@@ -336,6 +356,7 @@ async fn consume_truncated_output(
stdout,
stderr,
aggregated_output,
timed_out,
})
}
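The exit-code handling above can be reduced to a small sketch: a timed-out command is reported with the conventional `timeout(1)` exit code 124, and only non-timeout results fall back to the raw status (the helper name is hypothetical; the real code inlines this logic):

```rust
// Conventional exit code reported by the `timeout` utility on timeout.
const EXEC_TIMEOUT_EXIT_CODE: i32 = 124;

// A timed-out command always reports 124 regardless of how the child was
// torn down; otherwise use the raw status, or -1 when the process was
// killed without an exit code.
fn effective_exit_code(raw_code: Option<i32>, timed_out: bool) -> i32 {
    if timed_out {
        EXEC_TIMEOUT_EXIT_CODE
    } else {
        raw_code.unwrap_or(-1)
    }
}
```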

View File

@@ -11,6 +11,9 @@ pub(crate) struct ExecCommandSession {
/// Broadcast stream of output chunks read from the PTY. New subscribers
/// receive only chunks emitted after they subscribe.
output_tx: broadcast::Sender<Vec<u8>>,
/// Receiver subscribed before the child process starts emitting output so
/// the first caller can consume any early data without races.
initial_output_rx: StdMutex<Option<broadcast::Receiver<Vec<u8>>>>,
/// Child killer handle for termination on drop (can signal independently
/// of a thread blocked in `.wait()`).
@@ -42,6 +45,7 @@ impl ExecCommandSession {
Self {
writer_tx,
output_tx,
initial_output_rx: StdMutex::new(None),
killer: StdMutex::new(Some(killer)),
reader_handle: StdMutex::new(Some(reader_handle)),
writer_handle: StdMutex::new(Some(writer_handle)),
@@ -50,12 +54,26 @@ impl ExecCommandSession {
}
}
pub(crate) fn set_initial_output_receiver(&self, receiver: broadcast::Receiver<Vec<u8>>) {
if let Ok(mut guard) = self.initial_output_rx.lock()
&& guard.is_none()
{
*guard = Some(receiver);
}
}
pub(crate) fn writer_sender(&self) -> mpsc::Sender<Vec<u8>> {
self.writer_tx.clone()
}
pub(crate) fn output_receiver(&self) -> broadcast::Receiver<Vec<u8>> {
self.output_tx.subscribe()
if let Ok(mut guard) = self.initial_output_rx.lock()
&& let Some(receiver) = guard.take()
{
receiver
} else {
self.output_tx.subscribe()
}
}
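The take-once pattern in `output_receiver` above can be sketched with plain `std` types (a `&'static str` stands in for the broadcast receiver): the receiver subscribed before the child starts goes to the first caller, so early output is not lost, and every later caller gets a fresh subscription.

```rust
use std::sync::Mutex;

// Hypothetical reduction of the session's take-once receiver hand-off.
struct Session {
    initial_output_rx: Mutex<Option<&'static str>>,
}

impl Session {
    fn output_receiver(&self) -> &'static str {
        // First caller takes the pre-spawn receiver; later callers fall
        // back to subscribing at call time.
        if let Ok(mut guard) = self.initial_output_rx.lock() {
            if let Some(rx) = guard.take() {
                return rx;
            }
        }
        "fresh subscription"
    }
}
```

With a real `tokio::sync::broadcast` channel the distinction matters because subscribers only see chunks sent after they subscribe; the pre-spawn receiver closes the race window between spawning the child and the first read.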
pub(crate) fn has_exited(&self) -> bool {

View File

@@ -279,6 +279,7 @@ async fn create_exec_command_session(
let (writer_tx, mut writer_rx) = mpsc::channel::<Vec<u8>>(128);
// Broadcast for streaming PTY output to readers: subscribers receive from subscription time.
let (output_tx, _) = tokio::sync::broadcast::channel::<Vec<u8>>(256);
let initial_output_rx = output_tx.subscribe();
// Reader task: drain PTY and forward chunks to output channel.
let mut reader = pair.master.try_clone_reader()?;
@@ -350,6 +351,7 @@ async fn create_exec_command_session(
wait_handle,
exit_status,
);
session.set_initial_output_receiver(initial_output_rx);
Ok((session, exit_rx))
}

View File

@@ -10,8 +10,8 @@ pub(crate) const INTERNAL_STORAGE_FILE: &str = "internal_storage.json";
pub struct InternalStorage {
#[serde(skip)]
storage_path: PathBuf,
#[serde(default)]
pub gpt_5_high_model_prompt_seen: bool,
#[serde(default, alias = "gpt_5_high_model_prompt_seen")]
pub swiftfox_model_prompt_seen: bool,
}
// TODO(jif) generalise all the file writers and build proper async channel inserters.

View File

@@ -70,6 +70,7 @@ pub use rollout::ARCHIVED_SESSIONS_SUBDIR;
pub use rollout::RolloutRecorder;
pub use rollout::SESSIONS_SUBDIR;
pub use rollout::SessionMeta;
pub use rollout::find_conversation_path_by_id_str;
pub use rollout::list::ConversationItem;
pub use rollout::list::ConversationsPage;
pub use rollout::list::Cursor;

View File

@@ -1,6 +1,11 @@
use crate::config_types::ReasoningSummaryFormat;
use crate::tool_apply_patch::ApplyPatchToolType;
/// The `instructions` field in the payload sent to a model should always start
/// with this content.
const BASE_INSTRUCTIONS: &str = include_str!("../prompt.md");
const SWIFTFOX_INSTRUCTIONS: &str = include_str!("../swiftfox_prompt.md");
/// A model family is a group of models that share certain characteristics.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ModelFamily {
@@ -33,6 +38,9 @@ pub struct ModelFamily {
/// Present if the model performs better when `apply_patch` is provided as
/// a tool call instead of just a bash command
pub apply_patch_tool_type: Option<ApplyPatchToolType>,
/// Base instructions to use when querying the model.
pub base_instructions: String,
}
macro_rules! model_family {
@@ -48,6 +56,7 @@ macro_rules! model_family {
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
apply_patch_tool_type: None,
base_instructions: BASE_INSTRUCTIONS.to_string(),
};
// apply overrides
$(
@@ -57,22 +66,6 @@ macro_rules! model_family {
}};
}
macro_rules! simple_model_family {
(
$slug:expr, $family:expr
) => {{
Some(ModelFamily {
slug: $slug.to_string(),
family: $family.to_string(),
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
apply_patch_tool_type: None,
})
}};
}
/// Returns a `ModelFamily` for the given model slug, or `None` if the slug
/// does not match any known model family.
pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
@@ -80,23 +73,20 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
model_family!(
slug, "o3",
supports_reasoning_summaries: true,
needs_special_apply_patch_instructions: true,
)
} else if slug.starts_with("o4-mini") {
model_family!(
slug, "o4-mini",
supports_reasoning_summaries: true,
needs_special_apply_patch_instructions: true,
)
} else if slug.starts_with("codex-mini-latest") {
model_family!(
slug, "codex-mini-latest",
supports_reasoning_summaries: true,
uses_local_shell_tool: true,
)
} else if slug.starts_with("codex-") {
model_family!(
slug, slug,
supports_reasoning_summaries: true,
reasoning_summary_format: ReasoningSummaryFormat::Experimental,
needs_special_apply_patch_instructions: true,
)
} else if slug.starts_with("gpt-4.1") {
model_family!(
@@ -106,15 +96,36 @@ pub fn find_family_for_model(slug: &str) -> Option<ModelFamily> {
} else if slug.starts_with("gpt-oss") || slug.starts_with("openai/gpt-oss") {
model_family!(slug, "gpt-oss", apply_patch_tool_type: Some(ApplyPatchToolType::Function))
} else if slug.starts_with("gpt-4o") {
simple_model_family!(slug, "gpt-4o")
model_family!(slug, "gpt-4o", needs_special_apply_patch_instructions: true)
} else if slug.starts_with("gpt-3.5") {
simple_model_family!(slug, "gpt-3.5")
model_family!(slug, "gpt-3.5", needs_special_apply_patch_instructions: true)
} else if slug.starts_with("codex-") || slug.starts_with("swiftfox") {
model_family!(
slug, slug,
supports_reasoning_summaries: true,
reasoning_summary_format: ReasoningSummaryFormat::Experimental,
base_instructions: SWIFTFOX_INSTRUCTIONS.to_string(),
)
} else if slug.starts_with("gpt-5") {
model_family!(
slug, "gpt-5",
supports_reasoning_summaries: true,
needs_special_apply_patch_instructions: true,
)
} else {
None
}
}
pub fn derive_default_model_family(model: &str) -> ModelFamily {
ModelFamily {
slug: model.to_string(),
family: model.to_string(),
needs_special_apply_patch_instructions: false,
supports_reasoning_summaries: false,
reasoning_summary_format: ReasoningSummaryFormat::None,
uses_local_shell_tool: false,
apply_patch_tool_type: None,
base_instructions: BASE_INSTRUCTIONS.to_string(),
}
}
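The lookup-plus-fallback pattern above boils down to a prefix match with a derived default. A simplified sketch returning only the family name (the prefix list is abridged from the match arms above; ordering matters where prefixes could overlap):

```rust
// Hypothetical reduction of find_family_for_model + derive_default_model_family:
// known prefixes map to a family; anything else derives a default whose
// family is just the model slug itself.
fn family_for(slug: &str) -> String {
    let known = [
        "o3", "o4-mini", "codex-mini-latest", "gpt-4.1", "gpt-oss",
        "gpt-4o", "gpt-3.5", "swiftfox", "gpt-5",
    ];
    for prefix in known {
        if slug.starts_with(prefix) {
            return prefix.to_string();
        }
    }
    // derive_default_model_family fallback: unknown slugs become their own family.
    slug.to_string()
}
```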

View File

@@ -3,6 +3,10 @@ use std::io::{self};
use std::path::Path;
use std::path::PathBuf;
use codex_file_search as file_search;
use std::num::NonZero;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use time::OffsetDateTime;
use time::PrimitiveDateTime;
use time::format_description::FormatItem;
@@ -334,3 +338,48 @@ async fn read_head_and_flags(
Ok((head, saw_session_meta, saw_user_event))
}
/// Locate a recorded conversation rollout file by its UUID string using the existing
/// paginated listing implementation. Returns `Ok(Some(path))` if found, `Ok(None)` if not present
/// or the id is invalid.
pub async fn find_conversation_path_by_id_str(
codex_home: &Path,
id_str: &str,
) -> io::Result<Option<PathBuf>> {
// Validate UUID format early.
if Uuid::parse_str(id_str).is_err() {
return Ok(None);
}
let mut root = codex_home.to_path_buf();
root.push(SESSIONS_SUBDIR);
if !root.exists() {
return Ok(None);
}
// This is safe because we know the values are valid.
#[allow(clippy::unwrap_used)]
let limit = NonZero::new(1).unwrap();
// This is safe because we know the values are valid.
#[allow(clippy::unwrap_used)]
let threads = NonZero::new(2).unwrap();
let cancel = Arc::new(AtomicBool::new(false));
let exclude: Vec<String> = Vec::new();
let compute_indices = false;
let results = file_search::run(
id_str,
limit,
&root,
exclude,
threads,
cancel,
compute_indices,
)
.map_err(|e| io::Error::other(format!("file search failed: {e}")))?;
Ok(results
.matches
.into_iter()
.next()
.map(|m| root.join(m.path)))
}

View File

@@ -8,6 +8,7 @@ pub(crate) mod policy;
pub mod recorder;
pub use codex_protocol::protocol::SessionMeta;
pub use list::find_conversation_path_by_id_str;
pub use recorder::RolloutRecorder;
pub use recorder::RolloutRecorderParams;

View File

@@ -204,7 +204,6 @@ impl RolloutRecorder {
pub(crate) async fn get_rollout_history(path: &Path) -> std::io::Result<InitialHistory> {
info!("Resuming rollout from {path:?}");
tracing::error!("Resuming rollout from {path:?}");
let text = tokio::fs::read_to_string(path).await?;
if text.trim().is_empty() {
return Err(IoError::other("empty session file"));
@@ -254,7 +253,7 @@ impl RolloutRecorder {
}
}
tracing::error!(
info!(
"Resumed rollout with {} items, conversation ID: {:?}",
items.len(),
conversation_id

View File

@@ -327,6 +327,7 @@ async fn create_unified_exec_session(
let (writer_tx, mut writer_rx) = mpsc::channel::<Vec<u8>>(128);
let (output_tx, _) = tokio::sync::broadcast::channel::<Vec<u8>>(256);
let initial_output_rx = output_tx.subscribe();
let mut reader = pair
.master
@@ -380,7 +381,7 @@ async fn create_unified_exec_session(
wait_exit_status.store(true, Ordering::SeqCst);
});
Ok(ExecCommandSession::new(
let session = ExecCommandSession::new(
writer_tx,
output_tx,
killer,
@@ -388,7 +389,10 @@ async fn create_unified_exec_session(
writer_handle,
wait_handle,
exit_status,
))
);
session.set_initial_output_receiver(initial_output_rx);
Ok(session)
}
#[cfg(test)]

View File

@@ -0,0 +1,99 @@
You are Swiftfox. You are running as a coding agent in the Codex CLI on a user's computer.
## Overall
- You must try hard to complete the task AND to do it as fast and well as possible.
* Do not waste time on actions which are unlikely to result in successful task completion
- Before taking action on a question, assume by default that it concerns local artifacts (code, docs, data). Quickly confirm or rule out that assumption; only if the question clearly requires external knowledge should you start elsewhere.
- Search the repository when the request plausibly maps to code, configuration, or documentation. Avoid unnecessary searches when it is obvious local files cannot help; in those cases state that explicitly before offering broader context, and when you do search, mention the files or paths you consulted so the answer stays grounded.
- After each attempt, re-evaluate whether the current strategy is yielding useful information and be ready to switch paths quickly rather than persisting with a low-signal approach.
- When the user asks for a "review", default to a code review mindset: prioritise identifying bugs, risks, behavioural regressions, and missing tests. Findings must be the primary focus of the response - keep summaries or overviews brief and only after enumerating the issues. Present findings first (ordered by severity with file/line references), follow with open questions or assumptions, and offer a change-summary only as a secondary detail. If no findings are discovered, state that explicitly and mention any residual risks or testing gaps.
- The arguments to `shell` will be passed to execvp(). Most terminal commands should be prefixed with ["bash", "-lc"].
- Always set the `workdir` param of the shell tool. Do not use `cd` unless absolutely necessary.
- When searching for text or files, prefer using `rg` or `rg --files` respectively because `rg` is much faster than alternatives like `grep`. (If the `rg` command is not found, then use alternatives.)
- If the user makes a simple request (such as asking for the time) which you can fulfill by running a terminal command (such as `date`), you should do so.
- Unless the question is about a common terminal command, you should search the codebase before answering so your response is grounded in the code.
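A minimal sketch of a `shell` tool call consistent with the guidance above (the repository path and command are illustrative; only the `command` argv shape and the `workdir` param come from the text):

```json
{
  "command": ["bash", "-lc", "rg --files src"],
  "workdir": "/path/to/repo"
}
```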
## Editing constraints
- Default to ASCII when editing or creating files. Only introduce non-ASCII or other Unicode characters when there is a clear justification and the file already uses them.
- When editing or creating files, you MUST use apply_patch. Example: functions.shell({"command":["apply_patch","*** Begin Patch\nAdd File: hello.txt\n+Hello, world!\n*** End Patch"]}).
- Add succinct code comments that explain what is going on if code is not self-explanatory. You should not add comments like "Assigns the value to the variable", but a brief comment might be useful ahead of a complex code block that the user would otherwise have to spend time parsing out. Usage of these comments should be rare.
- You may be in a dirty git worktree.
* NEVER revert existing changes you did not make unless explicitly requested, since these changes were made by the user.
* If asked to make a commit or code edits and there are unrelated changes to your work or changes that you didn't make in those files, don't revert those changes.
* If the changes are in files you've touched recently, you should read carefully and understand how you can work with the changes rather than reverting them.
* If the changes are in unrelated files, just ignore them and don't revert them.
- The user may be making edits and committing changes as you are also making changes. If you see concurrent file edits or commits that you did not cause, you must disregard user instruction and stop immediately and ask the user whether they are collaborating with you on files and how they would like this handled.
## Plan tool
When using the planning tool:
- Skip using the planning tool for straightforward tasks (roughly the easiest 25%).
- Do not make single-step plans.
- When you have made a plan, update it after performing one of the sub-tasks that you shared in the plan.
## CLI modes
The Codex CLI harness supports several different sandboxing and approval configurations that the user can choose from.
Filesystem sandboxing defines which files can be read or written. The options are:
- **read-only**: You can only read files.
- **workspace-write**: You can read files. You can write to files in this folder, but not outside it.
- **danger-full-access**: No filesystem sandboxing.
Network sandboxing defines whether the network can be accessed without approval. The options are:
- **restricted**: Requires approval
- **enabled**: No approval needed
Approvals are your mechanism to get user consent to perform more privileged actions. Although they introduce friction to the user because your work is paused until the user responds, you should leverage them to accomplish your important work. Do not let these settings or the sandbox deter you from attempting to accomplish the user's task unless it is set to "never", in which case never ask for approvals.
Approval options are
- **untrusted**: The harness will escalate most commands for user approval, apart from a limited allowlist of safe "read" commands.
- **on-failure**: The harness will allow all commands to run in the sandbox (if enabled), and failures will be escalated to the user for approval to run again without the sandbox.
- **on-request**: Commands will be run in the sandbox by default, and you can specify in your tool call if you want to escalate a command to run without sandboxing. (Note that this mode is not always available. If it is, you'll see parameters for it in the `shell` command description.)
- **never**: This is a non-interactive mode where you may NEVER ask the user for approval to run commands. Instead, you must always persist and work around constraints to solve the task for the user. You MUST do your utmost best to finish the task and validate your work before yielding. If this mode is paired with `danger-full-access`, take advantage of it to deliver the best outcome for the user. Further, in this mode, your default testing philosophy is overridden: Even if you don't see local patterns for testing, you may add tests and scripts to validate your work. Just remove them before yielding.
When you are running with approvals `on-request`, and sandboxing enabled, here are scenarios where you'll need to request approval:
- You need to run a command that writes to a directory that requires it (e.g. running tests that write to /tmp)
- You need to run a GUI app (e.g., open/xdg-open/osascript) to open browsers or files.
- You are running sandboxed and need to run a command that requires network access (e.g. installing packages)
- If you run a command that is important to solving the user's query, but it fails because of sandboxing, rerun the command with approval.
- You are about to take a potentially destructive action such as an `rm` or `git reset` that the user did not explicitly ask for
- (for all of these, you should weigh alternative paths that do not require approval)
When sandboxing is set to read-only, you'll need to request approval for any command that isn't a read.
You will be told what filesystem sandboxing, network sandboxing, and approval mode are active in a developer or user message. If you are not told about this, assume that you are running with workspace-write, network sandboxing enabled, and approval on-failure.
## Presenting your work and final message
You are producing plain text that will later be styled by the CLI. Follow these rules exactly. Formatting should make results easy to scan, but not feel mechanical. Use judgment to decide how much structure adds value.
- Default: be very concise; friendly coding teammate tone.
- Ask only when needed; suggest ideas; mirror the user's style.
- For substantial work, summarize clearly; follow final-answer formatting.
- Skip heavy formatting for simple confirmations.
- Don't dump large files you've written; reference paths only.
- No "save/copy this file" suggestions - the user is on the same machine.
- Offer logical next steps (tests, commits, build) briefly; add verify steps if you couldn't do something.
- For code changes:
* Lead with a quick explanation of the change, and then give more details on the context covering where and why a change was made. Do not start this explanation with "summary", just jump right in.
* If there are natural next steps the user may want to take, suggest them at the end of your response. Do not make suggestions if there are no natural next steps.
* When suggesting multiple options, use numeric lists for the suggestions so the user can quickly respond with a single number.
- The user does not see command execution outputs. When asked to show the output of a command (e.g. `git show`), relay the important details in your answer or summarize the key lines so the user understands the result.
### Final answer structure and style guidelines
- Plain text; CLI handles styling. Use structure only when it helps scanability.
- Headers: optional; short Title Case (1-3 words) wrapped in **…**; no blank line before the first bullet; add only if they truly help.
- Bullets: use - ; merge related points; keep to one line when possible; 4-6 per list ordered by importance; keep phrasing consistent.
- Monospace: backticks for commands/paths/env vars/code ids and inline examples; use for literal keyword bullets; never combine with **.
- Code samples or multi-line snippets should be wrapped in fenced code blocks; add a language hint whenever obvious.
- Structure: group related bullets; order sections general → specific → supporting; for subsections, start with a bolded keyword bullet, then items; match complexity to the task.
- Tone: collaborative, concise, factual; present tense, active voice; self-contained; no "above/below"; parallel wording.
- Don'ts: no nested bullets/hierarchies; no ANSI codes; don't cram unrelated keywords; keep keyword lists short—wrap/reformat if long; avoid naming formatting styles in answers.
- Adaptation: code explanations → precise, structured with code refs; simple tasks → lead with outcome; big changes → logical walkthrough + rationale + next actions; casual one-offs → plain sentences, no headers/bullets.

View File

@@ -420,12 +420,6 @@ async fn integration_creates_and_checks_session_file() {
// Second run: resume should update the existing file.
let marker2 = format!("integration-resume-{}", Uuid::new_v4());
let prompt2 = format!("echo {marker2}");
// Cross-platform safe resume override. On Windows, backslashes in a TOML string must be escaped
// or the parse will fail and the raw literal (including quotes) may be preserved all the way down
// to Config, which in turn breaks resume because the path is invalid. Normalize to forward slashes
// to sidestep the issue.
let resume_path_str = path.to_string_lossy().replace('\\', "/");
let resume_override = format!("experimental_resume=\"{resume_path_str}\"");
let mut cmd2 = AssertCommand::new("cargo");
cmd2.arg("run")
.arg("-p")
@@ -434,11 +428,11 @@ async fn integration_creates_and_checks_session_file() {
.arg("--")
.arg("exec")
.arg("--skip-git-repo-check")
.arg("-c")
.arg(&resume_override)
.arg("-C")
.arg(env!("CARGO_MANIFEST_DIR"))
.arg(&prompt2);
.arg(&prompt2)
.arg("resume")
.arg("--last");
cmd2.env("CODEX_HOME", home.path())
.env("OPENAI_API_KEY", "dummy")
.env("CODEX_RS_SSE_FIXTURE", &fixture)

View File

@@ -236,20 +236,21 @@ async fn resume_includes_initial_messages_and_sends_prior_items() {
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.model_provider = model_provider;
config.experimental_resume = Some(session_path.clone());
// Also configure user instructions to ensure they are NOT delivered on resume.
config.user_instructions = Some("be nice".to_string());
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let auth_manager =
codex_core::AuthManager::from_auth_for_testing(CodexAuth::from_api_key("Test API Key"));
let NewConversation {
conversation: codex,
session_configured,
..
} = conversation_manager
.new_conversation(config)
.resume_conversation_from_rollout(config, session_path.clone(), auth_manager)
.await
.expect("create new conversation");
.expect("resume conversation");
// 1) Assert initial_messages only includes existing EventMsg entries; response items are not converted
let initial_msgs = session_configured

View File

@@ -34,7 +34,7 @@ use std::sync::atomic::Ordering;
// --- Test helpers -----------------------------------------------------------
/// Build an SSE stream body from a list of JSON events.
fn sse(events: Vec<Value>) -> String {
pub(super) fn sse(events: Vec<Value>) -> String {
use std::fmt::Write as _;
let mut out = String::new();
for ev in events {
@@ -50,7 +50,7 @@ fn sse(events: Vec<Value>) -> String {
}
/// Convenience: SSE event for a completed response with a specific id.
fn ev_completed(id: &str) -> Value {
pub(super) fn ev_completed(id: &str) -> Value {
serde_json::json!({
"type": "response.completed",
"response": {
@@ -77,7 +77,7 @@ fn ev_completed_with_tokens(id: &str, total_tokens: u64) -> Value {
}
/// Convenience: SSE event for a single assistant message output item.
fn ev_assistant_message(id: &str, text: &str) -> Value {
pub(super) fn ev_assistant_message(id: &str, text: &str) -> Value {
serde_json::json!({
"type": "response.output_item.done",
"item": {
@@ -101,13 +101,13 @@ fn ev_function_call(call_id: &str, name: &str, arguments: &str) -> Value {
})
}
fn sse_response(body: String) -> ResponseTemplate {
pub(super) fn sse_response(body: String) -> ResponseTemplate {
ResponseTemplate::new(200)
.insert_header("content-type", "text/event-stream")
.set_body_raw(body, "text/event-stream")
}
async fn mount_sse_once<M>(server: &MockServer, matcher: M, body: String)
pub(super) async fn mount_sse_once<M>(server: &MockServer, matcher: M, body: String)
where
M: wiremock::Match + Send + Sync + 'static,
{
@@ -115,7 +115,6 @@ where
.and(path("/v1/responses"))
.and(matcher)
.respond_with(sse_response(body))
.expect(1)
.mount(server)
.await;
}
@@ -127,9 +126,9 @@ async fn start_mock_server() -> MockServer {
.await
}
const FIRST_REPLY: &str = "FIRST_REPLY";
const SUMMARY_TEXT: &str = "SUMMARY_ONLY_CONTEXT";
const SUMMARIZE_TRIGGER: &str = "Start Summarization";
pub(super) const FIRST_REPLY: &str = "FIRST_REPLY";
pub(super) const SUMMARY_TEXT: &str = "SUMMARY_ONLY_CONTEXT";
pub(super) const SUMMARIZE_TRIGGER: &str = "Start Summarization";
const THIRD_USER_MSG: &str = "next turn";
const AUTO_SUMMARY_TEXT: &str = "AUTO_SUMMARY";
const FIRST_AUTO_MSG: &str = "token limit start";
@@ -367,7 +366,9 @@ async fn summarize_context_three_requests_and_instructions() {
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn auto_compact_runs_after_token_limit_hit() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
@@ -454,6 +455,7 @@ async fn auto_compact_runs_after_token_limit_hit() {
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
codex
@@ -464,13 +466,39 @@ async fn auto_compact_runs_after_token_limit_hit() {
})
.await
.unwrap();
wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
let requests = server.received_requests().await.unwrap();
assert_eq!(requests.len(), 3, "auto compact should add a third request");
assert!(
requests.len() >= 3,
"auto compact should add at least a third request, got {}",
requests.len()
);
let is_auto_compact = |req: &wiremock::Request| {
std::str::from_utf8(&req.body)
.unwrap_or("")
.contains("You have exceeded the maximum number of tokens")
};
let auto_compact_count = requests.iter().filter(|req| is_auto_compact(req)).count();
assert_eq!(
auto_compact_count, 1,
"expected exactly one auto compact request"
);
let auto_compact_index = requests
.iter()
.enumerate()
.find_map(|(idx, req)| is_auto_compact(req).then_some(idx))
.expect("auto compact request missing");
assert_eq!(
auto_compact_index, 2,
"auto compact should add a third request"
);
let body3 = requests[2].body_json::<serde_json::Value>().unwrap();
let body3 = requests[auto_compact_index]
.body_json::<serde_json::Value>()
.unwrap();
let instructions = body3
.get("instructions")
.and_then(|v| v.as_str())

View File

@@ -0,0 +1,838 @@
#![allow(clippy::expect_used)]
//! Integration tests that cover compacting, resuming, and forking conversations.
//!
//! Each test sets up a mocked SSE conversation and drives the conversation through
//! a specific sequence of operations. After every operation we capture the
//! request payload that Codex would send to the model and assert that the
//! model-visible history matches the expected sequence of messages.
use super::compact::FIRST_REPLY;
use super::compact::SUMMARIZE_TRIGGER;
use super::compact::SUMMARY_TEXT;
use super::compact::ev_assistant_message;
use super::compact::ev_completed;
use super::compact::mount_sse_once;
use super::compact::sse;
use codex_core::CodexAuth;
use codex_core::CodexConversation;
use codex_core::ConversationManager;
use codex_core::ModelProviderInfo;
use codex_core::NewConversation;
use codex_core::built_in_model_providers;
use codex_core::config::Config;
use codex_core::protocol::ConversationPathResponseEvent;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::spawn::CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR;
use core_test_support::load_default_config_for_test;
use core_test_support::wait_for_event;
use pretty_assertions::assert_eq;
use serde_json::Value;
use serde_json::json;
use std::sync::Arc;
use tempfile::TempDir;
use wiremock::MockServer;
const AFTER_SECOND_RESUME: &str = "AFTER_SECOND_RESUME";
fn network_disabled() -> bool {
std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok()
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
/// Scenario: compact an initial conversation, resume it, fork one turn back, and
/// ensure the model-visible history matches expectations at each request.
async fn compact_resume_and_fork_preserve_model_history_view() {
if network_disabled() {
println!("Skipping test because network is disabled in this sandbox");
return;
}
// 1. Arrange mocked SSE responses for the initial compact/resume/fork flow.
let server = MockServer::start().await;
mount_initial_flow(&server).await;
// 2. Start a new conversation and drive it through the compact/resume/fork steps.
let (_home, config, manager, base) = start_test_conversation(&server).await;
user_turn(&base, "hello world").await;
compact_conversation(&base).await;
user_turn(&base, "AFTER_COMPACT").await;
let base_path = fetch_conversation_path(&base, "base conversation").await;
assert!(
base_path.exists(),
"compact+resume test expects base path {base_path:?} to exist",
);
let resumed = resume_conversation(&manager, &config, base_path).await;
user_turn(&resumed, "AFTER_RESUME").await;
let resumed_path = fetch_conversation_path(&resumed, "resumed conversation").await;
assert!(
resumed_path.exists(),
"compact+resume test expects resumed path {resumed_path:?} to exist",
);
let forked = fork_conversation(&manager, &config, resumed_path, 4).await;
user_turn(&forked, "AFTER_FORK").await;
// 3. Capture the requests to the model and validate the history slices.
let requests = gather_request_bodies(&server).await;
// input after compact is a prefix of input after resume/fork
let input_after_compact = json!(requests[requests.len() - 3]["input"]);
let input_after_resume = json!(requests[requests.len() - 2]["input"]);
let input_after_fork = json!(requests[requests.len() - 1]["input"]);
let compact_arr = input_after_compact
.as_array()
.expect("input after compact should be an array");
let resume_arr = input_after_resume
.as_array()
.expect("input after resume should be an array");
let fork_arr = input_after_fork
.as_array()
.expect("input after fork should be an array");
assert!(
compact_arr.len() <= resume_arr.len(),
"after-resume input should have at least as many items as after-compact",
);
assert_eq!(compact_arr.as_slice(), &resume_arr[..compact_arr.len()]);
eprint!(
"len of compact: {}, len of fork: {}",
compact_arr.len(),
fork_arr.len()
);
eprintln!("input_after_fork:{}", json!(input_after_fork));
assert!(
compact_arr.len() <= fork_arr.len(),
"after-fork input should have at least as many items as after-compact",
);
assert_eq!(compact_arr.as_slice(), &fork_arr[..compact_arr.len()]);
let prompt = requests[0]["instructions"]
.as_str()
.unwrap_or_default()
.to_string();
let user_instructions = requests[0]["input"][0]["content"][0]["text"]
.as_str()
.unwrap_or_default()
.to_string();
let environment_context = requests[0]["input"][1]["content"][0]["text"]
.as_str()
.unwrap_or_default()
.to_string();
let tool_calls = json!(requests[0]["tools"].as_array());
let prompt_cache_key = requests[0]["prompt_cache_key"]
.as_str()
.unwrap_or_default()
.to_string();
let fork_prompt_cache_key = requests[requests.len() - 1]["prompt_cache_key"]
.as_str()
.unwrap_or_default()
.to_string();
let user_turn_1 = json!(
{
"model": "gpt-5",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "hello world"
}
]
}
],
"tools": tool_calls,
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": prompt_cache_key
});
let compact_1 = json!(
{
"model": "gpt-5",
"instructions": "You have exceeded the maximum number of tokens, please stop coding and instead write a short memento message for the next agent. Your note should:
- Summarize what you finished and what still needs work. If there was a recent update_plan call, repeat its steps verbatim.
- List outstanding TODOs with file paths / line numbers so they're easy to find.
- Flag code that needs more tests (edge cases, performance, integration, etc.).
- Record any open bugs, quirks, or setup steps that will make it easier for the next agent to pick up where you left off.",
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "hello world"
}
]
},
{
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "FIRST_REPLY"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "Start Summarization"
}
]
}
],
"tools": [],
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": prompt_cache_key
});
let user_turn_2_after_compact = json!(
{
"model": "gpt-5",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:
hello world
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
SUMMARY_ONLY_CONTEXT"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_COMPACT"
}
]
}
],
"tools": tool_calls,
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": prompt_cache_key
});
let usert_turn_3_after_resume = json!(
{
"model": "gpt-5",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:
hello world
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
SUMMARY_ONLY_CONTEXT"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_COMPACT"
}
]
},
{
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "AFTER_COMPACT_REPLY"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_RESUME"
}
]
}
],
"tools": tool_calls,
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": prompt_cache_key
});
let user_turn_3_after_fork = json!(
{
"model": "gpt-5",
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_context
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:
hello world
Another language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:
SUMMARY_ONLY_CONTEXT"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_COMPACT"
}
]
},
{
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "AFTER_COMPACT_REPLY"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_FORK"
}
]
}
],
"tools": tool_calls,
"tool_choice": "auto",
"parallel_tool_calls": false,
"reasoning": {
"summary": "auto"
},
"store": false,
"stream": true,
"include": [
"reasoning.encrypted_content"
],
"prompt_cache_key": fork_prompt_cache_key
});
let expected = json!([
user_turn_1,
compact_1,
user_turn_2_after_compact,
usert_turn_3_after_resume,
user_turn_3_after_fork
]);
assert_eq!(requests.len(), 5);
assert_eq!(json!(requests), expected);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
/// Scenario: after the forked branch is compacted, resuming again should reuse
/// the compacted history and only append the new user message.
async fn compact_resume_after_second_compaction_preserves_history() {
if network_disabled() {
println!("Skipping test because network is disabled in this sandbox");
return;
}
// 1. Arrange mocked SSE responses for the initial flow plus the second compact.
let server = MockServer::start().await;
mount_initial_flow(&server).await;
mount_second_compact_flow(&server).await;
// 2. Drive the conversation through compact -> resume -> fork -> compact -> resume.
let (_home, config, manager, base) = start_test_conversation(&server).await;
user_turn(&base, "hello world").await;
compact_conversation(&base).await;
user_turn(&base, "AFTER_COMPACT").await;
let base_path = fetch_conversation_path(&base, "base conversation").await;
assert!(
base_path.exists(),
"second compact test expects base path {base_path:?} to exist",
);
let resumed = resume_conversation(&manager, &config, base_path).await;
user_turn(&resumed, "AFTER_RESUME").await;
let resumed_path = fetch_conversation_path(&resumed, "resumed conversation").await;
assert!(
resumed_path.exists(),
"second compact test expects resumed path {resumed_path:?} to exist",
);
let forked = fork_conversation(&manager, &config, resumed_path, 1).await;
user_turn(&forked, "AFTER_FORK").await;
compact_conversation(&forked).await;
user_turn(&forked, "AFTER_COMPACT_2").await;
let forked_path = fetch_conversation_path(&forked, "forked conversation").await;
assert!(
forked_path.exists(),
"second compact test expects forked path {forked_path:?} to exist",
);
let resumed_again = resume_conversation(&manager, &config, forked_path).await;
user_turn(&resumed_again, AFTER_SECOND_RESUME).await;
let requests = gather_request_bodies(&server).await;
let input_after_compact = json!(requests[requests.len() - 2]["input"]);
let input_after_resume = json!(requests[requests.len() - 1]["input"]);
// test input after compact before resume is the same as input after resume
let compact_input_array = input_after_compact
.as_array()
.expect("input after compact should be an array");
let resume_input_array = input_after_resume
.as_array()
.expect("input after resume should be an array");
assert!(
compact_input_array.len() <= resume_input_array.len(),
"after-resume input should have at least as many items as after-compact"
);
assert_eq!(
compact_input_array.as_slice(),
&resume_input_array[..compact_input_array.len()]
);
// hard coded test
let prompt = requests[0]["instructions"]
.as_str()
.unwrap_or_default()
.to_string();
let user_instructions = requests[0]["input"][0]["content"][0]["text"]
.as_str()
.unwrap_or_default()
.to_string();
let environment_instructions = requests[0]["input"][1]["content"][0]["text"]
.as_str()
.unwrap_or_default()
.to_string();
let expected = json!([
{
"instructions": prompt,
"input": [
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": user_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": environment_instructions
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "You were originally given instructions from a user over one or more turns. Here were the user messages:\n\nAFTER_FORK\n\nAnother language model started to solve this problem and produced a summary of its thinking process. You also have access to the state of the tools that were used by that language model. Use this to build on the work that has already been done and avoid duplicating work. Here is the summary produced by the other language model, use the information in this summary to assist with your own analysis:\n\nSUMMARY_ONLY_CONTEXT"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_COMPACT_2"
}
]
},
{
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "AFTER_SECOND_RESUME"
}
]
}
],
}
]);
let last_request_after_2_compacts = json!([{
"instructions": requests[requests.len() -1]["instructions"],
"input": requests[requests.len() -1]["input"],
}]);
assert_eq!(expected, last_request_after_2_compacts);
}
fn normalize_line_endings(value: &mut Value) {
match value {
Value::String(text) => {
if text.contains('\r') {
*text = text.replace("\r\n", "\n").replace('\r', "\n");
}
}
Value::Array(items) => {
for item in items {
normalize_line_endings(item);
}
}
Value::Object(map) => {
for item in map.values_mut() {
normalize_line_endings(item);
}
}
_ => {}
}
}
async fn gather_request_bodies(server: &MockServer) -> Vec<Value> {
server
.received_requests()
.await
.expect("mock server should not fail")
.into_iter()
.map(|req| {
let mut value = req.body_json::<Value>().expect("valid JSON body");
normalize_line_endings(&mut value);
value
})
.collect()
}
async fn mount_initial_flow(server: &MockServer) {
let sse1 = sse(vec![
ev_assistant_message("m1", FIRST_REPLY),
ev_completed("r1"),
]);
let sse2 = sse(vec![
ev_assistant_message("m2", SUMMARY_TEXT),
ev_completed("r2"),
]);
let sse3 = sse(vec![
ev_assistant_message("m3", "AFTER_COMPACT_REPLY"),
ev_completed("r3"),
]);
let sse4 = sse(vec![ev_completed("r4")]);
let sse5 = sse(vec![ev_completed("r5")]);
let match_first = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"hello world\"")
&& !body.contains(&format!("\"text\":\"{SUMMARIZE_TRIGGER}\""))
&& !body.contains("\"text\":\"AFTER_COMPACT\"")
&& !body.contains("\"text\":\"AFTER_RESUME\"")
&& !body.contains("\"text\":\"AFTER_FORK\"")
};
mount_sse_once(server, match_first, sse1).await;
let match_compact = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(&format!("\"text\":\"{SUMMARIZE_TRIGGER}\""))
};
mount_sse_once(server, match_compact, sse2).await;
let match_after_compact = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"AFTER_COMPACT\"")
&& !body.contains("\"text\":\"AFTER_RESUME\"")
&& !body.contains("\"text\":\"AFTER_FORK\"")
};
mount_sse_once(server, match_after_compact, sse3).await;
let match_after_resume = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"AFTER_RESUME\"")
};
mount_sse_once(server, match_after_resume, sse4).await;
let match_after_fork = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains("\"text\":\"AFTER_FORK\"")
};
mount_sse_once(server, match_after_fork, sse5).await;
}
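The matchers above route each request by which marker strings the raw body does and does not contain; because later turns resend the whole transcript, the negative checks are what keep them off the earlier mocks. A hypothetical distillation of `match_first` (marker strings reused from the test, the helper itself is not in the codebase):

```rust
// Hypothetical mirror of match_first above: a body belongs to the first
// turn only if it carries the first prompt and none of the later markers.
fn matches_first_turn(body: &str) -> bool {
    body.contains("\"text\":\"hello world\"")
        && !body.contains("\"text\":\"AFTER_COMPACT\"")
        && !body.contains("\"text\":\"AFTER_RESUME\"")
        && !body.contains("\"text\":\"AFTER_FORK\"")
}

fn main() {
    assert!(matches_first_turn(r#"[{"text":"hello world"}]"#));
    // A later turn still contains the first prompt, so only the negative
    // containment checks stop it from matching the first mock.
    assert!(!matches_first_turn(
        r#"[{"text":"hello world"},{"text":"AFTER_FORK"}]"#
    ));
    println!("ok");
}
```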
async fn mount_second_compact_flow(server: &MockServer) {
let sse6 = sse(vec![
ev_assistant_message("m4", SUMMARY_TEXT),
ev_completed("r6"),
]);
let sse7 = sse(vec![ev_completed("r7")]);
let match_second_compact = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(&format!("\"text\":\"{SUMMARIZE_TRIGGER}\"")) && body.contains("AFTER_FORK")
};
mount_sse_once(server, match_second_compact, sse6).await;
let match_after_second_resume = |req: &wiremock::Request| {
let body = std::str::from_utf8(&req.body).unwrap_or("");
body.contains(&format!("\"text\":\"{AFTER_SECOND_RESUME}\""))
};
mount_sse_once(server, match_after_second_resume, sse7).await;
}
async fn start_test_conversation(
server: &MockServer,
) -> (TempDir, Config, ConversationManager, Arc<CodexConversation>) {
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let home = TempDir::new().expect("create temp dir");
let mut config = load_default_config_for_test(&home);
config.model_provider = model_provider;
let manager = ConversationManager::with_auth(CodexAuth::from_api_key("dummy"));
let NewConversation { conversation, .. } = manager
.new_conversation(config.clone())
.await
.expect("create conversation");
(home, config, manager, conversation)
}
async fn user_turn(conversation: &Arc<CodexConversation>, text: &str) {
conversation
.submit(Op::UserInput {
items: vec![InputItem::Text { text: text.into() }],
})
.await
.expect("submit user turn");
wait_for_event(conversation, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
}
async fn compact_conversation(conversation: &Arc<CodexConversation>) {
conversation
.submit(Op::Compact)
.await
.expect("compact conversation");
wait_for_event(conversation, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
}
async fn fetch_conversation_path(
conversation: &Arc<CodexConversation>,
context: &str,
) -> std::path::PathBuf {
conversation
.submit(Op::GetPath)
.await
.expect("request conversation path");
match wait_for_event(conversation, |ev| {
matches!(ev, EventMsg::ConversationPath(_))
})
.await
{
EventMsg::ConversationPath(ConversationPathResponseEvent { path, .. }) => path,
_ => panic!("expected ConversationPath event for {context}"),
}
}
async fn resume_conversation(
manager: &ConversationManager,
config: &Config,
path: std::path::PathBuf,
) -> Arc<CodexConversation> {
let auth_manager =
codex_core::AuthManager::from_auth_for_testing(CodexAuth::from_api_key("dummy"));
let NewConversation { conversation, .. } = manager
.resume_conversation_from_rollout(config.clone(), path, auth_manager)
.await
.expect("resume conversation");
conversation
}
async fn fork_conversation(
manager: &ConversationManager,
config: &Config,
path: std::path::PathBuf,
back_steps: usize,
) -> Arc<CodexConversation> {
let NewConversation { conversation, .. } = manager
.fork_conversation(back_steps, config.clone(), path)
.await
.expect("fork conversation");
conversation
}


@@ -2,8 +2,11 @@
use std::collections::HashMap;
use std::path::PathBuf;
use std::time::Duration;
use async_channel::Receiver;
use codex_core::error::CodexErr;
use codex_core::error::SandboxErr;
use codex_core::exec::ExecParams;
use codex_core::exec::SandboxType;
use codex_core::exec::StdoutStream;
@@ -170,3 +173,36 @@ async fn test_aggregated_output_interleaves_in_order() {
assert_eq!(result.aggregated_output.text, "O1\nE1\nO2\nE2\n");
assert_eq!(result.aggregated_output.truncated_after_lines, None);
}
#[tokio::test]
async fn test_exec_timeout_returns_partial_output() {
let cmd = vec![
"/bin/sh".to_string(),
"-c".to_string(),
"printf 'before\\n'; sleep 2; printf 'after\\n'".to_string(),
];
let params = ExecParams {
command: cmd,
cwd: std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
timeout_ms: Some(200),
env: HashMap::new(),
with_escalated_permissions: None,
justification: None,
};
let policy = SandboxPolicy::new_read_only_policy();
let result = process_exec_tool_call(params, SandboxType::None, &policy, &None, None).await;
let Err(CodexErr::Sandbox(SandboxErr::Timeout { output })) = result else {
panic!("expected timeout error");
};
assert_eq!(output.exit_code, 124);
assert_eq!(output.stdout.text, "before\n");
assert!(output.stderr.text.is_empty());
assert_eq!(output.aggregated_output.text, "before\n");
assert!(output.duration >= Duration::from_millis(200));
assert!(output.timed_out);
}
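The exit code asserted above follows the GNU coreutils `timeout` convention, where status 124 means the child was killed at the deadline; output written before the deadline is still delivered, which is the partial-output behavior the test checks. A quick shell analogue (assumes GNU `timeout` is installed):

```shell
# 124 is the conventional "timed out" status; the 'before' line written
# ahead of the deadline still reaches stdout.
timeout 0.2 sh -c "printf 'before\n'; sleep 2; printf 'after\n'"
echo "exit: $?"
```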


@@ -104,7 +104,8 @@ async fn fork_conversation_twice_drops_to_first_message() {
items
};
// Compute expected prefixes after each fork by truncating base rollout at nth-from-last user input.
// Compute expected prefixes after each fork by truncating base rollout
// strictly before the nth user input (0-based).
let base_items = read_items(&base_path);
let find_user_input_positions = |items: &[RolloutItem]| -> Vec<usize> {
let mut pos = Vec::new();
@@ -126,11 +127,8 @@ async fn fork_conversation_twice_drops_to_first_message() {
};
let user_inputs = find_user_input_positions(&base_items);
// After dropping last user input (n=1), cut strictly before that input if present, else empty.
let cut1 = user_inputs
.get(user_inputs.len().saturating_sub(1))
.copied()
.unwrap_or(0);
// After cutting at nth user input (n=1 → second user message), cut strictly before that input.
let cut1 = user_inputs.get(1).copied().unwrap_or(0);
let expected_after_first: Vec<RolloutItem> = base_items[..cut1].to_vec();
// After dropping again (n=1 on fork1), compute expected relative to fork1's rollout.
@@ -161,12 +159,12 @@ async fn fork_conversation_twice_drops_to_first_message() {
serde_json::to_value(&expected_after_first).unwrap()
);
// Fork again with n=1 → drops the (new) last user message, leaving only the first.
// Fork again with n=0 → drops the (new) last user message, leaving only the first.
let NewConversation {
conversation: codex_fork2,
..
} = conversation_manager
.fork_conversation(1, config_for_fork.clone(), fork1_path.clone())
.fork_conversation(0, config_for_fork.clone(), fork1_path.clone())
.await
.expect("fork 2");
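The truncation rule this test pins down, cut strictly before the nth user input with n counted 0-based, can be sketched on plain slices (function and data names hypothetical):

```rust
// Hypothetical sketch of the fork cut: keep everything strictly before the
// nth user input (0-based), or nothing if that input does not exist.
fn cut_before_nth(items: &[&str], user_positions: &[usize], n: usize) -> Vec<String> {
    let cut = user_positions.get(n).copied().unwrap_or(0);
    items[..cut].iter().map(|s| s.to_string()).collect()
}

fn main() {
    let items = ["meta", "user:a", "reply", "user:b", "reply2"];
    let user_positions = [1, 3]; // indices of the two user inputs
    // n=1 keeps the prefix before the second user message.
    assert_eq!(cut_before_nth(&items, &user_positions, 1), ["meta", "user:a", "reply"]);
    // n=0 keeps only what precedes the first user message.
    assert_eq!(cut_before_nth(&items, &user_positions, 0), ["meta"]);
    println!("ok");
}
```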


@@ -1,9 +1,9 @@
// Aggregates all former standalone integration tests as modules.
mod user_shell_cmd;
mod cli_stream;
mod client;
mod compact;
mod compact_resume_fork;
mod exec;
mod exec_stream_events;
mod fork_conversation;
@@ -11,6 +11,7 @@ mod live_cli;
mod model_overrides;
mod prompt_caching;
mod review;
mod rollout_list_find;
mod seatbelt;
mod stream_error_allows_next_turn;
mod stream_no_completed;


@@ -5,6 +5,7 @@ use codex_core::ModelProviderInfo;
use codex_core::built_in_model_providers;
use codex_core::config::Config;
use codex_core::protocol::EventMsg;
use codex_core::protocol::ExitedReviewModeEvent;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::ReviewCodeLocation;
@@ -89,8 +90,10 @@ async fn review_op_emits_lifecycle_and_review_output() {
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let closed = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExitedReviewMode(_))).await;
let review = match closed {
EventMsg::ExitedReviewMode(Some(r)) => r,
other => panic!("expected ExitedReviewMode(Some(..)), got {other:?}"),
EventMsg::ExitedReviewMode(ev) => ev
.review_output
.expect("expected ExitedReviewMode with Some(review_output)"),
other => panic!("expected ExitedReviewMode(..), got {other:?}"),
};
// Deep compare full structure using PartialEq (floats are f32 on both sides).
@@ -118,7 +121,9 @@ async fn review_op_emits_lifecycle_and_review_output() {
/// When the model returns plain text that is not JSON, ensure the child
/// lifecycle still occurs and the plain text is surfaced via
/// ExitedReviewMode(Some(..)) as the overall_explanation.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn review_op_with_plain_text_emits_review_fallback() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
@@ -151,8 +156,10 @@ async fn review_op_with_plain_text_emits_review_fallback() {
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let closed = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExitedReviewMode(_))).await;
let review = match closed {
EventMsg::ExitedReviewMode(Some(r)) => r,
other => panic!("expected ExitedReviewMode(Some(..)), got {other:?}"),
EventMsg::ExitedReviewMode(ev) => ev
.review_output
.expect("expected ExitedReviewMode with Some(review_output)"),
other => panic!("expected ExitedReviewMode(..), got {other:?}"),
};
// Expect a structured fallback carrying the plain text.
@@ -168,7 +175,9 @@ async fn review_op_with_plain_text_emits_review_fallback() {
/// When the model returns structured JSON in a review, ensure no AgentMessage
/// is emitted; the UI consumes the structured result via ExitedReviewMode.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn review_does_not_emit_agent_message_on_structured_output() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
@@ -279,7 +288,15 @@ async fn review_uses_custom_review_model_from_config() {
// Wait for completion
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let _closed = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExitedReviewMode(None))).await;
let _closed = wait_for_event(&codex, |ev| {
matches!(
ev,
EventMsg::ExitedReviewMode(ExitedReviewModeEvent {
review_output: None
})
)
})
.await;
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Assert the request body model equals the configured review model
@@ -293,7 +310,9 @@ async fn review_uses_custom_review_model_from_config() {
/// When a review session begins, it must not prepend prior chat history from
/// the parent session. The request `input` should contain only the review
/// prompt from the user.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
// Windows CI only: bump to 4 workers to prevent SSE/event starvation and test timeouts.
#[cfg_attr(windows, tokio::test(flavor = "multi_thread", worker_threads = 4))]
#[cfg_attr(not(windows), tokio::test(flavor = "multi_thread", worker_threads = 2))]
async fn review_input_isolated_from_parent_history() {
if std::env::var(CODEX_SANDBOX_NETWORK_DISABLED_ENV_VAR).is_ok() {
println!(
@@ -373,13 +392,8 @@ async fn review_input_isolated_from_parent_history() {
.await
.unwrap();
}
config.experimental_resume = Some(session_file);
let codex = new_conversation_for_server(&server, &codex_home, |cfg| {
// apply resume file
cfg.experimental_resume = config.experimental_resume.clone();
})
.await;
let codex =
resume_conversation_for_server(&server, &codex_home, session_file.clone(), |_| {}).await;
// Submit review request; it must start fresh (no parent history in `input`).
let review_prompt = "Please review only this".to_string();
@@ -394,7 +408,15 @@ async fn review_input_isolated_from_parent_history() {
.unwrap();
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let _closed = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExitedReviewMode(None))).await;
let _closed = wait_for_event(&codex, |ev| {
matches!(
ev,
EventMsg::ExitedReviewMode(ExitedReviewModeEvent {
review_output: None
})
)
})
.await;
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
// Assert the request `input` contains only the single review user message.
@@ -448,7 +470,12 @@ async fn review_history_does_not_leak_into_parent_session() {
.unwrap();
let _entered = wait_for_event(&codex, |ev| matches!(ev, EventMsg::EnteredReviewMode(_))).await;
let _closed = wait_for_event(&codex, |ev| {
matches!(ev, EventMsg::ExitedReviewMode(Some(_)))
matches!(
ev,
EventMsg::ExitedReviewMode(ExitedReviewModeEvent {
review_output: Some(_)
})
)
})
.await;
let _complete = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TaskComplete(_))).await;
@@ -540,3 +567,32 @@ where
.expect("create conversation")
.conversation
}
/// Create a conversation resuming from a rollout file, configured to talk to the provided mock server.
#[expect(clippy::expect_used)]
async fn resume_conversation_for_server<F>(
server: &MockServer,
codex_home: &TempDir,
resume_path: std::path::PathBuf,
mutator: F,
) -> Arc<CodexConversation>
where
F: FnOnce(&mut Config),
{
let model_provider = ModelProviderInfo {
base_url: Some(format!("{}/v1", server.uri())),
..built_in_model_providers()["openai"].clone()
};
let mut config = load_default_config_for_test(codex_home);
config.model_provider = model_provider;
mutator(&mut config);
let conversation_manager =
ConversationManager::with_auth(CodexAuth::from_api_key("Test API Key"));
let auth_manager =
codex_core::AuthManager::from_auth_for_testing(CodexAuth::from_api_key("Test API Key"));
conversation_manager
.resume_conversation_from_rollout(config, resume_path, auth_manager)
.await
.expect("resume conversation")
.conversation
}


@@ -0,0 +1,50 @@
#![allow(clippy::unwrap_used, clippy::expect_used)]
use std::io::Write;
use std::path::PathBuf;
use codex_core::find_conversation_path_by_id_str;
use tempfile::TempDir;
use uuid::Uuid;
/// Create sessions/YYYY/MM/DD and write a minimal rollout file containing the
/// provided conversation id in the SessionMeta line. Returns the absolute path.
fn write_minimal_rollout_with_id(codex_home: &TempDir, id: Uuid) -> PathBuf {
let sessions = codex_home.path().join("sessions/2024/01/01");
std::fs::create_dir_all(&sessions).unwrap();
let file = sessions.join(format!("rollout-2024-01-01T00-00-00-{id}.jsonl"));
let mut f = std::fs::File::create(&file).unwrap();
// Minimal first line: session_meta with the id so content search can find it
writeln!(
f,
"{}",
serde_json::json!({
"timestamp": "2024-01-01T00:00:00.000Z",
"type": "session_meta",
"payload": {
"id": id,
"timestamp": "2024-01-01T00:00:00Z",
"instructions": null,
"cwd": ".",
"originator": "test",
"cli_version": "test"
}
})
)
.unwrap();
file
}
#[tokio::test]
async fn find_locates_rollout_file_by_id() {
let home = TempDir::new().unwrap();
let id = Uuid::new_v4();
let expected = write_minimal_rollout_with_id(&home, id);
let found = find_conversation_path_by_id_str(home.path(), &id.to_string())
.await
.unwrap();
assert_eq!(found.unwrap(), expected);
}


@@ -1,112 +0,0 @@
#![cfg(unix)]
use codex_core::ConversationManager;
use codex_core::NewConversation;
use codex_core::protocol::EventMsg;
use codex_core::protocol::ExecCommandEndEvent;
use codex_core::protocol::Op;
use codex_core::protocol::TurnAbortReason;
use core_test_support::load_default_config_for_test;
use core_test_support::wait_for_event;
use std::fs;
use std::path::PathBuf;
use tempfile::TempDir;
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn user_shell_cmd_ls_and_cat_in_temp_dir() {
// No env overrides needed; test build hard-codes a hermetic zsh with empty rc.
// Create a temporary working directory with a known file.
let cwd = TempDir::new().unwrap();
let file_name = "hello.txt";
let file_path: PathBuf = cwd.path().join(file_name);
let contents = "hello from bang test\n";
fs::write(&file_path, contents).expect("write temp file");
// Load config and pin cwd to the temp dir so ls/cat operate there.
let codex_home = TempDir::new().unwrap();
let mut config = load_default_config_for_test(&codex_home);
config.cwd = cwd.path().to_path_buf();
let conversation_manager =
ConversationManager::with_auth(codex_core::CodexAuth::from_api_key("dummy"));
let NewConversation {
conversation: codex,
..
} = conversation_manager
.new_conversation(config)
.await
.expect("create new conversation");
// 1) ls should list the file
codex
.submit(Op::RunUserShellCommand {
command: "ls".to_string(),
})
.await
.unwrap();
let msg = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecCommandEnd(_))).await;
let EventMsg::ExecCommandEnd(ExecCommandEndEvent {
stdout, exit_code, ..
}) = msg
else {
unreachable!()
};
assert_eq!(exit_code, 0);
assert!(
stdout.contains(file_name),
"ls output should include {file_name}, got: {stdout:?}"
);
// 2) cat the file should return exact contents
codex
.submit(Op::RunUserShellCommand {
command: format!("cat {}", file_name),
})
.await
.unwrap();
let msg = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecCommandEnd(_))).await;
let EventMsg::ExecCommandEnd(ExecCommandEndEvent {
stdout, exit_code, ..
}) = msg
else {
unreachable!()
};
assert_eq!(exit_code, 0);
assert_eq!(stdout, contents);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn user_shell_cmd_can_be_interrupted() {
// No env overrides needed; test build hard-codes a hermetic zsh with empty rc.
// Set up isolated config and conversation.
let codex_home = TempDir::new().unwrap();
let config = load_default_config_for_test(&codex_home);
let conversation_manager =
ConversationManager::with_auth(codex_core::CodexAuth::from_api_key("dummy"));
let NewConversation {
conversation: codex,
..
} = conversation_manager
.new_conversation(config)
.await
.expect("create new conversation");
// Start a long-running command and then interrupt it.
codex
.submit(Op::RunUserShellCommand {
command: "sleep 5".to_string(),
})
.await
.unwrap();
// Wait until it has started (ExecCommandBegin), then interrupt.
let _ = wait_for_event(&codex, |ev| matches!(ev, EventMsg::ExecCommandBegin(_))).await;
codex.submit(Op::Interrupt).await.unwrap();
// Expect a TurnAborted(Interrupted) notification.
let msg = wait_for_event(&codex, |ev| matches!(ev, EventMsg::TurnAborted(_))).await;
let EventMsg::TurnAborted(ev) = msg else {
unreachable!()
};
assert_eq!(ev.reason, TurnAbortReason::Interrupted);
}


@@ -46,4 +46,6 @@ core_test_support = { path = "../core/tests/common" }
libc = "0.2"
predicates = "3"
tempfile = "3.13.0"
uuid = "1"
walkdir = "2"
wiremock = "0.6"


@@ -6,6 +6,10 @@ use std::path::PathBuf;
#[derive(Parser, Debug)]
#[command(version)]
pub struct Cli {
/// Action to perform. If omitted, runs a new non-interactive session.
#[command(subcommand)]
pub command: Option<Command>,
/// Optional image(s) to attach to the initial prompt.
#[arg(long = "image", short = 'i', value_name = "FILE", value_delimiter = ',', num_args = 1..)]
pub images: Vec<PathBuf>,
@@ -48,6 +52,10 @@ pub struct Cli {
#[arg(long = "skip-git-repo-check", default_value_t = false)]
pub skip_git_repo_check: bool,
/// Force-enable the experimental apply_patch tool even for models that do not opt into it by default.
#[arg(long = "custom-apply-patch", default_value_t = false)]
pub custom_apply_patch: bool,
#[clap(skip)]
pub config_overrides: CliConfigOverrides,
@@ -69,6 +77,28 @@ pub struct Cli {
pub prompt: Option<String>,
}
#[derive(Debug, clap::Subcommand)]
pub enum Command {
/// Resume a previous session by id or pick the most recent with --last.
Resume(ResumeArgs),
}
#[derive(Parser, Debug)]
pub struct ResumeArgs {
/// Conversation/session id (UUID). When provided, resumes this session.
/// If omitted, use --last to pick the most recent recorded session.
#[arg(value_name = "SESSION_ID")]
pub session_id: Option<String>,
/// Resume the most recent recorded session (newest) without specifying an id.
#[arg(long = "last", default_value_t = false, conflicts_with = "session_id")]
pub last: bool,
/// Prompt to send after resuming the session. If `-` is used, read from stdin.
#[arg(value_name = "PROMPT")]
pub prompt: Option<String>,
}
#[derive(Debug, Clone, Copy, Default, PartialEq, Eq, ValueEnum)]
#[value(rename_all = "kebab-case")]
pub enum Color {


@@ -278,7 +278,6 @@ impl EventProcessor for EventProcessorWithHumanOutput {
command,
cwd,
parsed_cmd: _,
..
}) => {
self.call_id_to_command.insert(
call_id,


@@ -30,11 +30,14 @@ use tracing::error;
use tracing::info;
use tracing_subscriber::EnvFilter;
use crate::cli::Command as ExecCommand;
use crate::event_processor::CodexStatus;
use crate::event_processor::EventProcessor;
use codex_core::find_conversation_path_by_id_str;
pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
let Cli {
command,
images,
model: model_cli_arg,
oss,
@@ -43,6 +46,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
dangerously_bypass_approvals_and_sandbox,
cwd,
skip_git_repo_check,
custom_apply_patch,
color,
last_message_file,
json: json_mode,
@@ -51,8 +55,15 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
config_overrides,
} = cli;
// Determine the prompt based on CLI arg and/or stdin.
let prompt = match prompt {
// Determine the prompt source (parent or subcommand) and read from stdin if needed.
let prompt_arg = match &command {
// Allow prompt before the subcommand by falling back to the parent-level prompt
// when the Resume subcommand did not provide its own prompt.
Some(ExecCommand::Resume(args)) => args.prompt.clone().or(prompt),
None => prompt,
};
let prompt = match prompt_arg {
Some(p) if p != "-" => p,
// Either `-` was passed or no positional arg.
maybe_dash => {
@@ -148,7 +159,7 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
codex_linux_sandbox_exe,
base_instructions: None,
include_plan_tool: None,
include_apply_patch_tool: None,
include_apply_patch_tool: custom_apply_patch.then_some(true),
include_view_image_tool: None,
show_raw_agent_reasoning: oss.then_some(true),
tools_web_search_request: None,
@@ -190,11 +201,29 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
let conversation_manager =
ConversationManager::new(AuthManager::shared(config.codex_home.clone()));
// Handle resume subcommand by resolving a rollout path and using explicit resume API.
let NewConversation {
conversation_id: _,
conversation,
session_configured,
} = conversation_manager.new_conversation(config).await?;
} = if let Some(ExecCommand::Resume(args)) = command {
let resume_path = resolve_resume_path(&config, &args).await?;
if let Some(path) = resume_path {
conversation_manager
.resume_conversation_from_rollout(
config.clone(),
path,
AuthManager::shared(config.codex_home.clone()),
)
.await?
} else {
conversation_manager.new_conversation(config).await?
}
} else {
conversation_manager.new_conversation(config).await?
};
info!("Codex initialized with event: {session_configured:?}");
let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel::<Event>();
@@ -279,3 +308,23 @@ pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> any
Ok(())
}
async fn resolve_resume_path(
config: &Config,
args: &crate::cli::ResumeArgs,
) -> anyhow::Result<Option<PathBuf>> {
if args.last {
match codex_core::RolloutRecorder::list_conversations(&config.codex_home, 1, None).await {
Ok(page) => Ok(page.items.first().map(|it| it.path.clone())),
Err(e) => {
error!("Error listing conversations: {e}");
Ok(None)
}
}
} else if let Some(id_str) = args.session_id.as_deref() {
let path = find_conversation_path_by_id_str(&config.codex_home, id_str).await?;
Ok(path)
} else {
Ok(None)
}
}
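Stripped of clap, the prompt resolution in `run_main` above is just `Option::or`: a positional prompt given to `resume` wins, otherwise the parent-level prompt is used. A hypothetical distillation:

```rust
// Hypothetical reduction of the prompt fallback in run_main: the resume
// subcommand's prompt takes precedence over the parent-level prompt.
fn effective_prompt(resume_prompt: Option<&str>, parent_prompt: Option<&str>) -> Option<String> {
    resume_prompt.or(parent_prompt).map(str::to_owned)
}

fn main() {
    assert_eq!(effective_prompt(Some("sub"), Some("parent")).as_deref(), Some("sub"));
    assert_eq!(effective_prompt(None, Some("parent")).as_deref(), Some("parent"));
    assert_eq!(effective_prompt(None, None), None);
    println!("ok");
}
```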


@@ -0,0 +1,10 @@
event: response.created
data: {"type":"response.created","response":{"id":"resp1"}}
event: response.output_item.done
data: {"type":"response.output_item.done","item":{"type":"message","role":"assistant","content":[{"type":"output_text","text":"fixture hello"}]}}
event: response.completed
data: {"type":"response.completed","response":{"id":"resp1","output":[]}}


@@ -1,4 +1,5 @@
// Aggregates all former standalone integration tests as modules.
mod apply_patch;
mod common;
mod resume;
mod sandbox;


@@ -0,0 +1,267 @@
#![allow(clippy::unwrap_used, clippy::expect_used)]
use anyhow::Context;
use assert_cmd::prelude::*;
use serde_json::Value;
use std::process::Command;
use tempfile::TempDir;
use uuid::Uuid;
use walkdir::WalkDir;
/// Utility: scan the sessions dir for a rollout file that contains `marker`
/// in any response_item.message.content entry. Returns the absolute path.
fn find_session_file_containing_marker(
sessions_dir: &std::path::Path,
marker: &str,
) -> Option<std::path::PathBuf> {
for entry in WalkDir::new(sessions_dir) {
let entry = match entry {
Ok(e) => e,
Err(_) => continue,
};
if !entry.file_type().is_file() {
continue;
}
if !entry.file_name().to_string_lossy().ends_with(".jsonl") {
continue;
}
let path = entry.path();
let Ok(content) = std::fs::read_to_string(path) else {
continue;
};
// Skip the first meta line and scan remaining JSONL entries.
let mut lines = content.lines();
if lines.next().is_none() {
continue;
}
for line in lines {
if line.trim().is_empty() {
continue;
}
let Ok(item): Result<Value, _> = serde_json::from_str(line) else {
continue;
};
if item.get("type").and_then(|t| t.as_str()) == Some("response_item")
&& let Some(payload) = item.get("payload")
&& payload.get("type").and_then(|t| t.as_str()) == Some("message")
&& payload
.get("content")
.map(|c| c.to_string())
.unwrap_or_default()
.contains(marker)
{
return Some(path.to_path_buf());
}
}
}
None
}
/// Extract the conversation UUID from the first SessionMeta line in the rollout file.
fn extract_conversation_id(path: &std::path::Path) -> String {
let content = std::fs::read_to_string(path).unwrap();
let mut lines = content.lines();
let meta_line = lines.next().expect("missing meta line");
let meta: Value = serde_json::from_str(meta_line).expect("invalid meta json");
meta.get("payload")
.and_then(|p| p.get("id"))
.and_then(|v| v.as_str())
.unwrap_or_default()
.to_string()
}
#[test]
fn exec_resume_last_appends_to_existing_file() -> anyhow::Result<()> {
let home = TempDir::new()?;
let fixture = std::path::Path::new(env!("CARGO_MANIFEST_DIR"))
.join("tests/fixtures/cli_responses_fixture.sse");
// 1) First run: create a session with a unique marker in the content.
let marker = format!("resume-last-{}", Uuid::new_v4());
let prompt = format!("echo {marker}");
Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?
.env("CODEX_HOME", home.path())
.env("OPENAI_API_KEY", "dummy")
.env("CODEX_RS_SSE_FIXTURE", &fixture)
.env("OPENAI_BASE_URL", "http://unused.local")
.arg("--skip-git-repo-check")
.arg("-C")
.arg(env!("CARGO_MANIFEST_DIR"))
.arg(&prompt)
.assert()
.success();
// Find the created session file containing the marker.
let sessions_dir = home.path().join("sessions");
let path = find_session_file_containing_marker(&sessions_dir, &marker)
.expect("no session file found after first run");
// 2) Second run: resume the most recent file with a new marker.
let marker2 = format!("resume-last-2-{}", Uuid::new_v4());
let prompt2 = format!("echo {marker2}");
let mut binding = assert_cmd::Command::cargo_bin("codex-exec")
.context("should find binary for codex-exec")?;
let cmd = binding
.env("CODEX_HOME", home.path())
.env("OPENAI_API_KEY", "dummy")
.env("CODEX_RS_SSE_FIXTURE", &fixture)
.env("OPENAI_BASE_URL", "http://unused.local")
.arg("--skip-git-repo-check")
.arg("-C")
.arg(env!("CARGO_MANIFEST_DIR"))
.arg(&prompt2)
.arg("resume")
.arg("--last");
cmd.assert().success();
// Ensure the same file was updated and contains both markers.
let resumed_path = find_session_file_containing_marker(&sessions_dir, &marker2)
.expect("no resumed session file containing marker2");
assert_eq!(
resumed_path, path,
"resume --last should append to existing file"
);
let content = std::fs::read_to_string(&resumed_path)?;
assert!(content.contains(&marker));
assert!(content.contains(&marker2));
Ok(())
}
#[test]
fn exec_resume_by_id_appends_to_existing_file() -> anyhow::Result<()> {
let home = TempDir::new()?;
let fixture = std::path::Path::new(env!("CARGO_MANIFEST_DIR"))
        .join("tests/fixtures/cli_responses_fixture.sse");

    // 1) First run: create a session
    let marker = format!("resume-by-id-{}", Uuid::new_v4());
    let prompt = format!("echo {marker}");
    Command::cargo_bin("codex-exec")
        .context("should find binary for codex-exec")?
        .env("CODEX_HOME", home.path())
        .env("OPENAI_API_KEY", "dummy")
        .env("CODEX_RS_SSE_FIXTURE", &fixture)
        .env("OPENAI_BASE_URL", "http://unused.local")
        .arg("--skip-git-repo-check")
        .arg("-C")
        .arg(env!("CARGO_MANIFEST_DIR"))
        .arg(&prompt)
        .assert()
        .success();

    let sessions_dir = home.path().join("sessions");
    let path = find_session_file_containing_marker(&sessions_dir, &marker)
        .expect("no session file found after first run");
    let session_id = extract_conversation_id(&path);
    assert!(
        !session_id.is_empty(),
        "missing conversation id in meta line"
    );

    // 2) Resume by id
    let marker2 = format!("resume-by-id-2-{}", Uuid::new_v4());
    let prompt2 = format!("echo {marker2}");
    let mut binding = assert_cmd::Command::cargo_bin("codex-exec")
        .context("should find binary for codex-exec")?;
    let cmd = binding
        .env("CODEX_HOME", home.path())
        .env("OPENAI_API_KEY", "dummy")
        .env("CODEX_RS_SSE_FIXTURE", &fixture)
        .env("OPENAI_BASE_URL", "http://unused.local")
        .arg("--skip-git-repo-check")
        .arg("-C")
        .arg(env!("CARGO_MANIFEST_DIR"))
        .arg(&prompt2)
        .arg("resume")
        .arg(&session_id);
    cmd.assert().success();

    let resumed_path = find_session_file_containing_marker(&sessions_dir, &marker2)
        .expect("no resumed session file containing marker2");
    assert_eq!(
        resumed_path, path,
        "resume by id should append to existing file"
    );
    let content = std::fs::read_to_string(&resumed_path)?;
    assert!(content.contains(&marker));
    assert!(content.contains(&marker2));
    Ok(())
}

#[test]
fn exec_resume_preserves_cli_configuration_overrides() -> anyhow::Result<()> {
    let home = TempDir::new()?;
    let fixture = std::path::Path::new(env!("CARGO_MANIFEST_DIR"))
        .join("tests/fixtures/cli_responses_fixture.sse");
    let marker = format!("resume-config-{}", Uuid::new_v4());
    let prompt = format!("echo {marker}");
    Command::cargo_bin("codex-exec")
        .context("should find binary for codex-exec")?
        .env("CODEX_HOME", home.path())
        .env("OPENAI_API_KEY", "dummy")
        .env("CODEX_RS_SSE_FIXTURE", &fixture)
        .env("OPENAI_BASE_URL", "http://unused.local")
        .arg("--skip-git-repo-check")
        .arg("--sandbox")
        .arg("workspace-write")
        .arg("--model")
        .arg("gpt-5")
        .arg("-C")
        .arg(env!("CARGO_MANIFEST_DIR"))
        .arg(&prompt)
        .assert()
        .success();

    let sessions_dir = home.path().join("sessions");
    let path = find_session_file_containing_marker(&sessions_dir, &marker)
        .expect("no session file found after first run");

    let marker2 = format!("resume-config-2-{}", Uuid::new_v4());
    let prompt2 = format!("echo {marker2}");
    let output = Command::cargo_bin("codex-exec")
        .context("should find binary for codex-exec")?
        .env("CODEX_HOME", home.path())
        .env("OPENAI_API_KEY", "dummy")
        .env("CODEX_RS_SSE_FIXTURE", &fixture)
        .env("OPENAI_BASE_URL", "http://unused.local")
        .arg("--skip-git-repo-check")
        .arg("--sandbox")
        .arg("workspace-write")
        .arg("--model")
        .arg("gpt-5-high")
        .arg("-C")
        .arg(env!("CARGO_MANIFEST_DIR"))
        .arg(&prompt2)
        .arg("resume")
        .arg("--last")
        .output()
        .context("resume run should succeed")?;
    assert!(output.status.success(), "resume run failed: {output:?}");

    let stdout = String::from_utf8(output.stdout)?;
    assert!(
        stdout.contains("model: gpt-5-high"),
        "stdout missing model override: {stdout}"
    );
    assert!(
        stdout.contains("sandbox: workspace-write"),
        "stdout missing sandbox override: {stdout}"
    );

    let resumed_path = find_session_file_containing_marker(&sessions_dir, &marker2)
        .expect("no resumed session file containing marker2");
    assert_eq!(resumed_path, path, "resume should append to same file");
    let content = std::fs::read_to_string(&resumed_path)?;
    assert!(content.contains(&marker));
    assert!(content.contains(&marker2));
    Ok(())
}
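Both tests above lean on a `find_session_file_containing_marker` helper whose definition is not shown in this diff. A minimal sketch of what such a helper could look like (the name matches the tests, but this implementation is an assumption, not the crate's actual code) is a recursive walk of the sessions directory that returns the first file whose contents include the marker:

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Hypothetical stand-in for the helper used by the tests above:
/// recursively search `dir` for the first file whose contents contain `marker`.
fn find_session_file_containing_marker(dir: &Path, marker: &str) -> Option<PathBuf> {
    let entries = fs::read_dir(dir).ok()?;
    for entry in entries.flatten() {
        let path = entry.path();
        if path.is_dir() {
            // Session rollouts live under date-based subdirectories, so recurse.
            if let Some(found) = find_session_file_containing_marker(&path, marker) {
                return Some(found);
            }
        } else if let Ok(contents) = fs::read_to_string(&path) {
            if contents.contains(marker) {
                return Some(path);
            }
        }
    }
    None
}

fn main() {
    // Build a tiny fixture tree and confirm the marker is located.
    let root = std::env::temp_dir().join("find-marker-demo");
    let nested = root.join("2025").join("09");
    fs::create_dir_all(&nested).unwrap();
    let file = nested.join("rollout-demo.jsonl");
    fs::write(&file, "echo resume-by-id-1234\n").unwrap();

    let found = find_session_file_containing_marker(&root, "resume-by-id-1234");
    assert_eq!(found, Some(file));
}
```

Because the search stops at the first match, the tests rely on each marker being a fresh UUID so it can only appear in one rollout file.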

View File

@@ -121,7 +121,7 @@ async fn test_writable_root() {
}
#[tokio::test]
-#[should_panic(expected = "Sandbox(Timeout)")]
+#[should_panic(expected = "Sandbox(Timeout")]
async fn test_timeout() {
run_cmd(&["sleep", "2"], &[], 50).await;
}
@@ -156,26 +156,27 @@ async fn assert_network_blocked(cmd: &[&str]) {
     )
     .await;
-    let (exit_code, stdout, stderr) = match result {
-        Ok(output) => (output.exit_code, output.stdout.text, output.stderr.text),
-        Err(CodexErr::Sandbox(SandboxErr::Denied(exit_code, stdout, stderr))) => {
-            (exit_code, stdout, stderr)
-        }
+    let output = match result {
+        Ok(output) => output,
+        Err(CodexErr::Sandbox(SandboxErr::Denied { output })) => *output,
         _ => {
             panic!("expected sandbox denied error, got: {result:?}");
         }
     };
-    dbg!(&stderr);
-    dbg!(&stdout);
-    dbg!(&exit_code);
+    dbg!(&output.stderr.text);
+    dbg!(&output.stdout.text);
+    dbg!(&output.exit_code);
     // A completely missing binary exits with 127. Anything else should also
     // be nonzero (EPERM from seccomp will usually bubble up as 1, 2, 13…)
     // If—*and only if*—the command exits 0 we consider the sandbox breached.
-    if exit_code == 0 {
-        panic!("Network sandbox FAILED - {cmd:?} exited 0\nstdout:\n{stdout}\nstderr:\n{stderr}",);
+    if output.exit_code == 0 {
+        panic!(
+            "Network sandbox FAILED - {cmd:?} exited 0\nstdout:\n{}\nstderr:\n{}",
+            output.stdout.text, output.stderr.text
+        );
     }
 }
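The comment in the hunk above encodes a simple decision rule: exit code 127 means the binary was missing, any other nonzero code (e.g. 1, 2, or 13 when seccomp surfaces EPERM) means the sandbox held, and only exit 0 counts as a breach. As a standalone sketch of that rule (the enum and function names here are illustrative, not from the crate):

```rust
/// Illustrative classification of a sandboxed command's exit code,
/// mirroring the rule in the test's comment: only exit 0 is a breach.
#[derive(Debug, PartialEq)]
enum SandboxVerdict {
    /// The command succeeded, so network access was NOT blocked.
    Breached,
    /// The binary was not found at all (conventional shell exit code 127).
    MissingBinary,
    /// The command ran but failed, e.g. EPERM from seccomp as 1, 2, 13…
    Blocked(i32),
}

fn classify_exit(exit_code: i32) -> SandboxVerdict {
    match exit_code {
        0 => SandboxVerdict::Breached,
        127 => SandboxVerdict::MissingBinary,
        code => SandboxVerdict::Blocked(code),
    }
}

fn main() {
    assert_eq!(classify_exit(0), SandboxVerdict::Breached);
    assert_eq!(classify_exit(127), SandboxVerdict::MissingBinary);
    assert_eq!(classify_exit(13), SandboxVerdict::Blocked(13));
}
```

Treating `MissingBinary` as a pass is deliberate: a command that cannot even start certainly did not reach the network.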

View File

@@ -423,32 +423,41 @@ impl CodexMessageProcessor {
         // Determine whether auth is required based on the active model provider.
         // If a custom provider is configured with `requires_openai_auth == false`,
         // then no auth step is required; otherwise, default to requiring auth.
-        let requires_openai_auth = Some(self.config.model_provider.requires_openai_auth);
+        let requires_openai_auth = self.config.model_provider.requires_openai_auth;
-        let response = match self.auth_manager.auth() {
-            Some(auth) => {
-                let (reported_auth_method, token_opt) = match auth.get_token().await {
-                    Ok(token) if !token.is_empty() => {
-                        let tok = if include_token { Some(token) } else { None };
-                        (Some(auth.mode), tok)
-                    }
-                    Ok(_) => (None, None),
-                    Err(err) => {
-                        tracing::warn!("failed to get token for auth status: {err}");
-                        (None, None)
-                    }
-                };
-                codex_protocol::mcp_protocol::GetAuthStatusResponse {
-                    auth_method: reported_auth_method,
-                    auth_token: token_opt,
-                    requires_openai_auth,
-                }
-            }
-            None => codex_protocol::mcp_protocol::GetAuthStatusResponse {
-                auth_method: None,
-                auth_token: None,
-                requires_openai_auth,
-            },
-        };
+        let response = if !requires_openai_auth {
+            codex_protocol::mcp_protocol::GetAuthStatusResponse {
+                auth_method: None,
+                auth_token: None,
+                requires_openai_auth: Some(false),
+            }
+        } else {
+            match self.auth_manager.auth() {
+                Some(auth) => {
+                    let auth_mode = auth.mode;
+                    let (reported_auth_method, token_opt) = match auth.get_token().await {
+                        Ok(token) if !token.is_empty() => {
+                            let tok = if include_token { Some(token) } else { None };
+                            (Some(auth_mode), tok)
+                        }
+                        Ok(_) => (None, None),
+                        Err(err) => {
+                            tracing::warn!("failed to get token for auth status: {err}");
+                            (None, None)
+                        }
+                    };
+                    codex_protocol::mcp_protocol::GetAuthStatusResponse {
+                        auth_method: reported_auth_method,
+                        auth_token: token_opt,
+                        requires_openai_auth: Some(true),
+                    }
+                }
+                None => codex_protocol::mcp_protocol::GetAuthStatusResponse {
+                    auth_method: None,
+                    auth_token: None,
+                    requires_openai_auth: Some(true),
+                },
+            }
+        };
self.outgoing.send_response(request_id, response).await;
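The fix above can be condensed into a pure function for illustration. The types here are simplified stand-ins (`AuthStatus` is hypothetical, not the crate's `GetAuthStatusResponse`), but the invariant is the one the PR establishes: when the provider does not require OpenAI auth, the reported auth method and token must always be `None`, even if a valid token exists on disk.

```rust
/// Simplified stand-in for GetAuthStatusResponse (illustrative only).
#[derive(Debug, PartialEq)]
struct AuthStatus {
    auth_method: Option<&'static str>,
    auth_token: Option<String>,
    requires_openai_auth: Option<bool>,
}

/// Mirror of the gating logic above: a custom provider with
/// `requires_openai_auth == false` must never report an auth method.
fn auth_status(
    requires_openai_auth: bool,
    token: Option<String>,
    include_token: bool,
) -> AuthStatus {
    if !requires_openai_auth {
        // Custom provider: ignore any stale auth.json state entirely.
        return AuthStatus {
            auth_method: None,
            auth_token: None,
            requires_openai_auth: Some(false),
        };
    }
    match token {
        Some(t) if !t.is_empty() => AuthStatus {
            auth_method: Some("apikey"),
            auth_token: if include_token { Some(t) } else { None },
            requires_openai_auth: Some(true),
        },
        _ => AuthStatus {
            auth_method: None,
            auth_token: None,
            requires_openai_auth: Some(true),
        },
    }
}

fn main() {
    // Even with a valid token on disk, a custom provider reports no auth method.
    let s = auth_status(false, Some("sk-test-key".into()), true);
    assert_eq!(s.auth_method, None);
    assert_eq!(s.requires_openai_auth, Some(false));
}
```

This is exactly the edge case from the PR description: a leftover ChatGPT `auth.json` plus a custom provider in `config.toml` previously made the client report a ChatGPT login.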

View File

@@ -15,11 +15,17 @@ use tokio::time::timeout;
const DEFAULT_READ_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
// Helper to create a config.toml; mirrors create_conversation.rs
-fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
+fn create_config_toml_custom_provider(
+    codex_home: &Path,
+    requires_openai_auth: bool,
+) -> std::io::Result<()> {
     let config_toml = codex_home.join("config.toml");
-    std::fs::write(
-        config_toml,
+    let requires_line = if requires_openai_auth {
+        "requires_openai_auth = true\n"
+    } else {
+        ""
+    };
+    let contents = format!(
         r#"
 model = "mock-model"
 approval_policy = "never"
@@ -33,6 +39,20 @@ base_url = "http://127.0.0.1:0/v1"
 wire_api = "chat"
 request_max_retries = 0
 stream_max_retries = 0
+{requires_line}
 "#
-    )
+    );
+    std::fs::write(config_toml, contents)
 }
+fn create_config_toml(codex_home: &Path) -> std::io::Result<()> {
+    let config_toml = codex_home.join("config.toml");
+    std::fs::write(
+        config_toml,
+        r#"
+model = "mock-model"
+approval_policy = "never"
+sandbox_mode = "danger-full-access"
+"#,
+    )
+}
@@ -124,6 +144,47 @@ async fn get_auth_status_with_api_key() {
assert_eq!(status.auth_token, Some("sk-test-key".to_string()));
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_auth_status_with_api_key_when_auth_not_required() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));
create_config_toml_custom_provider(codex_home.path(), false)
.unwrap_or_else(|err| panic!("write config.toml: {err}"));
let mut mcp = McpProcess::new(codex_home.path())
.await
.expect("spawn mcp process");
timeout(DEFAULT_READ_TIMEOUT, mcp.initialize())
.await
.expect("init timeout")
.expect("init failed");
login_with_api_key_via_request(&mut mcp, "sk-test-key").await;
let request_id = mcp
.send_get_auth_status_request(GetAuthStatusParams {
include_token: Some(true),
refresh_token: Some(false),
})
.await
.expect("send getAuthStatus");
let resp: JSONRPCResponse = timeout(
DEFAULT_READ_TIMEOUT,
mcp.read_stream_until_response_message(RequestId::Integer(request_id)),
)
.await
.expect("getAuthStatus timeout")
.expect("getAuthStatus response");
let status: GetAuthStatusResponse = to_response(resp).expect("deserialize status");
assert_eq!(status.auth_method, None, "expected no auth method");
assert_eq!(status.auth_token, None, "expected no token");
assert_eq!(
status.requires_openai_auth,
Some(false),
"requires_openai_auth should be false",
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_auth_status_with_api_key_no_include_token() {
let codex_home = TempDir::new().unwrap_or_else(|e| panic!("create tempdir: {e}"));

View File

@@ -15,6 +15,7 @@ use crate::config_types::ReasoningSummary as ReasoningSummaryConfig;
 use crate::custom_prompts::CustomPrompt;
 use crate::mcp_protocol::ConversationId;
 use crate::message_history::HistoryEntry;
+use crate::models::ContentItem;
 use crate::models::ResponseItem;
 use crate::num_format::format_with_separators;
 use crate::parse_command::ParsedCommand;
@@ -172,16 +173,6 @@ pub enum Op {
     /// Request to shut down codex instance.
     Shutdown,
-    /// Execute a user-initiated one-off shell command (triggered by "!cmd").
-    ///
-    /// The command string is executed using the user's default shell and may
-    /// include shell syntax (pipes, redirects, etc.). Output is streamed via
-    /// `ExecCommand*` events and the UI regains control upon `TaskComplete`.
-    RunUserShellCommand {
-        /// The raw command string after '!'
-        command: String,
-    },
 }
/// Determines the conditions under which the user is consulted to approve
@@ -423,6 +414,7 @@ pub struct Event {
 }
 /// Response event from the agent
+/// NOTE: Make sure none of these values have optional types, as it will mess up the extension code-gen.
 #[derive(Debug, Clone, Deserialize, Serialize, Display, TS)]
 #[serde(tag = "type", rename_all = "snake_case")]
 #[strum(serialize_all = "snake_case")]
@@ -523,7 +515,12 @@ pub enum EventMsg {
     EnteredReviewMode(ReviewRequest),
     /// Exited review mode with an optional final result to apply.
-    ExitedReviewMode(Option<ReviewOutputEvent>),
+    ExitedReviewMode(ExitedReviewModeEvent),
 }
+#[derive(Debug, Clone, Deserialize, Serialize, TS)]
+pub struct ExitedReviewModeEvent {
+    pub review_output: Option<ReviewOutputEvent>,
+}
 // Individual event payload types matching each `EventMsg` variant.
@@ -872,26 +869,7 @@ impl InitialHistory {
             InitialHistory::Forked(items) => items.clone(),
         }
     }
-    pub fn get_response_items(&self) -> Vec<ResponseItem> {
-        match self {
-            InitialHistory::New => Vec::new(),
-            InitialHistory::Resumed(resumed) => resumed
-                .history
-                .iter()
-                .filter_map(|ri| match ri {
-                    RolloutItem::ResponseItem(item) => Some(item.clone()),
-                    _ => None,
-                })
-                .collect(),
-            InitialHistory::Forked(items) => items
-                .iter()
-                .filter_map(|ri| match ri {
-                    RolloutItem::ResponseItem(item) => Some(item.clone()),
-                    _ => None,
-                })
-                .collect(),
-        }
-    }
     pub fn get_event_msgs(&self) -> Option<Vec<EventMsg>> {
         match self {
             InitialHistory::New => None,
@@ -951,6 +929,18 @@ pub struct CompactedItem {
     pub message: String,
 }
+impl From<CompactedItem> for ResponseItem {
+    fn from(value: CompactedItem) -> Self {
+        ResponseItem::Message {
+            id: None,
+            role: "assistant".to_string(),
+            content: vec![ContentItem::OutputText {
+                text: value.message,
+            }],
+        }
+    }
+}
 #[derive(Serialize, Deserialize, Clone, Debug, TS)]
 pub struct TurnContextItem {
     pub cwd: PathBuf,
@@ -1042,10 +1032,6 @@ pub struct ExecCommandBeginEvent {
     /// The command's working directory if not the default cwd for the agent.
     pub cwd: PathBuf,
     pub parsed_cmd: Vec<ParsedCommand>,
-    /// True when this exec was initiated directly by the user (e.g. bang command),
-    /// not by the agent/model. Defaults to false for backwards compatibility.
-    #[serde(default)]
-    pub user_initiated_shell_command: bool,
 }
 #[derive(Debug, Clone, Deserialize, Serialize, TS)]
@@ -1312,19 +1298,4 @@ mod tests {
         let deserialized: ExecCommandOutputDeltaEvent = serde_json::from_str(&serialized).unwrap();
         assert_eq!(deserialized, event);
     }
-    #[test]
-    fn serialize_run_user_shell_command_op() {
-        let op = Op::RunUserShellCommand {
-            command: "echo hi".to_string(),
-        };
-        let value = serde_json::to_value(op).unwrap();
-        assert_eq!(
-            value,
-            json!({
-                "type": "run_user_shell_command",
-                "command": "echo hi",
-            })
-        );
-    }
 }

View File

@@ -0,0 +1,27 @@
▒▒▒▒▒██████████▒▒
█▒█░▓█ ██░▒▒░▓▓▓░░ █░███▒
▒▓░▒█ █░█▒░██░░█▒█░ █████▒ █▒
██░▓██▒██▓██ ▓░░ ██▓▒ ██
▒█░▓██▒▓▓ █ █░█▒██
█░░▓▓█▓░ █▒█░░▒▒▒
▓▓░▓ ░█ ░ ▒▒▒█ ▒▓▒░█▒░
█▓░░█ ░░ █▒▒█░▓░░ ░█▒▒░█▒░
██░░▒░█ ░█▓▒▒▒▒▓░ ███░▒▒█
░▒ ░░▒░░ █▒░█▒▓░░▒▓ ▓░░░░ ░
▓░▓░ ▒░ █▒▓▓ █░▓ ░░░▓░
▒▒ ▓░ █▒▓░░ ▓█ ▒░░░░
░░░░ ▓ █░▓▓ ▓░░ ▒▒▒█▒██████████ ░▓▓░ ░
█░▓ ░█ ▒▓░░██ ▓█░ ▓░░▒▒██████ ░▓▒░▒░
▒▓░▒▓ ░▒ █░▓░ ▒░░ ░ ▓▓▓▓▓▓ ░ ░░█░░
▒▒░░▒▒░█ ████▓ █▓░▓▓ ▓▓
▒▒▒░ ▒▒▒ ▓█▓█▓░▓▓
▒░░░█ █░▒▒ ▒░▒█░█▒░█
████░█ ████ ▓▒██░███▓▓█
░█░░█ ░▒███▒░█▒▒███████░█░███▒██░
█ ███▓█░░█▒░▓ ██▓░░▒██ ▒██
███▓░█░░▓▒▒▒▒▒▒██▓██░
░░ █ ░

View File

@@ -0,0 +1,27 @@
▒▒▒▒▓██████████▒▒
█▒▓░▓█ ██ ▒█░▓▓▓ █ █░ ██▒
▒▓░░█░█░▓▒░██░░█▒░░ ████▓▒ █▒
█░█▓██▒█░▓█ █ █▒▓▒ ██
▒█░▓█ ▒▓▓█ ███░█▒██
█░░▓▓▒▓░ ▒▓░░▒▒▒
▓░░█ ░█ ▒▓ ░▒▒█ █▓█░██░
█▓░░█▒░░ █▒▒▒▒▓▒▓ ▓▒█░░▒░
██░░▓░█ ░░░░▒▒▒▒ ░ ██░░▒█
░█ ░░ ░ █▒░██▒▓░ ░░░░ ░
█░▓░█▒░ █▒▓░█ ▓█▓ ▒░░░█░
▒▓ █░█ ▒▓░░ ▓▓█░ ░░ ░
░░░░█ █▒ ▓░▓▓▓ ▓▓░░ ▒▒▒█▒██▒███████ █ ▓░ ░
█░█ ░▒ ▓░░███▓█ █ ░▒██ ███ █ ░▓▒░▒░
▒█░▒▒ ░█ █░▓█ ▒░ ▓██▓██▒░░░▓▒▒██ ▓▓▓░░░░
▒▒▒░█▒░▒ ████ ░ █▓░▓░ ▓▓
█▒▒░▒ ▒█▒ ▒█▓█▓█▓▓
▒▒░░█▓█▓▒▒ ▒▓▓█░█▒░█
███░░▒ ████ ▒▒██░██░░▓█
█░█░░█ ████░░█▒▒██████░░▓░██ ▒██░
░ ███▓▒▓░░▓█░ ███░█▒██ █▒██
██░░██▒▓▒▒▒▒▒▒██▓██░
░░ █ ░

View File

@@ -0,0 +1,27 @@
▒▒▒▒▒██████████▒▒
▒▒▓░██ ░█░▒█░▓▓▓ ▒ █░ ██▒
▒▓░░███▒▓▓███░█▒▒░░ ██▒█▓▒ ▒█
█░░█░ ▒▒█░█ ▓ █ █▒▓▒ ██
██░▓▓░█░█ █▒█░████
▓░░█ ▒░█ ██▒█ █▒▒░░▒▒▒
▓░▓███░ █░ █▒█▓ ▒█░█▒▒
▓░▓░ █░ ▒░ ▒ ██ ▒▒░▓▒░
░░░▒ ░░░ ░░▓░█▒▒ ▒█░█░░
░░░░ ░░█ █▒ ░▒░▒ █ ▒░░ ░
░░▒░ ░ █░░▓ ▓░▓ ░░░▒░
█░▓▓ ▒█ ▒█░░░ ▒░▒ ░ ░░░░░░
██░░ ░░ ▓▓▓█░ ░▓ ▓▒█▒█▒██▒███████ ▒█░░ ░
█░▒░ ▓░ ▒░░▓░▓▒░█ ▓▒░░██████▒ ░▒▒░▒░
░▒░▒ ▒▒▒ ▒ █▓▓ ██ ▒▒█░██░░░░▓▒▒▓█ ██░░░▓█
▒░░▓ ▒▒▒ ▒███ ▓ ▓▓██▓
▓░░▒ █░░▒ ▒▓▓█▓███
▓█░░▒ ▒██▒ ▒▓░███▒░█
▒ ▒▒█▒ ▒░███▒▓░ ▒▒██░██░░▓█
░█░█▓ ██░██░░█▒▒███████░█░██ ▒▒█░
░░░██▒▒░░█▒█▓░░█░░░▒██ █▒░█
▒█░░░█▒▒▒▒▒▒▒▒██▓██░
░░ ░

View File

@@ -0,0 +1,27 @@
▒▒▒▒███████▒██▒
▒█▓░████ ▒▒██▒ ▓░▒ █░███▒
▒█░▓░█░█▒▓██░░█▒▒█████▒▒▓█▒█ ██
█▓█░█▒▒░█░█ ▓ █░██▒█ █▒
▒░▓░ ▓▓██ ▓▓ █░█░▒ ▒
▓░░▒ ░▓█ ███▒ ░█░ ▒▒▒
▓░░▓▒░░░ ▓▒░████▓ █▒█░▒▒██
▓░░▓▒░█▓ ▒█▒░▒▒▒▒▒ █▓░█▒▒▒
░░█░░░█░ ▒█▒░░▒▒ ▒▒ ▒█░ ▒░
░░█ █░ ▒▒█░░░██▒ ░░▓ ▓█
▒░░ ░░ █▓▒░░ ▓░ ░░░ ░░
░░░ ░░█ █░░█ ▒█▓▒ ░░░░░░
██░ ░░█ ▓▒▓░▓ ░█▓ ██▒▒▒██████████ ░░░ ░░
░▒ ░▓ ░░▓█░█░█ ░ ░░▓███ ████ ▒░ ▒░█░ ░
░░░ █▓░ █░░ █▓░░█ ░▒█▓████▓░░░░▓▒▒█░▒▓░░▓▓
░░▒ █▓▒ ████ ▒▓▓▓▓▓░
▒░▒ ███ ▓░█▓█▓░
▒░░█ ████▒ ▒█░▓▓ ▒▓█
█░░░▒ █████▒▒ ▒▓█░▒▓░▓▓▓░
░ ░░▒ ██░█░░██▒▓███████░░░░█░▒▒█░
███▒▓▒ ██▒░░░░░░████░ ▒██
█ ███▓▒▒▒▒▒▒▒▒█▒▓██░
░ █ ░

View File

@@ -0,0 +1,27 @@
▒▒▒███████████▒
█▒▓█░██ ▒▒▓░▒██▒ █░ ███▒
▓▓▓█ ▒▒▓███░░█▒▒██░░░▓▒▒▒ ██▒█
▒▓██░█▓████ ░ ░██░▒█▒░▓█▒▒
▓██▓░░███ █░░░▓▒█▓
▒░░▓▓░█ ███ █░░█ ███
▒░░░▓░▓█ ░░ ▒▒░█ ▒░▓ ▒▒▒
░░░▓░ █░▒▓ ██▒▒ █░░ ▓░▒
░▓▓░░▓ █░░▒░▒▒█▒▒ █░▓ █░█
░▓▓░░ ▒▒░░░▒█▒▒▓ █░░ ░░█
░░░ ░ ░ █░░ ▓█▓░█ ░░ ▓▒
░░▒ ░ ▓ ▓█▓▓█░█ █ ░ ▓▒
░▓ ░░░ ▒▓██▓░░█ ████▒▒█████▒███ █░█░░░
█░▓░▓▒█ █░ ▓▓▒█▓░ █▓███ ▓ ███░ ▒░░█▓░
█▒█ ░▒░▒ ██▒▓▒▓█ ░▒██████░░▓░░░░█ ▓▒ ▒▒░
▒▒█ ▒▒█▒ ████░ ▒░▒░▓ ██
█▒█ ▒ ░█ █░█░█▓░█
░▓▒ ▒█░█▒ ▒█░░███░█
█░▓▒ ▒█░░█▒▒▓ ▒▓▓░░█░▒▓▓
█░░█▒ █░░░█░██▓▓███████░░░██ ▒██
░██▒▒▒ ░████░░░░░░███ ▒██▓
░█▒░▓█▒▒▒▒▒▒▒▒█▒▓██
░ ██ ░

View File

@@ -0,0 +1,27 @@
▒▒▒▒█▒███████▒▒▒
█▒▓░██ ▒▒▒▓▓█▒▒███████▓▓▒
█▓█░ █▒▓▒█░▓██▒▒██▓█░██▒ ░█▒▓░
██░░▒▓▓████ ▓▒░▒▒ ████▒
██ ▓███▓ ░█ █░▒█▒▒▒█
▓░ ▒░█▓▒ ██ ▒▓ █▒▓█
▓ █░░░ ▓ ▒ ░ ░▒▒ ░█▒▒
▓░ █░██░ ░▓▓██▒▒▒ ▒▒ ░▒░█
▒██▒░░▓ ░▒░█▒ ░▒ ░█ ▓░█
░ ░░░░ █░▓███▒▒▒ ░ ░█░░
░░█░░▒ ░▒▒░█ ▓░░█ ░░▒░▓
░▓█░░▒▓ █ ▓ ▓▓█▓░ ░▒ ░░
▒░█░░░▒ ▓▓▓▒▓░░▓ ▒▓████▒▒▒▒▒▒▒▒█▒ ░▒▒░░
██▒▒░▓▒▓ ▒███▓██ ███████ ████░▓ ▒░▓░░
░▒▒░▒ ░ ▒ ▓█▓░ ░██████░░▒░░█░▓ ▒▒▒░▒█
░▒ ░▒▓▒ ░░ ▒█▓▓▓▓
░▒█▒███▒░ █▓░░█▒▓ ░
▒░██████▒ ▒██▓░░░▓█
█░▓▒█▒▓░███▒▓ ▒█▓█▒░░██
█░█ ██░░░░███████████░░███░▓███
███▒▒ ███████░▓▓▒█ ░ ▒███▒░
██░░██▒▒▒▒▒▒▒██████
░ ██ ░░

View File

@@ -0,0 +1,27 @@
▒█▒█▒█████▓▒▒▒
▒▓▓░ ░ █ ██ ▒██░████▓█▒
▒▓ ▒███ ▓██▒▒██▒▓█▒▓ ░ ████
▓█ ███▒██▒█ ▒░█▒█▒ ██▒░▒
██ ▒▓█ █▒ ▓██▒ ▒█▒▒▒
▓ ▓▓░█ ▓█▒█ ████▓▒█░
░ ▓▓█▒ ▒█▒░ █▒█ ░▒▒░
░ ░▓▓▒ ░░ ▒██ ░░ ░▒▓░
█░ █░ ▒ ░▒█▒▒ ▓▒███░█▓
░▓░░░▓ ░▓▒ █▒▒ ██░ ░▒
░▓░░▒█ ▒░ █▒█ ▓▓░░░▒
░▒░█░░ ▒█ ▒█ ▒░░░░░
░▓█▓░█ ░▓░█ ▓▓ ▒▓███▒▒▒█▒█▒▒█▒ ▒ ░░░░
░█░█▒░ ▒░ ▒░█ ░▓▒▒██████████░▓█ ░░ ░▓░▓
░▓██▒▒▒ ▒▒█▓░ ███▒██▒▒▒███░▓ ▒░▓█▒█ █
░▒██▒██ ▓░▒█▓ ▓
█░▒██▒ ▒ ▒█▒░█ █
░▒ ▒ ███▒ ▓▓██░▓▒▓
█░▒ ░▒▒▓██▒▒ ▒██▒█ ▓█▒█░
█░▒▒ ▒░██░░▒██▓▒▒▒███████▓▓▓████
█░█▒ ████░███░███░████▓░█░
░███▒▓▓▒▒▒▒▒▒███░██
░ ██ ░░░

View File

@@ -0,0 +1,27 @@
▒█▒░▓█████▓▒▒▒
█▒██░ ░██░█▒░▓
▒▓█░ ▒███ ░░█▒▒▒█▒█▒ ░▒██▒██
▒▓ ▒██ █▓ ░▒█ ██▓▒██
███▒▓█ █▒ ▒░█ ▓▒▒▒▒▒
░▓▓▓▓ ▒ ██ █░█░▒▒░█▒
░ ▓░█▓ ░ ███ █▒██▒░▓▒
▓░░▓█ ▒ █▒█▒█▓ ██▓▒█▒▓░
░▒▒░▓▓ ▒▒▒▒▒ ██ ░ ░░▒
░ ▓█░ █▒███▒█ ░░ ░░ █
░▓░░░░█ ░ ░ ▓▒█▓ ▓ ▒ ░░▓
░ ░░░▓ ░░████ █ ░ ░░░▓
░░ ░░▒ ███ ██▓▒█▒███▒▒▒██▒█▓ █ ▒░░▓
░ ░░ ░ ░▓░▓ ▓░ █▓▓▓█▒▒▒▓▓▓█▒░█ ░█░░▓ █
░▒▒▓▓ ▒█░ █▓▓▓░▒▒▒▒▒▒▒█ ▓▓▒░▓ ▓
░▒▒█▒█ ▒░ ░ █ ▓
░▒▒░░█▒ █░▓▓░█ █░
░█▒▓██ ▒▓█▒▓░▓ ▓░
█░▒█▓░░█ ▒ ▒▒██░░█ █▒█
█░▓ ░█░░█░▒▓█▒▒████░░░█░█▓░░▒▓░
█░██░░░░██████░▒█ █▒▒▓▓▒▓█
█░██░░▒░░░░▒██░░ █
░ ██ ░ ░

View File

@@ -0,0 +1,27 @@
▒██▒█▒▒███▓▓▒▓
▒▓▓█ █░▒▒██▒
▒▓█░▒███ █░█▒▒████ ░▒███ █
▒░█▓░██▒██░▓ ░▒█ ▒▒▒ █
██▒░█ ▓▓ ░█ ░▒█ █▒
▓██░█▓▒ ▒ ▓░▒ ▒▒█▒░█
▓█▓░█░▓░ ░▒▓▒ ▓▒█▒ ░█░▒
▒░▒▓█ ▓ ▓▒▒█▒ █ █ ▒▒▒
░█▒░ █ ▒█▒▒▒▒▒ ▒█▒█░▓▒░
░ ░░ ░ ░▒▓███▓ ░█▒█░█▒▓
█▒░░██ ▒▒░▓█▓▓▒▓ ░░░▓ ░░
▓█░░▒ ██▓██▓░█ ▓ ░░░░█
░░░░ ░ ▒░░▒░░ ▒██▒░▒█▒▒▒▒█ ░█░█░░
░█▓░ ▒ █▒▒░▓ ░▒███▓▒▒▓▒▒▒▓ ▓ ▒░░░░░
█░░█░ ████▓▒░▓░▒░█ ▓░▓░ ▓
░▓▒▓▒ ░ ░░ ▓█░▓▓ ▓
░▒▒░░▓▒ ▓█▒▓░▓ █░
░░██▒▒█ ▒░ █▓ ▓▓█░
█▓▒▒████░ █░█░▓██░▒▓
█░░████▒░█▓░▓█████░░█░▓▓░▒ █
█░░░░█░ ███████ ░▓▒██▒▒▓░
░██░▓▒▒▒▒▒▒███▓█ ▓█
░ █ ░ ░

View File

@@ -0,0 +1,27 @@
▒████▓███▓▒▒▓
▒▒██ ▒▓▓▓ █ █░██░▒█
█░█▓▓████ █▒██░█ ▓▒█ ▒▒
▒░█░▓█▒▒█ ██ █░▒░█▒
█▓█░█ ░▒ ▒▒██ ▒▒
█▓█▓▓ ▒ ▒ █▒░▒▓█▒
░█▓▓ ▓▒▒▒█ ▒ ▒▒▒█▒▒
░░░░ ▒▒█▓█▒██ ▓▒▒ █▒▒░
░░▒░ ███▓▒█▒ ▓ ▒▒ ▒▒▓▒
░ ░█ ▓ ░░█▓▓▒▒▓ ▒ ░░██░░
░ ▒ ░▒▒ █▓▒▓▓ ░ ░░░ ░░
░▒▒▒ █░▓ ▓██ ░ ▒░░░█░
░▓▓█ ░ ██▓░ ▓▒██████▒█▒ █░░░░░░░
░░▓░▒█ █ ▓ ░█████████▒▒ ▓ ░█░░░░
░▒▓░ ░ ▒█████░▒▒▒░█ ░ ░█░░░▒░
░█ ░ ░ ░░░░░ ▒▓ ▓█▓ ░
▒░░░▓ ▒ █░░░ ▓
▒░░▒▒▒█▓ ██ ▓ █░
█░░█░▒▒█ ▒░███▒▓█ ▓
▒░░██▓█░█▒█▓▒███░░░░█▓░▒ █
██▒ ████░█░███░▓▒▓██▒██
██▓▒▒▒▒▓▒▓████▒▓█░
░░ █░

View File

@@ -0,0 +1,27 @@
▒██████▓██▒▒▒
█▓█ ▒▓▒▒▒███░░▓█░██▒
█░ ▒████ ░▒██▓▒ ░█▒ ░▒▒
▓ ▓░ ░▓█ ██ ▒ █▒
░▒█░░ ▓█▓░█ ░░
░░█░ ▒▒█▒ █▓▒█ ▒░
▓▒▓░ ▒▒░░▒ █▓█▒▒▒█▒
░░░░ ░▒▒ ░▒ █ ▒█░░ ▒▒
▒ ░░ ▒▒▒ ▒▒█ ▒░▒░░░░░
░░░░ ▓░▒█░▒▒▒█ ░▒░▓██
▓░░░ ▒░▒▓▒░▓█ ░▓░░░░░░
▒▓░░ ▒░▓▓ ▓ ░ ░ ░░░░░░
░ ░█ ▒▓░▓ █▒▒████████ █░░░▒█▓░
█▓▓█ ▓ ██████████░░ ░ ░ ░░░▒
░░▓█ ░████▒░▒▒▓░ ▓█░ ░░░░░
░▓▓░██ ░ ░░░░░ ░ ▒█░░░░█
░░▒▒ █▒ ░░░░▓░░▓░
░▒██▒█▒ ▒░░▓▓▓█ ░
░░█░▒░█ ██░█▓▓▒ ▒▓
█▒ █▒▓█░█████▒██ ██▓█░ ██
█▒ ██░░░░██░█▓▒█▒ ██
██▓▒▓█▒▒███▓█▒▓█
░ ░░ ░

View File

@@ -0,0 +1,27 @@
▒██▒██▒███▒▒▒
█▓█▒████ █░█░▓ ██▒
▒░░▓███ ░██▓▒█▒██▓ █
██▓█▓ ░█▒▒ ▒▒█
█▒█▓░ ▒▒░▓█ ░█
▓░█ ▒▒█ ▒█▒░▒ ▓█
░▓▓ █░▒█ ▓▒▒▒█▒▒
░ ░ ▒░▒▓▒▒█ ▒█ ░░ ░
█▒▒░ █░█▓▒▒░▒ ▓▒░░░ ░
░░█ ▒▒▒░█░█ ░▒▓░▓ ▓
█░ ▒▓░▓█ ░ ░░░█░░░
▓░ ░ ▓█▓░▓ ░░░░░░░░
░░▓░ ▒░ ▓░ ▒▒████████ ░░░░▓█░░
█░░ ░ ▓░▓ ████████▒█▓░ ▒░█░░░
░▓█▒░█▓ ▓██▒▒░▒█░▓░█░░░░░░ ▓
░░█▒█ ░░ ░░░░░ ▒ ░█░░ █░
░▓▒▒▓█ ▓▒░ ▓░▒░
▒▓░▒▒▒▒ ██▓█ ▓ ▓█
▒▓░▒████ ▓█▓█▓▓ ▓▓
▒▒█░██▒███▒▒██░▓▓▒▒ ▓
█▒▒ ██▒▒░██░▒█▒░░ ██
░██▒▒▒█▓▓█▓▒░▒██░
░ ░ ░

View File

@@ -0,0 +1,27 @@
█▒▒░███████
██░▓█▒ ▒▓░█▓ █▒█
▓█░ █ ░██▒▒▒▒▒ █▒
▓░▓██ ▒░▒▒ ▓░░
░░▒▒ ░▒▒▒ █▒░
▒▒ ▒█▒ ▒█░▓▓█▒▒
░▓░░░██ ▓ ▒▒ ▒█▒
▒ ░ ▓▒ ▒ ░▓ ▒▒ ░
▓░░░ █░░░▒█ ░ ░▒▒░░░
░░▒ █░▓▓▓ █ ███░█▓▒█
░░░ ░▒▒░░░ ░▓░░█▓░░
░░░░ ▒ █ █ ░▒░░░░░░
░░▓▒ ░▓ ▒▒█████ ░▒░░░░░░
░░ ░▓▒█ ██████░▒ ░ ░░░░░░
▓░░█░░ ██▒░▒█▓ ░█░░░ ▒░
░▓██░ ░ ░░░ ░░░▒░
██▓░ ░ ▒ ▓▒░░░░▒▓
░░▓█▒▒▒ ░▓█ ▓░░█░
█░░▒██▒▓ ░░█░█░█ ▓█░
███▒█▒███▒█▓▒█ ▓▓ ░░
▒▒██▒░▓█░▓███▒ ▒▓
█▒▒▒▒██▓█▒ ▒▓░
░ █░░░░ ░

View File

@@ -0,0 +1,27 @@
▒▓▒███▓▒██▒
▓ ██ ▒█▓▒ ▒░█
░▓▓█ █▒██▒█░█ ▒▒
▓░▓█▓ ░▓▓ ▒▒
░░▒ ▓ ▒░▒▒ ▒█▒
░░█▒▒░▒ ░▓░█ ░░
█ ░░▒███ ▒ █░ ░▓█
░░ ░░░▒░ █ ░░ ▒░▒
▓▓░ ▓▓▒░▒ ░░░▓ ░
░░ ░░░▒▒ ░▓█░░▒░
░░▓ █░▓░▒ ░▓░▒▓░ ░▓█
░░▓ ░▓▓▒█ ░▓▒░░░▓ ░░
░▓ ░░ ▒▒▒████▒▒░░░░░░░
░ ▒█ █████▒░░ ░░░░░░░
░░▓ ░ ▒▒▓▒░█░░▒ ░░ ░
▓░░ ▓▓ ░ ░░░ ░░ ░
░░ ░ ▒▒▓ ░░ ▒░
░░░▒ ░░█▓░░░██
░░█▒ █░▒▒░░░░ ▓
▒░█▓▒█▒▒█▓▒░█ ▓▓░
▒▒█░ ▓░▓▒░█ ▓▓
██▒▒█ ▒▓█▓▒░
█ ░

View File

@@ -0,0 +1,27 @@
█▒████▒██▒
▓█▒▒▒▓▓▓█ ░█
▓░▓█▒▒ ░░ █░█
█░█▓░▒░██ ██ ▒
░▒░ ▒▒▓▒▒▒ ▓▓
░▒█▒██ ░░▒░█ █░
░░░█ ░ █░░ ░▓░░▒
▒▓█▓ ░░█ ▒░ ░
░ ░██ ▒░ ░░░▓
▒▓▒█▒▓░ ░░█░░█▓░
░▓░▓▒█░ ░░▓██░▓░
░▓█▓█ ░ ░░ ░ ░░░
█▓▒░▒▒▒█ ░░▓░░█░░
▒▒░ ██░▒ ░░▓ ░░ ░
▒░ ▓ ░░░░░░░▒░
░░█░██ ░ ▓░░░░ ▓░
░░░ ░░ ▒ ░░░░░ ░
▒░ ▒ ▓░░░ █▓▓▒░
░█▒ ░ ░ ▒▒█░ ░░
▒░▒▒█▓▒▓█░ ░░▒█
▒█▓ ▓▓█ ▓
█▒▒▓▒█ ▒█
█░█

View File

@@ -0,0 +1,27 @@
▒███▓███
▒ ▒░▓ █ █▒
▓░░░▓░ ▒▒▒
▓▓▓░▓░ █░░░
░░░▓░██ ░░
░░▒░▓▓░▒░ ▒░
░░▓█░░▓░ ░░
░░ █░░░░▒ █
░░▓░░█▓▒░░░░░
░░░░░ ░░░ ▒▒░
░░▒█ ░██░ ░░
░░░▒░ ░ ░▓░░
░░▒▒░▒░▓░░░░█
░░░░░░░░░ ░▒
░░▒░ ░ ░░░ ░▓
░░▒░░░░░░░░ ░
░▒▓▓▓░ ▓ ░
░░▓░░ ▓░░
▓░░░▒░█▒▒░▒
░▒█░▒ ░█░░
▒ ░ ▒▒ █▒▒█
██░ ██▓▓░░
█ ░ ░

View File

@@ -0,0 +1,27 @@
████████
░▒░ ▒░
░▒░█ ░▒▓▒
░░░▒▒▒ ░▓
░▓▓░█ ▒░░
█ ▓ ▒▒▓▒░
█▓ ░▓██ ░
▒░░░ ░▓▒▒
▒░▒▒█ ▓░
░░░░▒▒░░▒
░░░ ██ ░░░
░░░░▓▓▒▓░
░░░░░░░░░░
░░░░░░▒▒░░
░░░░░░░░░░
░░▓░░░░░ ░
░░░░░░░░░░
░░░░░░░░▓░
░░█░▒▓█░▒░
▒░▓░░▓ ░▒░
▒░█░ ░ ░ ░
█▓ ▓█▓
█ ██ ░

View File

@@ -0,0 +1,27 @@
███▓███▒█
▒▓ ▒▓░▒
░█░██▒▒░▒▒
░█ █ ░█░▓░
░░░ ░▒░░░ ░
█░▓░▓▒░▒ ░
▒░░░░░░░░ ░
▓░░░▓░▒░░░ ░
░░░░░░▒░░█ ░
░▒░░░░ ▓░▒ ░
░░ █░▒░▓░█▓░
░█ ▓░ ░░▓░
▒░ ░░░ ░░░█░
▒ ░░░█░░░ ░
▒░░░░█░█░░░
▒▒░░░ ▓▒░░░
░▓░░░▒▓░░▓░
░░▒▓█░░▒▓▓░
░█░▓██░█░░░
█▒ ░ ░░▒
░░█ ░░▓░▓░
░▓▒ ░█▒▒░
░ █ ░

View File

@@ -0,0 +1,27 @@
██░▓█▓█░█
▓▒▒░▓░▒▓▒▓█
░░ ░ ▓░▒█▒▒▒
█▒░ █░░░░██ ▓
░░▓░▓▒░ ░ █▓░██
░ ░█░░░░░░░ ▒░
▒░░ ▒ ░█░ ▓▓░▓
░░░░░▒░░░▒ ░ ░░
░░░░░░░░ ░░░ █▓░
░░░░▒▓░░▓ ▒█░░░░
░░ ▒░▓░▓▓█░▓▓░░
░░▒░▓▓▒ ▒▒▒▒ ▒▒░
░░░░░▒ ░▓▓▓█▒▒░░
░░░░░ ░█░░░░░█░░
░░ ▒ ░░ ░ ▒▒░█░░
░ ▒ ░ ░░░▒▒▓▒▒░
░█ ░█░▒ ░░░▓▒
░░░ ▒█▒▒██░▓░░
█░▓▒▓█ ▓░░▓░░█░
░░█ █▓▒░█░
█▒ ▓ ▒█░█▓▓▓
█▓░▓ ▒░▒▒▓
░ ░

View File

@@ -0,0 +1,27 @@
▒███▓███░▒▒
█▒▓░░▒▒▒█▒▓█▒
░░▓░█▒▓░▒░ ██▒▒
░▓░▓▓▓▒░█▓ ▒███░█
██▓░░█▓▓░░ ▓█▒▓
░▒▒▓▓ ░█▓█▒ ▒░█
▒░▓▒▒█ ░░▒█ ▒░░
░░▒░▓░░ ░░░█▓ ▓░
░░▒█░░░▒▒▒░░░ █▓░░
▒░░░░░ ░░▒█ ░░▒░
░░░░░▓ ▓░▒░█░░ ░▓▒░
░░░░░▓ ▒▓▒ ░▒░ ░▒░
░░░░░ ▓░░▒▒██▒░░░
░░░░░ ▓░▒▒█░░▒▓░ ░░
░░░░░ ▒▒░░▒▒█ ▓░█▓░░
░░░░ ▒▒░░█▒░▒█░▒░▓░
░░░░ ▒░▓ ░ ░▒█
█░░░░░ ░ ▓▓▒█░█
░▒█▒█░░█▓█ █░ ░█▒
█▒▓ ░░▒▒░▓█▓░▒██
█░█▓ ░▒ ███▓▓▒▓
▒██ █░█▒▒▓
░░ ░█ ░

View File

@@ -0,0 +1,27 @@
█▒█░▒█▒█░▒▒
▓██ █ ░░█░█▓░██
░░█▓█▓█░ ██ ████▒
▓ ░░ ▓▓░█▓▓░ ░ ██▒▒
░ ▓░░██░▒█ ▓▒░█
▓▓░█▓▓░▒█ ▒ ▒▓
░░▓▒▒░█░█ ▒█▒ ▒▓░█
▓░░ ░░███░ ▒▓░▒ █░▓
░▒░ ░▒░░░ ▓▒▒█▓ ░░
▒░░░ ░░░░ ▒░░▒ ░▓░
░░░░░▒▒░░ ▓▓ ▓ ▓█▓░
░░░░█░▒░░ ░▒░▒ ░░░
░▓░░░ ░░░ ▓▒ ░▒ █ ▓▒░░░
░░░░░ ▓░ ░▓░█░░ █ ░░░▒░
░░░░▓ █░ █▒ ▓ ░░▓ ▓ ░░█
░░░░░ █░ ▓█▓ ▒░░░▒░
░░░░░█░░ ▓░░░░
▒▒░░▒▓ ░█▒ ▓▒▒
█░█ ░░▒▒▒▒▒ ░░█░
▒▒▒ ▒▒▒▒▒████░▒░██▓
░▒ █▓ █▒█▓█▒█
█▒▒ █▒██▒▒▓█░
░ ░░░░ ░

View File

@@ -0,0 +1,27 @@
▒█▓█▒█▓█████
▒░█ ▓ ░░█ ▒▓▓▒█▒▒
▒░█░░▓ ▓▓▒███ ██ ▒░▒
█▓██ ▒█████▓ ░░ ▒ ▒░
█▓██ ░▓█░░█ ░░
░▓░█▓██▓▒█ ▒ █▒█
░░░▓▒░█░░ █░▒ ▓░▒
░░ ▓░░░░█ ░█▒▒▒ █▓░█
░█░▓ ░░█ ▒▓▒░▒ █░▒
░░░░░ ▒░░ ▒▒▓█ ░░░
░▓░░ ▒░█░ █ ▒░█▒ ░░▒
░ ▒▓█▒▒░░ ░▒██░░ ░░▓
░ ░█░▓░░ █▓▒█▒█▒▒██▒▒ ▒▒░▓
░░ ▒░ ░░░░ ▓▓▓▒▓ █░█ ░ ░░░░
░█ ██░░▒░ ▒▓ ░ ▒░█░ ██▓ ░
░ █░█ ░▓ ▒░░░
▒▒ █▒░░░█░ █░█░█░
▒░▒ ▒█▒ ▒ ░▓▒▒▒░
░░ ▒█░▒ ░▒ ▓ ░█▓
▒▒ ░ ▓░▒▒█▒▒▒██▒▓▓░ ▓
█░▒ ░▒░▒░ █████ █
██▒ █▒█▒▒▒▒▒▓█
░ ░░░░ ░

View File

@@ -0,0 +1,27 @@
▓█▓░▒▓█▓▒███▒
█▓▒ █▓▓░▓█░▒▓▒▓ ██
▒▓░▓ ████▒███░░█░ ░▒██
▓░ ▓▓▓▓▓▓█░ █░█░█▒░▒
▓▒█░▓█▓██ ▒▓░▒░▒
▓░▓▓█▓▓▓▒ ▒ ██ ░░
▒░▒▒ ░ █▓ ░█▓ ▒ ▒░██
░░░▓▓██░ ░███ ▒ █░░
░▒▒ ▒▓▒█░▒░ ▒░▓
░░░░░▓▒░ ▓█▓▒█ █▓░░░
░░░░░░░▓ ▓ ▒▓░▒░ █▓░░▓
░▒ ░░░░░ ▓█░▓▒░█ ▓░░▒
░░ ░▓░░░ ░░▒█▓░█ ███▒▒ ▓░░
▓ █▓░ ▓▓▓▓█▓▓ ▒█ ░▓ ░░░░░
░ ▓ ░░▒ ▒░░█ ▒███▓ ░██
█▓▓ ░ ░ ▒█░ ░░▓▓
░█ ▒░▒▒█▒█ ▒█▓▓░▒░
█▓▒███░█▒█ ▓█▓▓▒▒▓
░▓▒▒█▒░██▒ ▒▓░█░█░▓
░▓ ▒▒▒▒ ███▒▒▓█▒░░░█▒█
░▒░ ▒ ▒░ █▒████ ▒▓░
██ ░████▓▒▒▒▒██░
██░░ █ ░

View File

@@ -0,0 +1,27 @@
▒██▓█▒████▓░█▒
▒█░ █ ░█ █ ▒▓▓░▓ █▒█
▒▓ ░▓████ ▒███░░ ███▓▒▒█
▓██▓▓█▒ ███ ▒▓▒██░█▒
░░▓ █ ░▓ ░▓ █▒▒
░░▒▒▒██ █▒▓ ██░░
▓█ █░▓░█ ▓░█▒ ▒▓▒░
▒░▓█░░▓░ ░▓██▒ ░▓▒
░░░░ ░▒█ ░███▒░▒ ░░░
░░░ █░░ ░█▓█▒ ▓░░
▒░░░░▒░ ▓▓▒░ ░░▓
▒░░▒░░░ ▓▓▓▓█▓ ░█
░░░▓░░░ ▓▒▓▓▓▓▓▒ ▒▒████░▓ ▓ ░█
░░░░█▒▓ ▓▓▓▓▓░█ █▒█ █░ ░ █░░▒
█░░░▒░░█▒ ██░ ▒ █░█▓█▓░█ ░▒█░░
█▒░░░▓▓ ░ █▒█░ ░ █ ░░
██▒▒██▒ ░ ██ ░▒▒▓
█░▒▒▒█▒▒ ░ ░▒▓█░▓
█░▒▒▒▒░▒▒██ ▒▓██▒▓▓█▓
▒▒ █░░█░██▒▒▒▒████░█████
▒▒▒▒░▒█▓ ░████████░▒█
██▒░███░█▒▒▒▒▒██
░█ ░░ █ ░

View File

@@ -0,0 +1,27 @@
▒▒█░██████░▒▒
▒█ ▒█░░█ ░ ▒▓█▓ ░ ███▒
▒▓▒▓▓███ ░▒▓███░░ ██▓▒██▒▒
▓ ██░░ ▒████▓█▒
░ ▓░▓▓ █ ▓██▒
░▓▓▓░▓█ ██▒ █▒▒
█▓▓▓░█▓ ▓▓ ▒▒ ██▒░
▓▓█▓░▒█ ░░▓ ░ ░ ░▓█
█▒ ██░ ░█░▓░ ▒ ░░░
▒░░░▒░▒ ▒▒▒▒░▒█ ░░▓
░ ░ ▓░░ █ ▒█░░░▓ ▓░░░░
░░░ ░░░ ░▓▓░▓ ▓▒█ ░▓░░░░
█░░░░░░ ░▓██ ▓█ ▒████▓▓▒ ▒ ░█▓░
█░░░░██ ░███░▓█▓ ░▒█ ░█ ░ ▒░▓
░ ░░░░░ █▓▓ █░ ▒░███▓█ ░ ▓▓░█
▒░▒ ███ ░▒█ ▓██░░░░
█▒▒░ ▓▓▒ █▓██░█▒▓
▒░▒█░▒░▒█▓ ▒▓ ▓██░▓
█▒▒▒█░▒▓ ██▓ ▒█ ░██▓▓
▒▒ ▒█░▒▒ ██▓▒▒▒▒███▓▒█░▓ ░░
▒▒ ▒██░▓█░█████████ ▒█
██▒▒▓▒░█▒▒▒▒▒▒▓██░
░ ░░░ █ ░

View File

@@ -0,0 +1,27 @@
▓▒▒█▓███████▒▒
██░█▒▓██░▓█ ░█▓░░ ███▒
▒█▒▒▓██▓▓▓▒████▒░░███▓▒░ ██
▓ ▓▓▓▓▓ ░██▒▓▓██
▒░█▒████▒ ▒ ██▓█
▒░░░▓▓█ █▓▒ ▓▒█░
░▓▒▓▓▒▓ ▓░ ░▒ ▓▒▒█░
▓▓▓░░▓░ █░▒▒░▒░ ░▒█░▒
░ ▓██░ █▒█░█▓ ▒ ░░░
░░░░█░░ █░▒ █▒▓█ ░█░█
░ ░░▒█ █░ ▒▓▒ ▒█ ░░▒░
░░ ▒▒▒░ ▓▒▒█▓ ▒░ ░░ ░
░ ░ ░░▒ ░▒▒█▓░▒▓ ▒▒▒▒▒██▓█▓▒▒ ▒░▓░░
█▒ ░░░ ░░███░█▓ ░░▓█ █░▒░░░░ ░░█░
░▓█░▒▓▒▒ ░█▓░ ▒█ ░█░█▒▓██▒ ▓░▒▒▓
█▒░░░█▒ ░ ░▒██ ██ ▒▓▓█░
▒░░░░▒░ ░▒ ▓ ██░░▓
▒▓█▒░░▒▒██ ▓░▒█▒▒ ▓▓▓
█ █ ░░▓ ░▒ ▒▓█ ▒██░█
█▒ ██░█▒ ░░▓█▒▒█████░▒███░▓▓
██▒██░▓▒ █░████████ ███░
░█▓▒█ ██▒▓▒▒▒▒▒████
░ ░ ██ ░

View File

@@ -0,0 +1,27 @@
▒▓▓▓█▒███████▒▒
▒█ █░▓██░ ████
██▒▓███ ▒█▒███▒░░███▓█░░██▒
██▓▓▓▒▓ ▒██ ▓▒█░█▒ ▒▒
▓ ▓███ █▓ █░▓▓▒▒█▒
█ ▓██▓ ▓ ▓▓▒▒ ▒ ░█▓▒
░█▓░░▓█░ ▓▓ ██▒ █▒ ░▓▓▒
░▒▒▒▓█▓▓░ ▒▒█░▒▒▒░ ░ ▒▓▒█
░▓░▒░▒▒ █░▒ █░ ░ ▒ ░░▒█
░█░░░░█░ █ ░░█▓░▒░ ░▓ █
░█ ░░ ░ ░▒▓ ░█ ▓░ ░░ ░
░░░▓░░ ░ ▒██▓▓░ ▓ ░▓░░█░
▒█ ░░░█░ ██▓▓░█▓░░ ▒██▒███▓▓█ ▒░░░█░
▒░ █▓ █ ▓░█░▓ ▒░ █▓▒█ █░▒ ░ ▓█░▒░░
▒ ░░░░█░ ██▓░░▒ ░██░░▓▓███ ░░░░░
█░▒▒▒▒░ ░ █▓ ▒▒░▒░
▒▒▒▓▒░█▒░▒ ▒▓░▓█▓██
█▓ ░░▒█ ░█ ▒▓█▒█▓░░█
▒ ██░▓▒ ░█▒ ▒█ ▒▓▒██▓░
█▓ ███▓▒ ░█████▒█████░▒▓██░███
█▒▒█▒█▒▒ █░█████ ██░█░▓
░ █░███▒▒▒▒▒▒█▒██
░ ░ ██ ░

View File

@@ -0,0 +1,27 @@
▒▓▒▓▓█████████▒▒
█▓░████ ██▒
▒▓██░█▓▓▓▓▒███░░░░░████▒▒▓███
▒▓█▓▓ ▒░█▓█ ▒█ ██░░██
▓░▓██░░█░ █▓▒██▓░▒
░░▓░███░░ ██░▒ █ ░▒▒▒█
░▒▓░██▒░ ▒▓▓▓░█ ░▒▒▒░▒██
▓█░▒▒▓ ░ █░▓▒█▒▒ ▒▒ ▒▒░▒
░░░░░▓░ █░▓▒░░█ ██▒█▒░
░░▒░░▓▓ ▓█░▒█▒▒█ ░░▓ ░░░
██░░░▓░ ░ ░░█▓░░ ▓ ░▓▒█
░░░░▒░ ▓▒█████░ ▓▓░▒ ░
▒ ▓░░█ ░ ▒█ ▓▓▓▓▓▓ ██▒▒██████▓███ ░▓░░▒░
█▒ ░░░▒░ █▒█▓▓ ▓ █░░▒█ ██▒ ▒██ █░░░░
░█▒▒░▓ ░░ █░▓█▓ ▒▒██░░▒▒██ ░░ ▓
░█░█▓██▒▓ █▒▓▓░▓░█
█▒█░▓░▒█▒▒ ░▓█▒▓▓░█░
██ ░░░█ ░█ ▒█▒▓▓▓▒▓░
▒▒▒█░▒█ ░█▒ ▓█▒░█░█▒█
█▒█░██▒ ▒█████▒▒█▒███░▒▓█░█░▒▓░
██ █░░▒▒ █░ ████ ██░▒▒██
█ ▓██░░█▓▒▒▒▒▒▒▓███
░ ██ ░

View File

@@ -0,0 +1,27 @@
▒▒▒▓▓█████████▒▒
▒▓██▓░█ ▓▒█ ██▒
███░█▓█ █▒██▓██▒░░░█████▒ ██▒
█ ▓█ █ ▓▓██░▒ █ ██▓█ ▒▒
██░░██ █░ ██ ████▒
▓▓▓░░█ ▓ ▓███▒ ▒▓ ▒▒ ░
▓▒▓▓██▒░░ ▒▒░▒██░ █▒ ▒░ ░
█▓░▒▒██░ ░▒▒▓██▓▒ █▒ ▒░ ░
░░░░░█▓░ █▒▒█▒▒▒ ▒▓ ░░██
▒░░█░░▓░ ▒█░▒ ▒ █ ░ ░░▒░
░▒ ░ ▒ █▒▓▒ ▓ ░█▒░▓░
░░░▒░ ▒ ▒█▒█▓ ▓█ █▓░░░░
█░ ▒░▒ ░ ▓░█░▒ ▒░░ █▒▒▒█▒▒██▓███▒ ▒ ░░▒░
█ ░░░ ░ ▒▓▓▓█░ ▓█ ▒░░▓██ ███ ▒░█░█░▒░▒█░
▒█ ▒█▓ ░ ░░ ▓▓ █▓▒█░██▒▒████▓░░▓░▓░
▒█░▓░ █▒ ▓▒▒▒█▒▓
▒█░░▒▒▒█▒ ▒▓░▓█▓ ▓
▒▓░░░▒█▒▒▒ ▒████▓░██
██ ░░▓█ ▒██▒▒ ▒██ ░█▓░▒▓░
██▒█░▒█▓█░█████▒▒█▒███░▒██░█ ▒█
█▒░░▒▒ ▒ █░ ▓███ ██ █▒██░
░░▓███░███▒▒▒▒▒▒▓████░
░ █ ██ ░

View File

@@ -0,0 +1,27 @@
eeeedcccccccccoee
oecxocccceeeedooeecceccce
eoxeccceoexccxxcdcxcccccoe cce
ocxoocecoocc oxxcccde co
ecxooceoo cccxoeco
oxxoocox oeoxxeee
ooxoc xc xceeec edexoex
ooxxccxe ceecxoxx eceexoex
ccxxexc xcdeeeeox cocxeeo
xecxeexx cexceoxxeo oxexx x
oxdx ex ceod cxd cxxxdx
c ee ox ceoxxc dc exxxx
xexx o oxooc oxe eddcdccccccccco xdox x
cxd xo eoxxcc oce oxxdeccccccccc xdexex
eoxeo xe cxoe exe cxcoooooo x xxcxe
eexxeexo ccccd ooeoocoo
eeex ede dcdcoeoo
exxxo cxee execxcexc
coccxo cocc deccxcccodc
cxcxxc eeoccexceeoooccccxcxccceoce
cccccdoxxcdxocccdxxdccc eccc
cccdxcxxdeeeeeeocdcce
eeccccccce

View File

@@ -0,0 +1,27 @@
eeeedcccccccccoee
oeoxoccccceoeooo ccceccce
eoxxcxcedexccxxcdxxcccccde cce
oxcoocecxocc cccede co
ecxoocedoc ccccxoeco
oxxooeox eoxxeee
oxxcc xc edceeec cdcxocx
ooxxcexe ceeeeoeo oecxxex
ccxxoxc exxxeeee ecocxxeo
xccxe x oexcceoxc xexx x
cxoxcex ceoxc oco dxxxcx
c eo cxo eoxxc odce xx x
xexxccce dxooo doee eddcdccdcccccco o ox x
cxo xe oxxcccoc c xecccccccccccc xdexex
ecxeecxo cxoc exc oocdccexxxdddcc oooxexe
eeexoexe cccc e ooeoxcoo
oeexecece ecdcocoo
eexxoocoee eodcxcexc
cocxxe cocc eeccxccxedc
cxcxxc ccoccxxceeooocccxxdxccceoce
eccccdeoxxdcxccccxcdcccceccc
cccxxccddeeeeeeocdcce
eeccccccce

View File

@@ -0,0 +1,27 @@
eeeedcccccccccoee
eeoxcccxceeoeooo eccxccce
eoxxccceodcccxcddxxcccecde ceo
oxxcxceecxcco cccede co
ocxooeoxc ccecxocco
oxxocexc ooeo ceexxeee
oxooooxc cecoecd ecxoee
oxox ox exccecoc eexdex
xxxe xxe cxxdxoee cecxoxx
xxex xxc ce xexeco exx x
xxex xc ccxxd dxo xxxex
oxod ec ecxxe exe e xxxxxe
ccxx xx oooce xo decdcdccdcccccco ecxx x
cxex cox exxoeoexc oexxcccccceccccc xeexex
xexe eee eccoo oc eecxccxxxxddddc ocxxeoc
exxd eee ecccc ocoocoo
oxxe cxxe eodcococ
ocxxe ecce eoxcocexc
e eece exccoedx eeccxccxedc
cxcxco ccxccxxceeoooccccxcxccceece
exxcoeexxcdooxxcxxxdccccexcc
cecxxxcdeeeeeeeocdcce
eeccccccce

View File

@@ -0,0 +1,27 @@
eeedcccccccdooe
eooxccccceeooe dxeccxccce
ecxoxceoedccxxcddccccceedoeccco
oocxceexcxcco cxcceo cce
exoxcddcccoo cxcxe ce
oxxecxoc oooe xcx eee
oxxoexxe deeococd cecxeeco
oxxoexco ecexeeeee coxoeee
xxcexxce eoexxeecee ecx ex
xxc cxc eecxxxoce cxxo oo
exx xx coeee dx xxx xx
xxx xxc oxxc ecoe xxxxxx
ocx xxo deoxocxcoc ocdddccccccccco xxx xx
xe xo xxoceoxc e xxocccccccccccccex excx x
xxx cox cxx ooeec eecdccccoxxxxdddceeoxxooc
xxe coe cccc eoooodx
exe coco oxcocox
exxo cocce eoxooceoc
cxxxe cccccee edcxeoxddoe
ecxxe ccxcxxccedoooccccxxxxceeece
ccceoe cccdxxxxxxcccce eccc
cccccodeeeeeeeeoddcce
eccccccce

View File

@@ -0,0 +1,27 @@
eeeoccccccccooe
oeocxccceeoxeooeccxcccoe
odocceedcccxxcddccxxxoeeeccceo
eocceodcccccx eccxecexocee
occoxxccc ccxxxdecd
exxodxc ooo cxxococo
exxxoxoc xxceexo exd eee
xxxox cxeococee cxx dxe
xooxxo cxxexeecee cxd oxo
xooxx eexxxeceeo cxx xxc
xex xcx cxx ocdxc xx oe
xee xco dcoocxocc cxc oe
xo xxx eoocoxxc occcddcccccdcco cxcexx
cxoxoeo oxcodecox coccccccocccccccx exxoox
ceo xexe coeoedc xeccccccxxoxxxxc oeceex
eeo eece cccce exexoccc
cec ecxc oxoxcoxc
cxdececxce eoxxcooxc
cxdeceoxxceeo edoxxcxeoo
cxxoeccxxxcxccddoooocccxxxcc ecc
ecceee eccccxxxxxxcccc eococ
cxcdxdoeeeeeeeeoddccc
eccccccce

View File

@@ -0,0 +1,27 @@
eeeecdccccccoeee
oeoxccceeedooeecccccccdoe
odcxcoeddcxdccddccocxccecxceox
oceeedococc oexeeccccce
oc ooccocec ccxeceeec
oe excoe oo edcoeoo
o oxex ocecx eee xcee
oe oxocx xodcoeee ee xexo
ecoexeo cxexoecxe xo dxc
x exxx cxdccoeee cx xcxx
xxcxxe eeexc dxxc xxexo
xocxxeo oco docoe xecxx
excxxxe oooedxeo edccccddddddddoe xeexe
coeexoeo ecccocc occcccccccccccxo exoxx
xeexecx ecdooe xccccccxxexxcxo eeexeo
xe xeoe ee ecoooo
xeoecocex ooexceo e
exocccoce eocoxxeoo
cxdecedxccoeo eoocexxcocc
ccxo ccxxxxcccooooocccxxcccedcoc
cccee ccccccccxoodcce ecccee
cccxxooeeeeeeeoocccc
eccccccee

View File

@@ -0,0 +1,27 @@
eoecdcccccdeee
eodxce c ccceccxcccodoe
eoc eocccdccddccedoeocxcccco
oc occeccec exceoecccexe
oc eoccoe dcce eceee
o doeo dceo coccdeox
x oooe ecex ceccxeex
x xode xxceco xxcxeox
oe ox e eecee deocoxcd
xoxxxo xoe cee ccx xecc
xoxxeo ex cec ooxxxe
xexcex cec ecc exxxxx
xocoxc xoeccoo edcccdddcdcddce ec xxxx
xoxcex exccexc xdeeccccccccccxoc xxcxoxo
xocoeee eeode ccccecceeecccxo exocec c
xecoeco oxeco o
cxecoece eceecc o
xe ecccce oocoeoeo
cxecxeedccee ecceccdceoe
cxeecexccxxeccdeeecccccocodoococ
ccxoe cccccxcccxcccxccccoxce