adding --ollama

This commit is contained in:
pap
2025-08-03 17:44:34 +01:00
parent dc15a5cf0b
commit 2cfb2a2265
9 changed files with 612 additions and 3 deletions

@@ -3,6 +3,7 @@
Codex supports several mechanisms for setting config values:
- Config-specific command-line flags, such as `--model o3` (highest precedence).
- Convenience provider flags, such as `--ollama` (equivalent to `-c model_provider=ollama`).
- A generic `-c`/`--config` flag that takes a `key=value` pair, such as `--config model="o3"`.
- The key can contain dots to set a value deeper than the root, e.g. `--config model_providers.openai.wire_api="chat"`.
- Values can contain objects, such as `--config shell_environment_policy.include_only=["PATH", "HOME", "USER"]`.
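For example, these mechanisms can be combined on a single invocation. A sketch, assuming the CLI is invoked as `codex`; note that object/array values need shell quoting so the brackets reach the parser intact:
```shell
# Highest precedence: a config-specific flag.
codex --model o3

# Convenience provider flag, equivalent to -c model_provider=ollama.
codex --ollama

# Generic -c/--config with a dotted key to set a nested value.
codex --config model_providers.openai.wire_api="chat"

# Object/array values must be quoted so the shell passes them through verbatim.
codex --config 'shell_environment_policy.include_only=["PATH", "HOME", "USER"]'
```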
@@ -56,6 +57,13 @@ name = "Ollama"
base_url = "http://localhost:11434/v1"
```
Alternatively, you can pass `--ollama` on the CLI, which is equivalent to `-c model_provider=ollama`.
When `--ollama` is used, Codex verifies that an Ollama server is running locally and
creates a `[model_providers.ollama]` entry in your `config.toml` with sensible defaults
(`base_url = "http://localhost:11434/v1"`, `wire_api = "chat"`) if no such entry exists yet.
If no running Ollama server is detected, Codex prints instructions for installing and
starting Ollama (https://github.com/ollama/ollama?tab=readme-ov-file#ollama) and exits.
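For reference, the auto-created entry matches the provider block shown above; a sketch of what Codex writes when no `[model_providers.ollama]` entry exists yet:
```toml
# Created automatically by `codex --ollama` if absent (defaults from above).
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
wire_api = "chat"
```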
Or a third-party provider (using a distinct environment variable for the API key):
```toml