Add a new `request_compression` model_provider option to enable request
compression. We support `zstd` and `gzip`; the server also supports brotli.
You can test this against the Sign in with ChatGPT flow by adding the
following profile:
```toml
[profiles.compressed]
name = "compressed"
model_provider = "openai-zstd"
[model_providers.openai-zstd]
name = "OpenAI (ChatGPT, zstd)"
wire_api = "responses"
request_compression = "zstd"
requires_openai_auth = true
```
This will zstd-compress your request body before sending it to the server.
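For illustration, the effect of request compression can be sketched in Python using gzip (the other supported encoding); `compress_request` is a hypothetical helper, not the actual codex client code:

```python
import gzip
import json


def compress_request(body: dict, encoding: str = "gzip") -> tuple[bytes, dict]:
    """Serialize and compress a request body, returning (payload, headers).

    Illustrative sketch only: the real client picks the encoding from the
    model_provider's `request_compression` setting.
    """
    raw = json.dumps(body).encode("utf-8")
    if encoding == "gzip":
        payload = gzip.compress(raw)
    else:
        raise ValueError(f"unsupported encoding: {encoding}")
    # Content-Encoding tells the server how to decompress the body.
    headers = {"Content-Encoding": encoding, "Content-Type": "application/json"}
    return payload, headers


payload, headers = compress_request({"model": "gpt-5", "input": "hello"})
```

The server decompresses based on the `Content-Encoding` header, so the JSON round-trips unchanged.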
Fixes https://github.com/openai/codex/issues/8479

The issue is that the Chat Completions API expects all the tool calls in a
single assistant message, followed by all the tool call output in a single
response message.
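The fix can be sketched as a history-normalization pass (illustrative names and message shapes only, not the codex internals): consecutive assistant messages that each carry one tool call are collapsed into a single assistant message with a combined `tool_calls` list, and the tool outputs follow it:

```python
def merge_tool_calls(messages: list[dict]) -> list[dict]:
    """Collapse consecutive assistant tool-call messages into one.

    Illustrative sketch: message dicts mimic the Chat Completions shape,
    but this is not the actual codex implementation.
    """
    merged: list[dict] = []
    for msg in messages:
        prev = merged[-1] if merged else None
        if (
            msg.get("role") == "assistant" and msg.get("tool_calls")
            and prev is not None
            and prev.get("role") == "assistant" and prev.get("tool_calls")
        ):
            # Fold this tool call into the previous assistant message.
            prev["tool_calls"] = prev["tool_calls"] + msg["tool_calls"]
        else:
            merged.append(dict(msg))
    return merged


history = [
    {"role": "assistant", "tool_calls": [{"id": "a", "function": {"name": "ls"}}]},
    {"role": "assistant", "tool_calls": [{"id": "b", "function": {"name": "cat"}}]},
    {"role": "tool", "tool_call_id": "a", "content": "ok"},
    {"role": "tool", "tool_call_id": "b", "content": "ok"},
]
fixed = merge_tool_calls(history)
# fixed[0] now carries both tool calls; the tool outputs follow it.
```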