
Commit bb3fa0f

Numman Ali authored and committed
feat: add gpt-5.2-codex support
1 parent 909adb7 commit bb3fa0f

20 files changed: +390 −88 lines

AGENTS.md

Lines changed: 8 additions & 4 deletions
````diff
@@ -4,7 +4,7 @@ This file provides coding guidance for AI agents (including Claude Code, Codex,

 ## Overview

-This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`, and `gpt-5.1` models through their ChatGPT subscription instead of using OpenAI Platform API credits. Legacy GPT-5.0 models are automatically normalized to their GPT-5.1 equivalents.
+This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5.2-codex`, `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`, `gpt-5.2`, and `gpt-5.1` models through their ChatGPT subscription instead of using OpenAI Platform API credits. Legacy GPT-5.0 models are automatically normalized to their GPT-5.1 equivalents.

 **Key architecture principle**: 7-step fetch flow that intercepts opencode's OpenAI SDK requests, transforms them for the ChatGPT backend API, and handles OAuth token management.

````
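The first two steps of that fetch flow (token check, URL rewrite) can be sketched as small standalone helpers. This is an illustrative sketch only; the helper names, the endpoint constant, and the one-minute refresh skew are assumptions, not the plugin's actual code:

```typescript
// Hypothetical helpers for steps 1-2 of the fetch flow (names are assumptions).
const CODEX_ENDPOINT = "https://chatgpt.com/backend-api/codex/responses";

// Step 2: Responses API calls aimed at the OpenAI Platform are redirected to
// the ChatGPT backend Codex endpoint; any other URL passes through untouched.
function rewriteUrl(url: string): string {
  return url.includes("api.openai.com/v1/responses") ? CODEX_ENDPOINT : url;
}

// Step 1: treat a token as expired slightly early so a refresh happens
// before the backend would reject it.
function needsRefresh(expiresAtMs: number, nowMs: number = Date.now()): boolean {
  const SKEW_MS = 60_000; // assumed skew: refresh one minute before expiry
  return nowMs >= expiresAtMs - SKEW_MS;
}
```

A real implementation would wrap `fetch`, call `needsRefresh` before each request, and apply `rewriteUrl` plus the body transforms described below.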
````diff
@@ -41,7 +41,7 @@ The main entry point orchestrates a **7-step fetch flow**:
 1. **Token Management**: Check token expiration, refresh if needed
 2. **URL Rewriting**: Transform OpenAI Platform API URLs → ChatGPT backend API (`https://chatgpt.com/backend-api/codex/responses`)
 3. **Request Transformation**:
-   - Normalize model names (all variants → `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`, `gpt-5`, `gpt-5-codex`, or `codex-mini-latest`)
+   - Normalize model names (all variants → `gpt-5.2`, `gpt-5.2-codex`, `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`, `gpt-5`, `gpt-5-codex`, or `codex-mini-latest`)
   - Inject Codex system instructions from latest GitHub release
   - Apply reasoning configuration (effort, summary, verbosity)
   - Add CODEX_MODE bridge prompt (default) or tool remap message (legacy)
````
````diff
@@ -98,27 +98,31 @@ The main entry point orchestrates a **7-step fetch flow**:
 - Plugin defaults: `reasoningEffort: "medium"`, `reasoningSummary: "auto"`, `textVerbosity: "medium"`

 **4. Model Normalization** (GPT-5.0 → GPT-5.1 migration):
+- All `gpt-5.2-codex*` variants → `gpt-5.2-codex` (newest Codex model, supports xhigh)
 - All `gpt-5.1-codex-max*` variants → `gpt-5.1-codex-max`
 - All `gpt-5.1-codex*` variants → `gpt-5.1-codex`
 - All `gpt-5.1-codex-mini*` variants → `gpt-5.1-codex-mini`
+- All `gpt-5.2` variants → `gpt-5.2`
 - All `gpt-5.1` variants → `gpt-5.1`
 - **Legacy mappings** (GPT-5.0 being phased out):
   - `gpt-5-codex*` variants → `gpt-5.1-codex`
   - `gpt-5-codex-mini*` or `codex-mini-latest` → `gpt-5.1-codex-mini`
   - `gpt-5*` variants (including `gpt-5-mini`, `gpt-5-nano`) → `gpt-5.1`
-- `minimal` effort auto-normalized to `low` for Codex families and clamped to `medium` (or `high` when requested) for Codex Mini
+- `minimal` effort auto-normalized to `low` for Codex families (including GPT-5.2 Codex) and clamped to `medium` (or `high` when requested) for Codex Mini

 **5. Model-Specific Prompt Selection**:
 - Different prompts for different model families (matching Codex CLI):
+  - `gpt-5.2-codex*` → `gpt-5.2-codex_prompt.md` (117 lines, Codex CLI agent prompt)
   - `gpt-5.1-codex-max*` → `gpt-5.1-codex-max_prompt.md` (117 lines, frontend design guidelines)
   - `gpt-5.1-codex*`, `codex-*` → `gpt_5_codex_prompt.md` (105 lines, coding focus)
+  - `gpt-5.2*` → `gpt_5_2_prompt.md` (GPT-5.2 general family)
   - `gpt-5.1*` → `gpt_5_1_prompt.md` (368 lines, full behavioral guidance)
 - `getModelFamily()` determines prompt selection based on normalized model

 **6. Codex Instructions Caching**:
 - Fetches from latest release tag (not main branch)
 - ETag-based HTTP conditional requests per model family
-- Separate cache files per family: `codex-max-instructions.md`, `codex-instructions.md`, `gpt-5.1-instructions.md`
+- Separate cache files per family: `gpt-5.2-codex-instructions.md`, `codex-max-instructions.md`, `codex-instructions.md`, `gpt-5.2-instructions.md`, `gpt-5.1-instructions.md`
 - Cache invalidation when release tag changes
 - Falls back to bundled version if GitHub unavailable
````
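The normalization rules in step 4 amount to ordered prefix matching. The following is a hypothetical sketch (the function name and exact structure are assumptions, not the plugin's real code):

```typescript
// Illustrative sketch of step 4's model-name normalization.
function normalizeModel(model: string): string {
  const m = model.toLowerCase();
  if (m.startsWith("gpt-5.2-codex")) return "gpt-5.2-codex";
  if (m.startsWith("gpt-5.1-codex-max")) return "gpt-5.1-codex-max";
  // Note: the mini prefix must be tested before the bare codex prefix,
  // since "gpt-5.1-codex-mini-high" also starts with "gpt-5.1-codex".
  if (m.startsWith("gpt-5.1-codex-mini")) return "gpt-5.1-codex-mini";
  if (m.startsWith("gpt-5.1-codex")) return "gpt-5.1-codex";
  if (m.startsWith("gpt-5.2")) return "gpt-5.2";
  if (m.startsWith("gpt-5.1")) return "gpt-5.1";
  // Legacy GPT-5.0 mappings (being phased out)
  if (m.startsWith("gpt-5-codex-mini") || m === "codex-mini-latest") return "gpt-5.1-codex-mini";
  if (m.startsWith("gpt-5-codex")) return "gpt-5.1-codex";
  if (m.startsWith("gpt-5")) return "gpt-5.1"; // includes gpt-5-mini, gpt-5-nano
  return model; // unknown models pass through unchanged
}
```

Match order matters throughout: more specific prefixes must be checked before the shorter prefixes they contain.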

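Step 6's caching behavior reduces to two decisions: a release-tag check that invalidates outright, and an `If-None-Match` header that lets GitHub answer `304 Not Modified`. A sketch under an assumed cache shape (the type and function names are illustrative):

```typescript
// Hypothetical cache entry for one model family's instructions file.
type CacheState = { etag: string; releaseTag: string; body: string };

// A new Codex CLI release tag invalidates the cached instructions outright,
// even if the stored ETag would still match.
function shouldInvalidate(cached: CacheState | undefined, latestTag: string): boolean {
  return cached === undefined || cached.releaseTag !== latestTag;
}

// Otherwise, send If-None-Match so the server can reply 304 Not Modified
// and the cached (or bundled fallback) copy is reused.
function conditionalHeaders(cached: CacheState | undefined): Record<string, string> {
  return cached ? { "If-None-Match": cached.etag } : {};
}
```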
CHANGELOG.md

Lines changed: 19 additions & 0 deletions
````diff
@@ -2,6 +2,25 @@

 All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD).

+## [4.2.0] - 2025-12-19
+
+**Feature release**: GPT 5.2 Codex support and prompt alignment with latest Codex CLI.
+
+### Added
+- **GPT 5.2 Codex model family**: Full support for `gpt-5.2-codex` with presets:
+  - `gpt-5.2-codex-low` - Fast GPT 5.2 Codex responses
+  - `gpt-5.2-codex-medium` - Balanced GPT 5.2 Codex tasks
+  - `gpt-5.2-codex-high` - Complex GPT 5.2 Codex reasoning & tools
+  - `gpt-5.2-codex-xhigh` - Deep GPT 5.2 Codex long-horizon work
+- **New model family prompt**: `gpt-5.2-codex_prompt.md` fetched from the latest Codex CLI release with its own cache file.
+- **Test coverage**: Added unit tests for GPT 5.2 Codex normalization, family selection, and reasoning behavior.
+
+### Changed
+- **Prompt selection alignment**: GPT 5.2 general now uses `gpt_5_2_prompt.md` (Codex CLI parity).
+- **Reasoning configuration**: GPT 5.2 Codex supports `xhigh` but does **not** support `"none"`; `"none"` auto-upgrades to `"low"` and `"minimal"` normalizes to `"low"`.
+- **Config presets**: `config/full-opencode.json` now includes 22 pre-configured variants (adds GPT 5.2 Codex).
+- **Docs**: Updated README/AGENTS/config docs to include GPT 5.2 Codex and new model family behavior.
+
 ## [4.1.1] - 2025-12-17

 **Minor release**: "none" reasoning effort support, orphaned function_call_output fix, and HTML version update.
````

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -36,7 +36,7 @@ All contributions MUST:
 ## Code Standards

 - **TypeScript:** All code must be TypeScript with strict type checking
-- **Testing:** Include tests for new functionality (we maintain 160+ unit tests)
+- **Testing:** Include tests for new functionality (we maintain 200+ unit tests)
 - **Documentation:** Update README.md for user-facing changes
 - **Modular design:** Keep functions focused and under 40 lines
 - **No external dependencies:** Minimize dependencies (currently only @openauthjs/openauth)
````

README.md

Lines changed: 30 additions & 23 deletions
````diff
@@ -33,8 +33,8 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
 ## Features

 - **ChatGPT Plus/Pro OAuth authentication** - Use your existing subscription
-- **18 pre-configured model variants** - GPT 5.2, GPT 5.1, GPT 5.1 Codex, GPT 5.1 Codex Max, and GPT 5.1 Codex Mini presets for all reasoning levels
-- **GPT 5.2 support** - Latest model with `low/medium/high/xhigh` reasoning levels
+- **22 pre-configured model variants** - GPT 5.2, GPT 5.2 Codex, GPT 5.1, GPT 5.1 Codex, GPT 5.1 Codex Max, and GPT 5.1 Codex Mini presets for all reasoning levels
+- **GPT 5.2 + GPT 5.2 Codex support** - Latest models with `low/medium/high/xhigh` reasoning levels (Codex excludes `none`)
 - **Full image input support** - All models configured with multimodal capabilities for reading screenshots, diagrams, and images
 - ⚠️ **GPT 5.1+ only** - Older GPT 5.0 models are deprecated and may not work reliably
 - **Zero external dependencies** - Lightweight with only @openauthjs/openauth
@@ -46,7 +46,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
 - **Automatic tool remapping** - Codex tools → opencode tools
 - **Configurable reasoning** - Control effort, summary verbosity, and text output
 - **Usage-aware errors** - Shows clear guidance when ChatGPT subscription limits are reached
-- **Type-safe & tested** - Strict TypeScript with 193 unit tests + 16 integration tests
+- **Type-safe & tested** - Strict TypeScript with 200+ unit tests + integration tests
 - **Modular architecture** - Easy to maintain and extend

 ## Installation
@@ -62,7 +62,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
 #### Recommended: Pin the Version

 ```json
-"plugin": ["opencode-openai-codex-auth@4.1.1"]
+"plugin": ["opencode-openai-codex-auth@4.2.0"]
 ```

 **Why pin versions?** OpenCode uses Bun's lockfile which pins resolved versions. If you use `"opencode-openai-codex-auth"` without a version, it resolves to "latest" once and **never updates** even when new versions are published.
@@ -76,7 +76,7 @@ Simply change the version in your config and restart OpenCode:
 "plugin": ["opencode-openai-codex-auth@3.3.0"]

 // To:
-"plugin": ["opencode-openai-codex-auth@4.1.1"]
+"plugin": ["opencode-openai-codex-auth@4.2.0"]
 ```

 OpenCode will detect the version mismatch and install the new version automatically.
@@ -107,12 +107,12 @@ Check [releases](https://github.com/numman-ali/opencode-openai-codex-auth/releas

 1. **Copy the full configuration** from [`config/full-opencode.json`](./config/full-opencode.json) to your opencode config file.

-   The config includes 18 models with image input support. Here's a condensed example showing the structure:
+   The config includes 22 models with image input support. Here's a condensed example showing the structure:

    ```json
    {
      "$schema": "https://opencode.ai/config.json",
-     "plugin": ["opencode-openai-codex-auth@4.1.1"],
+     "plugin": ["opencode-openai-codex-auth@4.2.0"],
      "provider": {
        "openai": {
          "options": {
@@ -147,7 +147,7 @@ Check [releases](https://github.com/numman-ali/opencode-openai-codex-auth/releas
            "store": false
          }
        }
-       // ... 14 more models - see config/full-opencode.json for complete list
+       // ... 20 more models - see config/full-opencode.json for complete list
      }
    }
    }
@@ -159,8 +159,9 @@ Check [releases](https://github.com/numman-ali/opencode-openai-codex-auth/releas
 **Global config**: `~/.config/opencode/opencode.json`
 **Project config**: `<project>/.opencode.json`

-This gives you 18 model variants with different reasoning levels:
+This gives you 22 model variants with different reasoning levels:
 - **gpt-5.2** (none/low/medium/high/xhigh) - Latest GPT 5.2 model with full reasoning support
+- **gpt-5.2-codex** (low/medium/high/xhigh) - GPT 5.2 Codex presets
 - **gpt-5.1-codex-max** (low/medium/high/xhigh) - Codex Max presets
 - **gpt-5.1-codex** (low/medium/high) - Codex model presets
 - **gpt-5.1-codex-mini** (medium/high) - Codex mini tier presets
@@ -238,10 +239,15 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t

 | CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
 |--------------|------------------|-----------------|----------|
+| `gpt-5.2-none` | GPT 5.2 None (OAuth) | None | Fastest GPT 5.2 responses (no reasoning) |
 | `gpt-5.2-low` | GPT 5.2 Low (OAuth) | Low | Fast GPT 5.2 responses |
 | `gpt-5.2-medium` | GPT 5.2 Medium (OAuth) | Medium | Balanced GPT 5.2 tasks |
 | `gpt-5.2-high` | GPT 5.2 High (OAuth) | High | Complex GPT 5.2 reasoning |
 | `gpt-5.2-xhigh` | GPT 5.2 Extra High (OAuth) | xHigh | Deep GPT 5.2 analysis |
+| `gpt-5.2-codex-low` | GPT 5.2 Codex Low (OAuth) | Low | Fast GPT 5.2 Codex responses |
+| `gpt-5.2-codex-medium` | GPT 5.2 Codex Medium (OAuth) | Medium | Balanced GPT 5.2 Codex coding tasks |
+| `gpt-5.2-codex-high` | GPT 5.2 Codex High (OAuth) | High | Complex GPT 5.2 Codex reasoning & tools |
+| `gpt-5.2-codex-xhigh` | GPT 5.2 Codex Extra High (OAuth) | xHigh | Deep GPT 5.2 Codex long-horizon work |
 | `gpt-5.1-codex-max-low` | GPT 5.1 Codex Max Low (OAuth) | Low | Fast exploratory large-context work |
 | `gpt-5.1-codex-max-medium` | GPT 5.1 Codex Max Medium (OAuth) | Medium | Balanced large-context builds |
 | `gpt-5.1-codex-max-high` | GPT 5.1 Codex Max High (OAuth) | High | Long-horizon builds, large refactors |
@@ -251,6 +257,7 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t
 | `gpt-5.1-codex-high` | GPT 5.1 Codex High (OAuth) | High | Complex code & tools |
 | `gpt-5.1-codex-mini-medium` | GPT 5.1 Codex Mini Medium (OAuth) | Medium | Lightweight Codex mini tier |
 | `gpt-5.1-codex-mini-high` | GPT 5.1 Codex Mini High (OAuth) | High | Codex Mini with maximum reasoning |
+| `gpt-5.1-none` | GPT 5.1 None (OAuth) | None | Fastest GPT 5.1 responses (no reasoning) |
 | `gpt-5.1-low` | GPT 5.1 Low (OAuth) | Low | Faster responses with light reasoning |
 | `gpt-5.1-medium` | GPT 5.1 Medium (OAuth) | Medium | Balanced general-purpose tasks |
 | `gpt-5.1-high` | GPT 5.1 High (OAuth) | High | Deep reasoning, complex problems |
@@ -260,7 +267,7 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t

 > **Note**: All `gpt-5.1-codex-mini*` presets map directly to the `gpt-5.1-codex-mini` slug with standard Codex limits (272k context / 128k output).
 >
-> **Note**: GPT 5.2 and Codex Max both support `xhigh` reasoning. Use explicit reasoning levels (e.g., `gpt-5.2-high`, `gpt-5.1-codex-max-xhigh`) for precise control.
+> **Note**: GPT 5.2, GPT 5.2 Codex, and Codex Max all support `xhigh` reasoning. Use explicit reasoning levels (e.g., `gpt-5.2-high`, `gpt-5.2-codex-xhigh`, `gpt-5.1-codex-max-xhigh`) for precise control.

 > **⚠️ Important**: GPT 5 models can be temperamental - some variants may work better than others, some may give errors, and behavior may vary. Stick to the presets above configured in `full-opencode.json` for best results.
@@ -296,16 +303,16 @@ When no configuration is specified, the plugin uses these defaults for all GPT-5
 - **`reasoningSummary: "auto"`** - Automatically adapts summary verbosity
 - **`textVerbosity: "medium"`** - Balanced output length

-Codex Max defaults to `reasoningEffort: "high"` when selected, while other families default to `medium`.
+Codex Max, GPT 5.2, and GPT 5.2 Codex default to `reasoningEffort: "high"` when selected, while other families default to `medium`.

-These defaults match the official Codex CLI behavior and can be customized (see Configuration below).
+These defaults are tuned for Codex CLI-style usage and can be customized (see Configuration below).

 ## Configuration

 ### ⚠️ REQUIRED: Use Pre-Configured File

 **YOU MUST use [`config/full-opencode.json`](./config/full-opencode.json)** - this is the only officially supported configuration:
-- 18 pre-configured model variants (GPT 5.2, GPT 5.1, Codex, Codex Max, Codex Mini)
+- 22 pre-configured model variants (GPT 5.2, GPT 5.2 Codex, GPT 5.1, Codex, Codex Max, Codex Mini)
 - Image input support enabled for all models
 - Optimal configuration for each reasoning level
 - All variants visible in the opencode model selector
@@ -323,17 +330,17 @@ If you want to customize settings yourself, you can configure options at provide

 ⚠️ **Important**: Families have different supported values.

-| Setting | GPT-5.2 Values | GPT-5.1 Values | GPT-5.1-Codex Values | GPT-5.1-Codex-Max Values | Plugin Default |
-|---------|---------------|----------------|----------------------|---------------------------|----------------|
-| `reasoningEffort` | `none`, `low`, `medium`, `high`, `xhigh` | `none`, `low`, `medium`, `high` | `low`, `medium`, `high` | `low`, `medium`, `high`, `xhigh` | `medium` (global), `high` for Codex Max/5.2 |
-| `reasoningSummary` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed`, `off`, `on` | `auto` |
-| `textVerbosity` | `low`, `medium`, `high` | `low`, `medium`, `high` | `medium` or `high` | `medium` or `high` | `medium` |
-| `include` | Array of strings | Array of strings | Array of strings | Array of strings | `["reasoning.encrypted_content"]` |
+| Setting | GPT-5.2 Values | GPT-5.2-Codex Values | GPT-5.1 Values | GPT-5.1-Codex Values | GPT-5.1-Codex-Max Values | Plugin Default |
+|---------|---------------|----------------------|----------------|----------------------|---------------------------|----------------|
+| `reasoningEffort` | `none`, `low`, `medium`, `high`, `xhigh` | `low`, `medium`, `high`, `xhigh` | `none`, `low`, `medium`, `high` | `low`, `medium`, `high` | `low`, `medium`, `high`, `xhigh` | `medium` (global), `high` for Codex Max/5.2/5.2 Codex |
+| `reasoningSummary` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed`, `off`, `on` | `auto` |
+| `textVerbosity` | `low`, `medium`, `high` | `medium` or `high` | `low`, `medium`, `high` | `medium` or `high` | `medium` or `high` | `medium` |
+| `include` | Array of strings | Array of strings | Array of strings | Array of strings | Array of strings | `["reasoning.encrypted_content"]` |

 > **Notes**:
 > - GPT 5.2 and GPT 5.1 (general purpose) support `none` reasoning per OpenAI API docs.
-> - `none` is NOT supported for Codex variants - auto-converts to `low` for Codex/Codex Max, or `medium` for Codex Mini.
-> - GPT 5.2 and Codex Max support `xhigh` reasoning.
+> - `none` is NOT supported for Codex variants (including GPT 5.2 Codex) - auto-converts to `low` for Codex/Codex Max, or `medium` for Codex Mini.
+> - GPT 5.2, GPT 5.2 Codex, and Codex Max support `xhigh` reasoning.
 > - `minimal` effort is auto-normalized to `low` for Codex models.
 > - Codex Mini clamps to `medium`/`high`; `xhigh` downgrades to `high`.
 > - All models have `modalities.input: ["text", "image"]` enabled for multimodal support.
````
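The clamping rules in those notes can be summarized as one pure function. This is an illustrative sketch covering only the documented conversions; the type alias and `clampEffort` name are assumptions:

```typescript
// Sketch of per-family reasoning-effort clamping (names are assumptions).
type Effort = "none" | "minimal" | "low" | "medium" | "high" | "xhigh";

function clampEffort(family: string, effort: Effort): Effort {
  let e: Effort = effort === "minimal" ? "low" : effort; // minimal → low
  const isCodex = family.includes("codex");
  if (e === "none" && isCodex) e = "low"; // Codex families reject "none"
  if (family === "gpt-5.1-codex-mini") {
    if (e === "low") return "medium"; // Mini clamps up to medium
    if (e === "xhigh") return "high"; // and xhigh downgrades to high
  }
  return e;
}
```

Note that for Codex Mini, `none` first converts to `low` and then clamps up to `medium`, matching the "`medium` for Codex Mini" rule above.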
````diff
@@ -345,7 +352,7 @@ Apply settings to all models:
 ```json
 {
   "$schema": "https://opencode.ai/config.json",
-  "plugin": ["opencode-openai-codex-auth@4.1.1"],
+  "plugin": ["opencode-openai-codex-auth@4.2.0"],
   "model": "openai/gpt-5-codex",
   "provider": {
     "openai": {
@@ -365,7 +372,7 @@ Create your own named variants in the model selector:
 ```json
 {
   "$schema": "https://opencode.ai/config.json",
-  "plugin": ["opencode-openai-codex-auth@4.1.1"],
+  "plugin": ["opencode-openai-codex-auth@4.2.0"],
   "provider": {
     "openai": {
       "models": {
````

config/README.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -14,13 +14,13 @@ cp config/full-opencode.json ~/.config/opencode/opencode.json

 **Why this is required:**
 - GPT 5 models can be temperamental and need proper configuration
-- Contains 12+ verified GPT 5.1 model variants (Codex, Codex Max, Codex Mini, and general GPT 5.1 including `gpt-5.1-codex-max-low/medium/high/xhigh`)
+- Contains 22 verified GPT 5.2/5.1 model variants (GPT 5.2, GPT 5.2 Codex, Codex, Codex Max, Codex Mini, and general GPT 5.1 including `gpt-5.1-codex-max-low/medium/high/xhigh`)
 - Includes all required metadata for OpenCode features
 - Guaranteed to work reliably
 - Global options for all models + per-model configuration overrides

 **What's included:**
-- All supported GPT 5.1 variants: gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini
+- All supported GPT 5.2/5.1 variants: gpt-5.2, gpt-5.2-codex, gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini
 - Proper reasoning effort settings for each variant (including new `xhigh` for Codex Max)
 - Context limits (272k context / 128k output for all Codex families, including Codex Max)
 - Required options: `store: false`, `include: ["reasoning.encrypted_content"]`
````
