
Commit fd2cd82

feat(consult,debate): update Gemini 3.1 and Codex model defaults (#234) (#244)
* feat(consult,debate): update Gemini 3.1 as default for high effort tier (#234)

  Update the Gemini model default for the `high` effort tier from `gemini-3-pro-preview` to `gemini-3.1-pro-preview` across all consult and debate configuration files. The `max` tier already uses `gemini-3.1-pro-preview` and is unchanged. Updated across 3 platforms (Claude Code plugins, OpenCode adapter, Codex adapter) in skill files, command files, and README.

* fix(consult,debate): address review findings from gemini-3.1 update

  - Update Copilot picker labels from gemini-3-pro to gemini-3.1-pro in plugins/consult/commands/consult.md, adapters/opencode/commands/consult.md, adapters/codex/skills/consult/SKILL.md
  - Add gemini-3.1-pro-preview to expectedModels assertion in debate-command.test.js to catch regressions
  - Add gemini high-effort model assertion in debate-command.test.js for consult skill adapter sync
  - Update docs/consult-command-test-strategy.md stale model references

* fix(consult,debate): update stale Codex and Gemini low-tier model defaults

  - Codex: replace o4-mini/o3 with gpt-5.3-codex across all effort tiers in consult and debate skill files, command files, and adapters
  - Gemini low tier: replace gemini-2.5-flash with gemini-3-flash-preview (now consistent: low=gemini-3-flash-preview, medium=gemini-3-flash-preview, high/max=gemini-3.1-pro-preview)
  - Update model picker label for Gemini flash in consult command files
  - Update README, top picks, and test strategy doc
  - Fix debate-command.test.js expectedModels and consult adapter sync assertions to reflect current model names (remove o4-mini/o3/gemini-2.5-flash, add gpt-5.3-codex/gemini-3-flash-preview/gemini-3.1-pro-preview)

* fix(consult,debate): use full gemini-3.1-pro-preview API name consistently

  - Update picker labels and example invocations from 'gemini-3.1-pro' to 'gemini-3.1-pro-preview' to match the effort table API model name (plugins/consult/commands, adapters/opencode/commands/consult, adapters/codex/skills/consult)
  - Fix debate state-schema JSON examples in plugins/debate/skills and adapters/opencode/skills/debate to use 'gemini-3.1-pro-preview'
  - Update docs/consult-command-test-strategy.md to use full preview name
  - Strengthen test regression guard to cover both high and max rows

* docs: add CHANGELOG entry for gemini-3.1 model defaults update (#234)
1 parent beb6da8 commit fd2cd82
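The model-default changes described above reduce to a small effort-to-model lookup. A minimal JavaScript sketch of the post-commit mapping (`getGeminiModel` is named after the helper referenced in docs/consult-command-test-strategy.md; `getCodexModel` is a hypothetical name for illustration — the real defaults live in the skill/command markdown files, not in code):

```javascript
// Post-commit Gemini defaults: low/medium use the flash preview,
// high/max use the 3.1 pro preview.
const GEMINI_MODELS = {
  low: 'gemini-3-flash-preview',
  medium: 'gemini-3-flash-preview',
  high: 'gemini-3.1-pro-preview',
  max: 'gemini-3.1-pro-preview',
};

function getGeminiModel(effort) {
  return GEMINI_MODELS[effort];
}

// Codex now uses a single model for every tier; only the
// model_reasoning_effort setting varies with effort.
function getCodexModel(effort) {
  return 'gpt-5.3-codex';
}

console.log(getGeminiModel('high')); // 'gemini-3.1-pro-preview'
console.log(getCodexModel('low'));   // 'gpt-5.3-codex'
```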

File tree: 14 files changed (+86, -77 lines)

CHANGELOG.md

Lines changed: 2 additions & 0 deletions

@@ -21,6 +21,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 - **`/debate` External Tool Quick Reference** — Added a "External Tool Quick Reference" section to all copies of the debate skill (`plugins/debate/skills/debate/SKILL.md`, OpenCode and Codex adapters) with safe command patterns, effort-to-model mapping tables, and output parsing expressions. The section includes a canonical-source pointer to `plugins/consult/skills/consult/SKILL.md` so the debate orchestrator doesn't duplicate provider logic. Added pointer notes in `debate-orchestrator` agents. Fixes issue #232.
 
+- **`/consult` and `/debate` model defaults update** — Gemini high/max effort now uses `gemini-3.1-pro-preview`; Gemini low/medium uses `gemini-3-flash-preview`. Codex uses `gpt-5.3-codex` for all effort tiers. Updated across all platforms: Claude Code plugin, OpenCode adapter, and Codex adapter for both consult and debate skills and commands. Fixes issue #234.
+
 - **`/consult` model name updates** — Updated stale model names in the consult skill: Codex models are now `o4-mini` (low/medium) and `o3` (high/max); Gemini models include `gemini-3-flash-preview`, `gemini-3-pro-preview`, and `gemini-3.1-pro-preview`. Synced to OpenCode adapter consult skill. Fixes issue #232.
 
 - **`/next-task` Phase 12 ship invocation** — Phase 12 now invokes `ship:ship` via `await Skill({ name: "ship:ship", args: ... })` instead of `Task({ subagent_type: "ship:ship", ... })`. `ship:ship` is a skill, not an agent; the previous `Task()` call silently failed, leaving the workflow stuck after delivery validation with no PR created. The Codex adapter is updated in parity and regression tests are added. Fixes issue #230.

README.md

Lines changed: 2 additions & 2 deletions

@@ -651,8 +651,8 @@ agent-knowledge/
 | Tool | Default Model (high) | Reasoning Control |
 |------|---------------------|-------------------|
 | Claude | claude-opus-4-6 | max-turns |
-| Gemini | gemini-3-pro-preview | built-in |
-| Codex | o3 | model_reasoning_effort |
+| Gemini | gemini-3.1-pro-preview | built-in |
+| Codex | gpt-5.3-codex | model_reasoning_effort |
 | OpenCode | (user-selected or default) | --variant |
 | Copilot | (default) | none |

__tests__/debate-command.test.js

Lines changed: 16 additions & 9 deletions

@@ -696,7 +696,7 @@ describe('external tool quick reference (#232)', () => {
   });
 
   test('current model names present in effort-to-model mapping of each skill copy', () => {
-    const expectedModels = ['claude-haiku-4-5', 'claude-sonnet-4-6', 'claude-opus-4-6', 'o4-mini', 'o3', 'gemini-2.5-flash'];
+    const expectedModels = ['claude-haiku-4-5', 'claude-sonnet-4-6', 'claude-opus-4-6', 'gpt-5.3-codex', 'gemini-3-flash-preview', 'gemini-3.1-pro-preview'];
     for (const content of allDebateSkillContents()) {
       for (const model of expectedModels) {
         expect(content).toMatch(new RegExp(`Effort-to-Model Mapping[\\s\\S]*${model}`));

@@ -719,19 +719,26 @@ describe('consult skill opencode adapter sync (#232)', () => {
     expect(openCodeConsultSkillContent).toContain('claude-opus-4-6');
   });
 
-  test('opencode consult adapter has updated codex model names (no speculative gpt-5.x)', () => {
-    expect(openCodeConsultSkillContent).not.toContain('gpt-5.3-codex');
-    expect(openCodeConsultSkillContent).not.toContain('gpt-5.2-codex');
-    expect(openCodeConsultSkillContent).toContain('o4-mini');
-    expect(openCodeConsultSkillContent).toContain('o3');
+  test('opencode consult adapter has updated codex model names', () => {
+    expect(openCodeConsultSkillContent).toContain('gpt-5.3-codex');
+    expect(openCodeConsultSkillContent).not.toContain('o4-mini');
+    expect(openCodeConsultSkillContent).not.toMatch(/\|\s*(?:low|medium|high|max)\s*\|\s*o3\s*\|/);
   });
 
   test('canonical consult skill has updated model names', () => {
     expect(consultSkillContent).toContain('claude-haiku-4-5');
     expect(consultSkillContent).toContain('claude-sonnet-4-6');
     expect(consultSkillContent).toContain('claude-opus-4-6');
-    expect(consultSkillContent).not.toContain('gpt-5.3-codex');
-    expect(consultSkillContent).toContain('o4-mini');
-    expect(consultSkillContent).toContain('o3');
+    expect(consultSkillContent).toContain('gpt-5.3-codex');
+    expect(consultSkillContent).not.toContain('o4-mini');
+    expect(consultSkillContent).not.toMatch(/\|\s*(?:low|medium|high|max)\s*\|\s*o3\s*\|/);
+  });
+
+  test('consult skill uses gemini-3.1-pro-preview as high-effort Gemini default (#234)', () => {
+    expect(consultSkillContent).toContain('gemini-3.1-pro-preview');
+    expect(openCodeConsultSkillContent).toContain('gemini-3.1-pro-preview');
+    // Ensure old model is not used as high/max default (may still appear in the models list)
+    expect(consultSkillContent).not.toMatch(/\|\s*(?:high|max)\s*\|\s*gemini-3-pro-preview/);
+    expect(openCodeConsultSkillContent).not.toMatch(/\|\s*(?:high|max)\s*\|\s*gemini-3-pro-preview/);
   });
 });
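The `not.toMatch` guards in these tests are row-scoped on purpose: a bare `not.toContain('gemini-3-pro-preview')` would fail whenever the old model is merely listed among available models, while the regex only trips when the old name sits in a high/max *table row*. A quick sketch of that distinction (the sample strings here are illustrative, not quoted from the skill files):

```javascript
// Row-scoped guard taken from the test diff above: matches an
// effort-table row like "| high | gemini-3-pro-preview |" but not
// an incidental mention of the model name in prose or a models list.
const highMaxOldDefault = /\|\s*(?:high|max)\s*\|\s*gemini-3-pro-preview/;

const oldTableRow = '| high | gemini-3-pro-preview |';
const newTableRow = '| high | gemini-3.1-pro-preview |';
const modelsList  = 'Models: gemini-3-pro-preview, gemini-3.1-pro-preview';

console.log(highMaxOldDefault.test(oldTableRow)); // true — would fail the test
console.log(highMaxOldDefault.test(newTableRow)); // false — '3.1' breaks the literal match
console.log(highMaxOldDefault.test(modelsList));  // false — allowed mention, no table row
```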

adapters/codex/skills/consult/SKILL.md

Lines changed: 4 additions & 4 deletions

@@ -169,8 +169,8 @@ request_user_input:
   - header: "Model"
     question: "Which Gemini model?"
     options:
-      - label: "gemini-3-pro" description: "Most capable, strong reasoning"
-      - label: "gemini-3-flash" description: "Fast, 78% SWE-bench"
+      - label: "gemini-3.1-pro-preview" description: "Most capable, strong reasoning"
+      - label: "gemini-3-flash-preview" description: "Fast, efficient coding"
       - label: "gemini-2.5-pro" description: "Previous gen pro model"
       - label: "gemini-2.5-flash" description: "Previous gen flash model"
 ```

@@ -214,7 +214,7 @@ request_user_input:
   - label: "claude-sonnet-4-5" description: "Default Copilot model"
   - label: "claude-opus-4-6" description: "Most capable Claude model"
   - label: "gpt-5.3-codex" description: "OpenAI GPT-5.3 Codex"
-  - label: "gemini-3-pro" description: "Google Gemini 3 Pro"
+  - label: "gemini-3.1-pro-preview" description: "Google Gemini 3.1 Pro"
 ```
 
 Map the user's choice to the model string (strip " (Recommended)" suffix if present).

@@ -233,7 +233,7 @@ Invoke the `consult` skill directly using the Skill tool:
 Skill: consult
 Args: "[question]" --tool=[tool] --effort=[effort] --model=[model] [--context=[context]] [--continue=[session_id]]
 
-Example: "Is this the right approach?" --tool=gemini --effort=high --model=gemini-3-pro
+Example: "Is this the right approach?" --tool=gemini --effort=high --model=gemini-3.1-pro-preview
 ```
 
 The skill handles the full consultation lifecycle: model resolution, command building, context packaging, execution with 120s timeout, and returns a plain JSON result.

adapters/codex/skills/debate/SKILL.md

Lines changed: 4 additions & 4 deletions

@@ -289,10 +289,10 @@ Read the consult skill file to get the exact patterns and replacements.
 
 | Effort | Claude | Gemini | Codex | OpenCode | Copilot |
 |--------|--------|--------|-------|----------|---------|
-| low | claude-haiku-4-5 (1 turn) | gemini-2.5-flash | o4-mini (low) | default (low) | no control |
-| medium | claude-sonnet-4-6 (3 turns) | gemini-3-flash-preview | o4-mini (medium) | default (medium) | no control |
-| high | claude-opus-4-6 (5 turns) | gemini-3-pro-preview | o3 (high) | default (high) | no control |
-| max | claude-opus-4-6 (10 turns) | gemini-3.1-pro-preview | o3 (high) | default + --thinking | no control |
+| low | claude-haiku-4-5 (1 turn) | gemini-3-flash-preview | gpt-5.3-codex (low) | default (low) | no control |
+| medium | claude-sonnet-4-6 (3 turns) | gemini-3-flash-preview | gpt-5.3-codex (medium) | default (medium) | no control |
+| high | claude-opus-4-6 (5 turns) | gemini-3.1-pro-preview | gpt-5.3-codex (high) | default (high) | no control |
+| max | claude-opus-4-6 (10 turns) | gemini-3.1-pro-preview | gpt-5.3-codex (high) | default + --thinking | no control |
 
 ### Output Parsing

adapters/opencode/commands/consult.md

Lines changed: 4 additions & 4 deletions

@@ -177,8 +177,8 @@ AskUserQuestion:
     question: "Which Gemini model?"
     multiSelect: false
     options:
-      - label: "gemini-3-pro" description: "Most capable, strong reasoning"
-      - label: "gemini-3-flash" description: "Fast, 78% SWE-bench"
+      - label: "gemini-3.1-pro-preview" description: "Most capable, strong reasoning"
+      - label: "gemini-3-flash-preview" description: "Fast, efficient coding"
       - label: "gemini-2.5-pro" description: "Previous gen pro model"
       - label: "gemini-2.5-flash" description: "Previous gen flash model"
 ```

@@ -222,7 +222,7 @@ AskUserQuestion:
   - label: "claude-sonnet-4-5" description: "Default Copilot model"
   - label: "claude-opus-4-6" description: "Most capable Claude model"
   - label: "gpt-5.3-codex" description: "OpenAI GPT-5.3 Codex"
-  - label: "gemini-3-pro" description: "Google Gemini 3 Pro"
+  - label: "gemini-3.1-pro-preview" description: "Google Gemini 3.1 Pro"
 ```
 
 Map the user's choice to the model string (strip " (Recommended)" suffix if present).

@@ -241,7 +241,7 @@ Invoke the `consult` skill directly using the Skill tool:
 Skill: consult
 Args: "[question]" --tool=[tool] --effort=[effort] --model=[model] [--context=[context]] [--continue=[session_id]]
 
-Example: "Is this the right approach?" --tool=gemini --effort=high --model=gemini-3-pro
+Example: "Is this the right approach?" --tool=gemini --effort=high --model=gemini-3.1-pro-preview
 ```
 
 The skill handles the full consultation lifecycle: model resolution, command building, context packaging, execution with 120s timeout, and returns a plain JSON result.

adapters/opencode/commands/debate.md

Lines changed: 4 additions & 4 deletions

@@ -293,10 +293,10 @@ Read the consult skill file to get the exact patterns and replacements.
 
 | Effort | Claude | Gemini | Codex | OpenCode | Copilot |
 |--------|--------|--------|-------|----------|---------|
-| low | claude-haiku-4-5 (1 turn) | gemini-2.5-flash | o4-mini (low) | default (low) | no control |
-| medium | claude-sonnet-4-6 (3 turns) | gemini-3-flash-preview | o4-mini (medium) | default (medium) | no control |
-| high | claude-opus-4-6 (5 turns) | gemini-3-pro-preview | o3 (high) | default (high) | no control |
-| max | claude-opus-4-6 (10 turns) | gemini-3.1-pro-preview | o3 (high) | default + --thinking | no control |
+| low | claude-haiku-4-5 (1 turn) | gemini-3-flash-preview | gpt-5.3-codex (low) | default (low) | no control |
+| medium | claude-sonnet-4-6 (3 turns) | gemini-3-flash-preview | gpt-5.3-codex (medium) | default (medium) | no control |
+| high | claude-opus-4-6 (5 turns) | gemini-3.1-pro-preview | gpt-5.3-codex (high) | default (high) | no control |
+| max | claude-opus-4-6 (10 turns) | gemini-3.1-pro-preview | gpt-5.3-codex (high) | default + --thinking | no control |
 
 ### Output Parsing

adapters/opencode/skills/consult/SKILL.md

Lines changed: 10 additions & 10 deletions

@@ -70,9 +70,9 @@ Models: gemini-2.5-flash, gemini-2.5-pro, gemini-3-flash-preview, gemini-3-pro-p
 
 | Effort | Model |
 |--------|-------|
-| low | gemini-2.5-flash |
+| low | gemini-3-flash-preview |
 | medium | gemini-3-flash-preview |
-| high | gemini-3-pro-preview |
+| high | gemini-3.1-pro-preview |
 | max | gemini-3.1-pro-preview |
 
 **Parse output**: `JSON.parse(stdout).response`

@@ -89,14 +89,14 @@ Session resume (latest): codex exec resume --last "QUESTION" --json
 
 Note: `codex exec` is the non-interactive/headless mode. There is no `-q` flag. The TUI mode is `codex` (no subcommand).
 
-Models: o4-mini, o3
+Models: gpt-5.3-codex
 
 | Effort | Model | Reasoning |
 |--------|-------|-----------|
-| low | o4-mini | low |
-| medium | o4-mini | medium |
-| high | o3 | high |
-| max | o3 | high |
+| low | gpt-5.3-codex | low |
+| medium | gpt-5.3-codex | medium |
+| high | gpt-5.3-codex | high |
+| max | gpt-5.3-codex | high |
 
 **Parse output**: `JSON.parse(stdout).message` or raw text
 **Session ID**: Codex prints a resume hint at session end (e.g., `codex resume SESSION_ID`). Extract the session ID from stdout or from `JSON.parse(stdout).session_id` if available.

@@ -110,7 +110,7 @@ Session resume: opencode run "QUESTION" --format json --model "MODEL" --variant
 With thinking: add --thinking flag
 ```
 
-Models: 75+ via providers (format: provider/model). Top picks: claude-sonnet-4-6, claude-opus-4-6, gpt-5.2, o3, gemini-3-pro-preview, minimax-m2.1
+Models: 75+ via providers (format: provider/model). Top picks: claude-sonnet-4-6, claude-opus-4-6, gpt-5.3-codex, gemini-3.1-pro-preview, minimax-m2.1
 
 | Effort | Model | Variant |

@@ -277,7 +277,7 @@ Return a plain JSON object to stdout (no markers or wrappers):
 ```json
 {
   "tool": "gemini",
-  "model": "gemini-3-pro-preview",
+  "model": "gemini-3.1-pro-preview",
   "effort": "high",
   "duration_ms": 12300,
   "response": "The AI's response text here...",

@@ -315,4 +315,4 @@ This skill is invoked by:
 - `consult-agent` for `/consult` command
 - Direct invocation: `Skill('consult', '"question" --tool=gemini --effort=high')`
 
-Example: `Skill('consult', '"Is this approach correct?" --tool=gemini --effort=high --model=gemini-3-pro-preview')`
+Example: `Skill('consult', '"Is this approach correct?" --tool=gemini --effort=high --model=gemini-3.1-pro-preview')`
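The Gemini output-parsing expression in the skill above, `JSON.parse(stdout).response`, can be exercised against a stub payload shaped like the skill's JSON result (the field values here are the illustrative ones from the diff, not real tool output):

```javascript
// Stub stdout payload matching the skill's documented result shape.
const stdout = JSON.stringify({
  tool: 'gemini',
  model: 'gemini-3.1-pro-preview',
  effort: 'high',
  duration_ms: 12300,
  response: "The AI's response text here...",
});

// The parsing step the skill documents: JSON.parse(stdout).response
const result = JSON.parse(stdout);
console.log(result.response); // "The AI's response text here..."
console.log(result.model);    // "gemini-3.1-pro-preview"
```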

adapters/opencode/skills/debate/SKILL.md

Lines changed: 5 additions & 5 deletions

@@ -222,7 +222,7 @@ Save to `{AI_STATE_DIR}/debate/last-debate.json`:
   "id": "debate-{ISO timestamp}-{4 char random hex}",
   "topic": "original topic text",
   "proposer": {"tool": "claude", "model": "opus"},
-  "challenger": {"tool": "gemini", "model": "gemini-3-pro"},
+  "challenger": {"tool": "gemini", "model": "gemini-3.1-pro-preview"},
   "effort": "high",
   "rounds_completed": 2,
   "max_rounds": 2,

@@ -277,10 +277,10 @@ Platform state directory:
 
 | Effort | Claude | Gemini | Codex | OpenCode | Copilot |
 |--------|--------|--------|-------|----------|---------|
-| low | claude-haiku-4-5 (1 turn) | gemini-2.5-flash | o4-mini (low) | default (low) | no control |
-| medium | claude-sonnet-4-6 (3 turns) | gemini-3-flash-preview | o4-mini (medium) | default (medium) | no control |
-| high | claude-opus-4-6 (5 turns) | gemini-3-pro-preview | o3 (high) | default (high) | no control |
-| max | claude-opus-4-6 (10 turns) | gemini-3.1-pro-preview | o3 (high) | default + --thinking | no control |
+| low | claude-haiku-4-5 (1 turn) | gemini-3-flash-preview | gpt-5.3-codex (low) | default (low) | no control |
+| medium | claude-sonnet-4-6 (3 turns) | gemini-3-flash-preview | gpt-5.3-codex (medium) | default (medium) | no control |
+| high | claude-opus-4-6 (5 turns) | gemini-3.1-pro-preview | gpt-5.3-codex (high) | default (high) | no control |
+| max | claude-opus-4-6 (10 turns) | gemini-3.1-pro-preview | gpt-5.3-codex (high) | default + --thinking | no control |
 
 ### Output Parsing

docs/consult-command-test-strategy.md

Lines changed: 12 additions & 12 deletions

@@ -171,10 +171,10 @@ describe('Model Selection', () => {
 
   describe('Gemini models', () => {
     it('should map effort levels correctly', () => {
-      expect(getGeminiModel('low')).toBe('gemini-2.5-flash');
-      expect(getGeminiModel('medium')).toBe('gemini-3-flash');
-      expect(getGeminiModel('high')).toBe('gemini-3-pro');
-      expect(getGeminiModel('max')).toBe('gemini-3-pro');
+      expect(getGeminiModel('low')).toBe('gemini-3-flash-preview');
+      expect(getGeminiModel('medium')).toBe('gemini-3-flash-preview');
+      expect(getGeminiModel('high')).toBe('gemini-3.1-pro-preview');
+      expect(getGeminiModel('max')).toBe('gemini-3.1-pro-preview');
     });
   });
 

@@ -244,7 +244,7 @@ describe('Session Management', () => {
   it('should include question in saved session', () => {
     const session = {
       tool: 'gemini',
-      model: 'gemini-3-pro',
+      model: 'gemini-3.1-pro-preview',
       effort: 'medium',
       session_id: 'xyz-789',
       timestamp: new Date().toISOString(),

@@ -458,7 +458,7 @@ describe('Session Continuation', () => {
   it('should restore tool from saved session', () => {
     const session = {
       tool: 'gemini',
-      model: 'gemini-3-pro',
+      model: 'gemini-3.1-pro-preview',
       effort: 'medium',
       session_id: 'session-456',
       timestamp: new Date().toISOString(),

@@ -672,18 +672,18 @@ describe('Command Building', () => {
 
   describe('Gemini Command', () => {
     it('should build basic command', () => {
-      const { command, flags } = buildGeminiCommand('question', 'gemini-3-pro');
+      const { command, flags } = buildGeminiCommand('question', 'gemini-3.1-pro-preview');
       expect(command).toBe('gemini');
       expect(flags).toContain('-p');
       expect(flags).toContain('"question"');
       expect(flags).toContain('--output-format');
       expect(flags).toContain('json');
       expect(flags).toContain('-m');
-      expect(flags).toContain('gemini-3-pro');
+      expect(flags).toContain('gemini-3.1-pro-preview');
     });
 
     it('should append session resume for continuation', () => {
-      const { flags } = buildGeminiCommand('question', 'gemini-3-pro', 'session-456', true);
+      const { flags } = buildGeminiCommand('question', 'gemini-3.1-pro-preview', 'session-456', true);
       expect(flags).toContain('--resume');
       expect(flags).toContain('session-456');
     });

@@ -939,7 +939,7 @@ describe('Full Consultation Flow', () => {
     jest.spyOn(fs, 'readFileSync').mockReturnValueOnce(JSON.stringify({
       tool: 'gemini',
       session_id: 'session-456',
-      model: 'gemini-3-pro',
+      model: 'gemini-3.1-pro-preview',
       effort: 'medium',
       timestamp: new Date().toISOString(),
       question: 'continue',

@@ -1139,7 +1139,7 @@ describe('Mocked Tool Outputs', () => {
   const mockGeminiOutput = `=== CONSULT_RESULT ===
 {
   "tool": "gemini",
-  "model": "gemini-3-pro",
+  "model": "gemini-3.1-pro-preview",
   "effort": "medium",
   "duration_ms": 23400,
   "response": "Based on my analysis, the approach seems sound but could benefit from error handling for edge cases.",

@@ -1175,7 +1175,7 @@ describe('Mocked Tool Outputs', () => {
   it('should parse structured output correctly', () => {
     const result = parseMockOutput(mockGeminiOutput, 'gemini');
     expect(result.tool).toBe('gemini');
-    expect(result.model).toBe('gemini-3-pro');
+    expect(result.model).toBe('gemini-3.1-pro-preview');
     expect(result.duration_ms).toBe(23400);
     expect(result.session_id).toBe('session-xyz-789');
   });
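The `buildGeminiCommand` assertions above imply a flag-builder along these lines. This is a hypothetical sketch reverse-engineered from the test expectations, not the repo's actual implementation — the real function lives with the consult skill:

```javascript
// Builds: gemini -p "QUESTION" --output-format json -m MODEL [--resume SESSION_ID]
// Signature mirrors the test-strategy doc: (question, model, sessionId, resume).
function buildGeminiCommand(question, model, sessionId, resume) {
  const flags = ['-p', `"${question}"`, '--output-format', 'json', '-m', model];
  if (resume && sessionId) {
    flags.push('--resume', sessionId);
  }
  return { command: 'gemini', flags };
}

const { command, flags } = buildGeminiCommand('question', 'gemini-3.1-pro-preview');
console.log(command);                               // 'gemini'
console.log(flags.includes('gemini-3.1-pro-preview')); // true
```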
