
Commit c84aefa

feat(ai-gateway): Add a how-to for using Codex CLI with AI Gateway (#3395)
1 parent d52c685 commit c84aefa

File tree

3 files changed (+291 −0 lines changed)
Lines changed: 7 additions & 0 deletions

    name: codex-route
    paths:
      - /codex
    service:
      name: codex-service
Lines changed: 2 additions & 0 deletions

    name: codex-service
    url: http://localhost
Lines changed: 282 additions & 0 deletions

---
title: Route OpenAI Codex CLI traffic through Kong AI Gateway
content_type: how_to
related_resources:
  - text: AI Gateway
    url: /ai-gateway/
  - text: AI Proxy Advanced
    url: /plugins/ai-proxy-advanced/
  - text: Request Transformer
    url: /plugins/request-transformer/
  - text: File Log
    url: /plugins/file-log/

description: Configure AI Gateway to proxy OpenAI Codex CLI traffic using AI Proxy Advanced.

products:
  - gateway
  - ai-gateway

works_on:
  - on-prem
  - konnect

min_version:
  gateway: '3.6'

plugins:
  - ai-proxy-advanced
  - request-transformer
  - file-log

entities:
  - service
  - route
  - plugin

tags:
  - ai
  - openai

tldr:
  q: How do I run OpenAI Codex CLI through Kong AI Gateway?
  a: Create a Gateway Service and Route, attach AI Proxy Advanced to forward requests to OpenAI, add a Request Transformer plugin to normalize upstream paths, enable file-log to inspect traffic, and point Codex CLI to the local proxy endpoint so all LLM requests go through the Gateway for monitoring and control.

tools:
  - deck

prereqs:
  inline:
    - title: OpenAI
      include_content: prereqs/openai
      icon_url: /assets/icons/openai.svg
    - title: Codex CLI
      icon_url: /assets/icons/openai.svg
      content: |
        This tutorial uses the OpenAI Codex CLI. Install Node.js 18+ if needed (verify with `node --version`), then install and launch Codex:

        1. Run the following command in your terminal to install the Codex CLI:

           ```sh
           npm install -g @openai/codex
           ```

        2. Once the installation process is complete, run the following command:

           ```sh
           codex
           ```

        3. The CLI will prompt you to authenticate in your browser using your OpenAI account.

        4. Once authenticated, close the Codex CLI session by hitting <kbd>ctrl</kbd> + <kbd>c</kbd> on macOS or <kbd>ctrl</kbd> + <kbd>break</kbd> on Windows.
  entities:
    services:
      - codex-service
    routes:
      - codex-route

cleanup:
  inline:
    - title: Clean up Konnect environment
      include_content: cleanup/platform/konnect
      icon_url: /assets/icons/gateway.svg
    - title: Destroy the {{site.base_gateway}} container
      include_content: cleanup/products/gateway
      icon_url: /assets/icons/gateway.svg
---

## Configure the AI Proxy Advanced plugin

First, let's configure the AI Proxy Advanced plugin. In this setup, we use the `llm/v1/responses` route type because the Codex CLI calls the OpenAI Responses API by default. We don't hard-code a model in the plugin, since Codex sends the model with each request. We also raise the maximum request body size to 128 KB to support larger prompts.

{% entity_examples %}
entities:
  plugins:
    - name: ai-proxy-advanced
      service: codex-service
      config:
        genai_category: text/generation
        llm_format: openai
        max_request_body_size: 131072
        model_name_header: true
        response_streaming: allow
        balancer:
          algorithm: "round-robin"
          tokens_count_strategy: "total-tokens"
          latency_strategy: "tpot"
          retries: 3
        targets:
          - route_type: llm/v1/responses
            auth:
              header_name: Authorization
              header_value: Bearer ${openai_api_key}
            logging:
              log_payloads: false
              log_statistics: true
            model:
              provider: "openai"

variables:
  openai_api_key:
    value: $OPENAI_API_KEY
{% endentity_examples %}

## Configure the Request Transformer plugin

To ensure that the Gateway forwards clean, predictable requests to OpenAI, we configure a [Request Transformer](/plugins/request-transformer/) plugin. It normalizes the upstream URI and removes any extra path segments, so only the expected route reaches the OpenAI endpoint. This small guardrail avoids malformed paths and keeps the proxy behavior consistent.

{% entity_examples %}
entities:
  plugins:
    - name: request-transformer
      service: codex-service
      config:
        replace:
          uri: "/"
{% endentity_examples %}
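
For instance, Codex appends `/responses` to `OPENAI_BASE_URL`, so the Gateway actually receives paths like `/codex/responses`. With the transformer in place, such a request should behave the same as one sent to `/codex` directly. A minimal sketch with `curl`, assuming the on-prem quickstart proxy listens on `localhost:8000`:

```sh
# Codex-style request with an extra /responses path segment. The
# Request Transformer replaces the upstream URI, so AI Proxy Advanced
# still builds the expected /v1/responses call to OpenAI.
curl -s -X POST http://localhost:8000/codex/responses \
  -H 'Content-Type: application/json' \
  -d '{"model": "gpt-4o", "input": "Ping"}'
```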
137+
138+
139+
Now, we can pre-validate our current configuration:
140+
141+
142+
{% validation request-check %}
143+
url: /codex
144+
status_code: 200
145+
method: POST
146+
headers:
147+
- 'Content-Type: application/json'
148+
body:
149+
model: gpt-4o
150+
input:
151+
- role: "user"
152+
content: "Ping"
153+
{% endvalidation %}
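
If you prefer to run the same check manually, here is a minimal `curl` sketch of the validation request above, assuming the on-prem quickstart proxy listens on `localhost:8000` (on Konnect, substitute `$KONNECT_PROXY_URL`):

```sh
# Send the same "Ping" request the validation uses; a 200 response with
# a Responses API payload confirms the route and plugins are wired up.
curl -s -X POST http://localhost:8000/codex \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-4o",
    "input": [
      { "role": "user", "content": "Ping" }
    ]
  }'
```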

## Export environment variables

Now, let's open a new terminal window and export the variables that the Codex CLI will use. We set a dummy API key here just so the variable exists for the CLI; the real OpenAI credentials are injected by the AI Proxy Advanced plugin. We also point `OPENAI_BASE_URL` to the local proxy endpoint so that Codex routes its LLM traffic through the Gateway:

```sh
export OPENAI_API_KEY=sk-xxx
export OPENAI_BASE_URL=http://localhost:8000/codex
```
{: data-deployment-topology="on-prem" }

```sh
export OPENAI_API_KEY=sk-xxx
export OPENAI_BASE_URL=$KONNECT_PROXY_URL/codex
```
{: data-deployment-topology="konnect" }
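
Before launching Codex, you can optionally confirm that both variables are visible in the current shell:

```sh
# Both values must print; Codex reads them at startup.
echo "$OPENAI_API_KEY"
echo "$OPENAI_BASE_URL"
```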

## Configure the File Log plugin

Finally, to see exactly what travels between Codex and the AI Gateway, let's attach a File Log plugin to the service. This gives us a local log file so we can inspect each request and its AI statistics as Codex runs through Kong.

{% entity_examples %}
entities:
  plugins:
    - name: file-log
      service: codex-service
      config:
        path: "/tmp/file.json"
{% endentity_examples %}
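
To watch entries arrive while Codex runs, you can follow the log file inside the Gateway container. A small sketch, assuming the `kong-quickstart-gateway` container name used later in this guide:

```sh
# file-log writes one JSON object per request; follow new entries live.
docker exec kong-quickstart-gateway tail -f /tmp/file.json
```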

## Start and use Codex CLI

Let's test our Codex CLI setup now:

1. In the terminal where you exported your environment variables, run:

   ```sh
   codex
   ```

   You should see:

   ```text
   ╭───────────────────────────────────────────╮
   │ >_ OpenAI Codex (v0.55.0)                 │
   │                                           │
   │ model: gpt-5-codex     /model to change   │
   │ directory: ~                              │
   ╰───────────────────────────────────────────╯

   To get started, describe a task or try one of these commands:

   /init - create an AGENTS.md file with instructions for Codex
   /status - show current session configuration
   /approvals - choose what Codex can do without approval
   /model - choose what model and reasoning effort to use
   /review - review any changes and find issues
   ```
   {.no-copy-code}

1. Run a simple command to call Codex using the gpt-4o model:

   ```sh
   codex exec --model gpt-4o "Hello"
   ```

   Codex will prompt:

   ```text
   Would you like to run the following command?

   Reason: Need temporary network access so codex exec can reach the OpenAI API

   $ codex exec --model gpt-4o "Hello"

   › 1. Yes, proceed
     2. Yes, and don't ask again for this command
     3. No, and tell Codex what to do differently
   ```
   {.no-copy-code}

   Select **Yes, proceed** and press <kbd>Enter</kbd>.

   Expected output:

   ```text
   • Ran codex exec --model gpt-4o "Hello"
     └ OpenAI Codex v0.55.0 (research preview)
       --------
       … +12 lines
       6.468
       Hi there! How can I assist you today?

   ─ Worked for 9s ────────────────────────────────────────────────────────────────

   • codex exec --model gpt-4o "Hello" returned: “Hi there! How can I assist you today?”
   ```
   {.no-copy-code}

1. Check that LLM traffic went through Kong AI Gateway:

   ```sh
   docker exec kong-quickstart-gateway cat /tmp/file.json | jq
   ```

   Look for entries similar to:

   ```json
   {
     ...
     "ai": {
       "proxy": {
         "tried_targets": [
           {
             "ip": "0000.000.000.000",
             "route_type": "llm/v1/responses",
             "port": 443,
             "upstream_scheme": "https",
             "host": "api.openai.com",
             "upstream_uri": "/v1/responses",
             "provider": "openai"
           }
         ]
       }
     }
     ...
   }
   ```
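
To narrow the output to just the routing details, you can filter the same log with `jq`, using the entry structure shown above:

```sh
# Print only the upstream targets that AI Proxy Advanced tried for
# each logged request.
docker exec kong-quickstart-gateway cat /tmp/file.json | jq '.ai.proxy.tried_targets'
```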
