Commit db65ddf: Fix typo on code-review cookbook and adjust AGENTS.md (#2223)

justonfshikhar-cyber authored and committed (1 parent: 024578e)

2 files changed: +34, -10 lines
AGENTS.md

Lines changed: 15 additions & 0 deletions
# Repository Guidelines

## Project Structure & Module Organization

The cookbook is organized around runnable examples and reference articles for OpenAI APIs. Place notebooks and Python scripts under `examples/<topic>/`, grouping related assets inside topic subfolders (for example, `examples/agents_sdk/`). Narrative guides and long-form docs live in `articles/`, and shared diagrams or screenshots belong in `images/`. Update `registry.yaml` whenever you add content so it appears on cookbook.openai.com, and add new author metadata in `authors.yaml` if you want custom attribution. Keep large datasets outside the repo; instead, document how to fetch them in the notebook.

## Build, Test, and Development Commands

Use a virtual environment to isolate dependencies:
- `python -m venv .venv && source .venv/bin/activate`
- `pip install -r examples/<topic>/requirements.txt` (each sample lists only what it needs)
- `jupyter lab` or `jupyter notebook` to develop interactively
- `python .github/scripts/check_notebooks.py` to validate notebook structure before pushing

## Coding Style & Naming Conventions

Write Python to PEP 8 with four-space indentation, descriptive variable names, and concise docstrings that explain API usage choices. Name new notebooks with lowercase, dash-or-underscore-separated phrases that match their directory—for example `examples/gpt-5/prompt-optimization-cookbook.ipynb`. Keep markdown cells focused and prefer numbered steps for multi-part workflows. Store secrets in environment variables such as `OPENAI_API_KEY`; never hard-code keys inside notebooks.

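For instance, a notebook's setup cell can read the key from the environment and fail fast with a clear message when it is missing (a minimal sketch; the helper name is illustrative, only the `OPENAI_API_KEY` variable comes from these guidelines):

```python
import os

def get_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read the API key from the environment instead of hard-coding it."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"Set the {var} environment variable before running this notebook."
        )
    return key
```

A cell that calls `get_api_key()` at the top of the notebook surfaces a missing key immediately, instead of failing later inside an API call.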
## Testing Guidelines

Execute notebooks top-to-bottom after installing dependencies and clear lingering execution counts before committing. For Python modules or utilities, include self-check cells or lightweight `pytest` snippets and show how to run them (for example, `pytest examples/object_oriented_agentic_approach/tests`). When contributions depend on external services, mock responses or gate the cells behind clearly labeled opt-in flags.

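A lightweight self-check can be a plain function plus a `test_`-prefixed function that `pytest` discovers automatically; a hypothetical sketch (the utility and its behavior are invented for illustration):

```python
def normalize_topic(name: str) -> str:
    """Normalize a topic folder name to lowercase with underscores."""
    return name.strip().lower().replace("-", "_").replace(" ", "_")

def test_normalize_topic():
    # pytest collects any test_* function and runs its assertions.
    assert normalize_topic("Agents SDK") == "agents_sdk"
    assert normalize_topic("gpt-5") == "gpt_5"
```

Placed in a `tests/` folder next to the example, this runs with a single `pytest examples/<topic>/tests` invocation and needs no external services.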
## Commit & Pull Request Guidelines

Use concise, imperative commit messages that describe the change scope (e.g., "Add agent portfolio collaboration demo"). Every PR should provide a summary, motivation, and self-review, and must tick the registry and authors checklist from `.github/pull_request_template.md`. Link issues when applicable and attach screenshots or output snippets for UI-heavy content. Confirm CI notebook validation passes locally before requesting review.

## Metadata & Publication Workflow

New or relocated content must have an entry in `registry.yaml` with an accurate path, date, and tag set so the static site generator includes it. When collaborating, coordinate author slugs in `authors.yaml` to avoid duplicates, and run `yamllint registry.yaml` (or your preferred YAML linter) to catch syntax errors before submitting.

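Beyond syntax linting, a small script can check that each loaded registry entry carries the fields the guidelines call out. A hedged sketch: the exact field names in `registry.yaml` are an assumption here, and loading the file itself would use a YAML parser such as PyYAML (not shown):

```python
# Assumed required fields per registry entry; adjust to the real schema.
REQUIRED_FIELDS = ("title", "path", "date", "tags")

def missing_fields(entry: dict) -> list:
    """Return the required registry fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not entry.get(f)]

# Example entries as they would look after yaml.safe_load(registry_file).
entries = [
    {"title": "Demo", "path": "examples/demo/demo.ipynb",
     "date": "2024-01-01", "tags": ["demo"]},
    {"title": "No path", "date": "2024-01-01", "tags": []},
]
problems = {e.get("title"): missing_fields(e) for e in entries}
```

Reporting `problems` for any non-empty value catches incomplete entries before the static site generator silently drops them.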
## Review Guidelines

- Verify file, function, and notebook names follow the repo's naming conventions and clearly describe their purpose.
- Scan prose and markdown for typos, broken links, and inconsistent formatting before approving.
- Check that code identifiers remain descriptive (no leftover placeholder names) and that repeated values are factored into constants when practical.
- Ensure notebooks or scripts document any required environment variables instead of hard-coding secrets or keys.
- Confirm metadata files (`registry.yaml`, `authors.yaml`) stay in sync with new or relocated content.

examples/codex/build_code_review_with_codex_sdk.md

Lines changed: 19 additions & 10 deletions
# Build Code Review with the Codex SDK

With [Code Review](https://chatgpt.com/codex/settings/code-review) in Codex Cloud, you can connect your team's cloud-hosted GitHub repository to Codex and receive automated code reviews on every PR. But what if your code is hosted on-prem, or you don't have GitHub as an SCM?

Luckily, we can replicate Codex's cloud-hosted review process in our own CI/CD runners. In this guide, we'll build our own Code Review action using the Codex CLI headless mode with both GitHub Actions and Jenkins.

To build our own Code Review, we'll take the following steps:

1. Install the Codex CLI in our CI/CD runner
2. Prompt Codex in headless (exec) mode with the Code Review prompt that ships with the CLI
3. Specify a structured output JSON schema for Codex
4. Parse the JSON result and use it to make API calls to our SCM to create review comments

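The final step can be sketched in Python. Everything below is illustrative: the finding fields mirror a hypothetical output schema, and the review-comment payload shape depends on your SCM's API.

```python
import json

def build_review_comments(codex_stdout: str) -> list:
    """Turn Codex's structured JSON output into SCM review-comment payloads.

    The field names (findings, file, line, body) are assumptions;
    adjust them to match your own codex-output-schema.json.
    """
    result = json.loads(codex_stdout)
    return [
        {"path": f["file"], "line": f["line"], "body": f["body"]}
        for f in result.get("findings", [])
    ]

# Each payload would then be POSTed to your SCM's review-comment
# endpoint, e.g. with urllib.request or your SCM's client library.
```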
Once implemented, Codex will be able to leave inline code review comments:

<img src="../../images/codex_code_review.png" alt="Codex Code Review in GitHub" width="500"/>

## The Code Review Prompt

GPT-5-Codex has received specific training to improve its code review abilities. You can steer GPT-5-Codex to conduct a code review with the following prompt:

```
You are acting as a reviewer for a proposed code change made by another engineer.
[…]
Prioritize severe issues and avoid nit-level comments unless they block understanding.
After listing findings, produce an overall correctness verdict ("patch is correct" or "patch is incorrect") with a concise justification and a confidence score between 0 and 1.
Ensure that file citations and line numbers are exactly correct using the tools available; if they are incorrect your comments will be rejected.
```

## Codex Structured Outputs

In order to make comments on code ranges in our pull request, we need to receive Codex's response in a specific format. To do that we can create a file called `codex-output-schema.json` that conforms to OpenAI's [structured outputs](https://platform.openai.com/docs/guides/structured-outputs) format.

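As a starting point, a script like the one below can generate such a file. The exact field set is an assumption, not the schema this guide ships; adapt it to your workflow. It does follow the structured-outputs rules of typed objects with `required` lists and `additionalProperties: false`:

```python
import json

# Hypothetical review schema; adjust the fields to your needs.
SCHEMA = {
    "type": "object",
    "properties": {
        "findings": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "file": {"type": "string"},
                    "line": {"type": "integer"},
                    "body": {"type": "string"},
                },
                "required": ["file", "line", "body"],
                "additionalProperties": False,
            },
        },
        "verdict": {
            "type": "string",
            "enum": ["patch is correct", "patch is incorrect"],
        },
        "confidence": {"type": "number"},
    },
    "required": ["findings", "verdict", "confidence"],
    "additionalProperties": False,
}

with open("codex-output-schema.json", "w") as f:
    json.dump(SCHEMA, f, indent=2)
```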
To use this file in our workflow YAML, we can call Codex with the `output-schema-file` argument. You can also pass a similar argument to `codex exec`, for example:

```
codex exec "Review my pull request!" --output-schema codex-output-schema.json
```

## GitHub Actions Example

Let's put it all together. If you're using GitHub Actions in an on-prem environment, you can tailor this example to your specific workflow. Inline comments highlight the key steps.

```yaml
name: Codex Code Review
# […]
```

## Jenkins Example

We can use the same approach to scripting a job with Jenkins. Once again, comments highlight key stages of the workflow:

```groovy
pipeline {
    // […]
}
```

# Wrap Up

With the Codex SDK, you can build your own GitHub Code Review in on-prem environments. However, the pattern of triggering Codex with a prompt, receiving a structured output, and then acting on that output with an API call extends far beyond Code Review. For example, we could use this pattern to trigger a root-cause analysis when an incident is created and post a structured report into a Slack channel. Or we could create a code quality report on each PR and post results into a dashboard.
