Commit 42cd08f

feat(docs): Add FAQ for OpenAI o1 best practices
- Provide guidance on context management, model selection, prompting strategy, response time, and quality for using OpenAI o1 models
- Document known technical limitations of o1 models
1 parent 33d4068 commit 42cd08f

1 file changed: +44, -0

docs/cody/faq.mdx

@@ -137,3 +137,47 @@ cody chat --model '$name_of_the_model' -m 'Hi Cody!'
```

For example, to use Claude 3.5 Sonnet, you'd run the following command in your terminal: `cody chat --model 'claude-3.5-sonnet' -m 'Hi Cody!'`

## OpenAI o1

### What are OpenAI o1 best practices?

#### Context Management

- Provide focused, relevant context
- Use file references strategically
- Keep initial requests concise

#### Model Selection

- o1-preview: Best for complex reasoning and planning
- o1-mini: More reliable for straightforward tasks
- Sonnet 3.5: Better for tasks requiring longer outputs
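
The selection guidance above can be sketched as a small shell helper that maps a task type to a model id. The `pick_model` function and the exact ids (`o1-preview`, `o1-mini`, `claude-3.5-sonnet`) are illustrative assumptions; verify the model ids actually exposed by your Sourcegraph instance.

```shell
# Hypothetical helper mapping a task type to a model id, following the
# guidance above. Model ids are assumptions; check your instance's model list.
pick_model() {
  case "$1" in
    reasoning|planning) echo 'o1-preview' ;;        # complex reasoning and planning
    simple)             echo 'o1-mini' ;;           # straightforward tasks
    longform)           echo 'claude-3.5-sonnet' ;; # longer outputs
    *)                  echo 'o1-mini' ;;           # conservative default
  esac
}

# Print the command rather than running it, so the sketch stands alone.
echo "cody chat --model '$(pick_model planning)' -m 'Draft a migration plan'"
```

You could substitute the helper directly into an invocation, e.g. `cody chat --model "$(pick_model simple)" -m 'Hi Cody!'`.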

#### Prompting Strategy

- Be specific and direct
- Include complete requirements upfront
- Request brief responses when possible
- Use a step-by-step approach for complex tasks

#### Response Time

- Start with smaller contexts
- Use o1-mini for faster responses
- Consider breaking complex tasks into stages

#### Quality

- Provide clear acceptance criteria
- Include relevant context but avoid excess
- Use specific examples when possible

### What are the known limitations?

#### Technical Constraints

- 45k input token limit
- 4k output token limit
- No streaming responses for the o1 series
- Limited context window
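
A rough pre-flight check against the 45k input token limit can be sketched in shell. The ~4 characters-per-token ratio is a heuristic assumption, not the model's actual tokenizer, so treat the result as an estimate only.

```shell
# Estimate whether a context file fits under the 45k input token limit,
# using a rough ~4 chars/token heuristic (an assumption, not the real tokenizer).
printf 'Explain the retry logic in this module.' > /tmp/ctx.txt  # sample context

chars=$(wc -c < /tmp/ctx.txt)
est_tokens=$(( chars / 4 ))

if [ "$est_tokens" -gt 45000 ]; then
  echo "over limit: ~$est_tokens tokens"
else
  echo "fits: ~$est_tokens tokens of 45000"
fi
```

If the estimate is near the limit, trim the context or split the request into stages, as suggested above.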
