Commit 28b3b80

Authored by kukicado, jdorfman, and MaedahBatool
Hackathon - FAQs (#786)
1 parent b4e1c8b

1 file changed: +44 −0 lines
docs/cody/faq.mdx

@@ -136,3 +136,47 @@ cody chat --model '$name_of_the_model' -m 'Hi Cody!'
```

For example, to use Claude 3.5 Sonnet, you'd run the following command in your terminal: `cody chat --model 'claude-3.5-sonnet' -m 'Hi Cody!'`

## OpenAI o1

### What are OpenAI o1 best practices?

#### Context Management

- Provide focused, relevant context
- Use file references strategically
- Keep initial requests concise
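The context-management advice above can be sketched as a small client-side helper. This is a hypothetical illustration, not part of Cody: `select_context` and its relevance scores are invented names, and the character budget is an arbitrary stand-in for a real token budget.

```python
# Hypothetical helper (not a Cody API): keep the most relevant context
# snippets that fit within a size budget, so the initial request stays concise.

def select_context(snippets, budget_chars=8000):
    """snippets: list of (relevance_score, text). Returns texts that fit."""
    chosen, used = [], 0
    # Consider the most relevant snippets first.
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        if used + len(text) <= budget_chars:
            chosen.append(text)
            used += len(text)
    return chosen

snippets = [
    (0.9, "def login(user): ..."),
    (0.2, "x" * 9000),           # large, low-relevance file: dropped
    (0.7, "class Session: ..."),
]
print(select_context(snippets))
```

The point of the sketch is simply that focused context beats volume: the large but low-relevance snippet is dropped rather than squeezed in.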
#### Model Selection

- **o1-preview**: Best for complex reasoning and planning
- **o1-mini**: More reliable for straightforward tasks
- **Sonnet 3.5**: Better for tasks requiring longer outputs
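The model-selection guidance above amounts to a small routing table. The sketch below is hypothetical (the task-type keys are invented), though the model IDs are the ones you would pass to `cody chat --model`:

```python
# Hypothetical routing table mirroring the guidance above; not an official
# Cody API. Values are model IDs as used with `cody chat --model`.

MODEL_FOR_TASK = {
    "complex_reasoning": "o1-preview",   # complex reasoning and planning
    "straightforward": "o1-mini",        # simple, reliable tasks
    "long_output": "claude-3.5-sonnet",  # tasks needing longer outputs
}

def pick_model(task_type: str) -> str:
    # Fall back to the faster model when the task type is unknown.
    return MODEL_FOR_TASK.get(task_type, "o1-mini")

print(pick_model("complex_reasoning"))  # o1-preview
```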
#### Prompting Strategy

- Be specific and direct
- Include complete requirements upfront
- Request brief responses when possible
- Use a step-by-step approach for complex tasks
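The prompting strategy above can be made concrete with a minimal prompt builder. Everything here is a hypothetical sketch (the function and its fields are invented for illustration); it just shows requirements stated upfront and brevity requested explicitly:

```python
# Hypothetical prompt builder: state the goal directly, list every
# requirement upfront, and ask for a brief response.

def build_prompt(goal, requirements, brief=True):
    lines = [f"Task: {goal}", "Requirements:"]
    lines += [f"- {r}" for r in requirements]
    if brief:
        lines.append("Keep the response brief.")
    return "\n".join(lines)

prompt = build_prompt(
    "Add input validation to the signup form",
    ["reject empty emails", "limit names to 50 characters"],
)
print(prompt)
```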
#### Response Time

- Start with smaller contexts
- Use o1-mini for faster responses
- Consider breaking complex tasks into stages
#### Quality

- Provide clear acceptance criteria
- Include relevant context but avoid excess
- Use specific examples when possible
### What are the known limitations?

#### Technical Constraints

- 45k input token limit
- 4k output token limit
- No streaming responses for the o1 series
- Limited context window
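The 45k-input / 4k-output limits above can be guarded against on the client side. A minimal sketch, assuming a rough ~4 characters-per-token estimate (a common heuristic, not Cody's actual tokenizer):

```python
# Rough client-side guard for the token limits listed above.
# The 4-chars-per-token estimate is an approximation, not Cody's tokenizer.

INPUT_TOKEN_LIMIT = 45_000
OUTPUT_TOKEN_LIMIT = 4_000

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # crude heuristic: ~4 characters per token

def fits_input_limit(prompt: str) -> bool:
    return estimate_tokens(prompt) <= INPUT_TOKEN_LIMIT

print(fits_input_limit("hello " * 10))   # small prompt: True
print(fits_input_limit("x" * 200_000))   # ~50k estimated tokens: False
```

When a prompt fails such a check, trimming context or splitting the task into stages (as recommended under Response Time) is the practical workaround.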
