From 33d40684655ffe0afdccc67915e595bdcbae0cb5 Mon Sep 17 00:00:00 2001
From: Ado Kukic
Date: Tue, 5 Nov 2024 11:59:41 -0800
Subject: [PATCH 1/4] Hackathon - FAQs

---
 docs/cody/faq.mdx | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/cody/faq.mdx b/docs/cody/faq.mdx
index 4fcacca34..3188ff434 100644
--- a/docs/cody/faq.mdx
+++ b/docs/cody/faq.mdx
@@ -1,3 +1,4 @@
+{/* Hackathon */}
 # Cody FAQs
 
 Find answers to the most common questions about Cody.

From 42cd08f2c35591bc2d02650af36e2c6b7b44c135 Mon Sep 17 00:00:00 2001
From: Justin Dorfman
Date: Tue, 12 Nov 2024 05:28:15 -0800
Subject: [PATCH 2/4] feat(docs): Add FAQ for OpenAI o1 best practices

- Provide guidance on context management, model selection, prompting strategy, response time, and quality for using OpenAI o1 models
- Document known technical limitations of o1 models
---
 docs/cody/faq.mdx | 44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/docs/cody/faq.mdx b/docs/cody/faq.mdx
index 3188ff434..0449c3ccf 100644
--- a/docs/cody/faq.mdx
+++ b/docs/cody/faq.mdx
@@ -137,3 +137,47 @@
 cody chat --model '$name_of_the_model' -m 'Hi Cody!'
 ```
 For example, to use Claude 3.5 Sonnet, you'd pass the following command in your terminal, `cody chat --model 'claude-3.5-sonnet' -m 'Hi Cody!'
+
+## OpenAI o1
+
+### What are OpenAI o1 best practices?
+
+#### Context Management
+
+- Provide focused, relevant context
+- Use file references strategically
+- Keep initial requests concise
+
+#### Model Selection
+
+- O1-preview: Best for complex reasoning and planning
+- O1-mini: More reliable for straightforward tasks
+- Sonnet 3.5: Better for tasks requiring longer outputs
+
+#### Prompting Strategy
+
+- Be specific and direct
+- Include complete requirements upfront
+- Request brief responses when possible
+- Use step-by-step approach for complex tasks
+
+#### Response Time
+
+- Start with smaller contexts
+- Use O1-mini for faster responses
+- Consider breaking complex tasks into stages
+
+#### Quality
+
+- Provide clear acceptance criteria
+- Include relevant context but avoid excess
+- Use specific examples when possible
+
+### What are the known limitations?
+
+#### Technical Constraints
+
+- 45k input token limit
+- 4k output token limit
+- No streaming responses for O1 series
+- Limited context window

From 217cb49445eee72f9a627d307a05eed3f43dac07 Mon Sep 17 00:00:00 2001
From: Maedah Batool
Date: Tue, 12 Nov 2024 18:43:55 -0800
Subject: [PATCH 3/4] rmv banner

---
 docs/cody/faq.mdx | 1 -
 1 file changed, 1 deletion(-)

diff --git a/docs/cody/faq.mdx b/docs/cody/faq.mdx
index 0449c3ccf..502befca9 100644
--- a/docs/cody/faq.mdx
+++ b/docs/cody/faq.mdx
@@ -1,4 +1,3 @@
-{/* Hackathon */}
 # Cody FAQs
 
 Find answers to the most common questions about Cody.

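The patches above document the Cody CLI's model flag (`cody chat --model '$name_of_the_model' -m 'Hi Cody!'`). As a minimal sketch of scripting that command, assuming only the invocation shape shown in the FAQ text and that the `cody` CLI is installed and authenticated — the helper names here are hypothetical, not part of the Cody tooling:

```python
# Build the `cody chat` invocation documented in the FAQ above.
# Running it requires the Cody CLI on PATH and an authenticated session
# (an assumption; only the command shape comes from the FAQ text).
import subprocess

def cody_chat_command(model: str, message: str) -> list[str]:
    """Return the argv for `cody chat --model <model> -m <message>`."""
    return ["cody", "chat", "--model", model, "-m", message]

def run_cody_chat(model: str, message: str) -> str:
    """Invoke the CLI and return its stdout."""
    return subprocess.run(
        cody_chat_command(model, message),
        capture_output=True, text=True, check=True,
    ).stdout

# For example, the Claude 3.5 Sonnet invocation from the FAQ:
print(cody_chat_command("claude-3.5-sonnet", "Hi Cody!"))
```

Keeping argv construction separate from execution makes the command shape easy to inspect without a live CLI.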
From 5c62977676a60d9c91e42c688bc0dc57c39f43e4 Mon Sep 17 00:00:00 2001
From: Maedah Batool
Date: Tue, 12 Nov 2024 18:46:04 -0800
Subject: [PATCH 4/4] Add improvements

---
 docs/cody/faq.mdx | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/cody/faq.mdx b/docs/cody/faq.mdx
index 502befca9..fee3d1dff 100644
--- a/docs/cody/faq.mdx
+++ b/docs/cody/faq.mdx
@@ -149,9 +149,9 @@ For example, to use Claude 3.5 Sonnet, you'd pass the following command in your
 
 #### Model Selection
 
-- O1-preview: Best for complex reasoning and planning
-- O1-mini: More reliable for straightforward tasks
-- Sonnet 3.5: Better for tasks requiring longer outputs
+- **o1-preview**: Best for complex reasoning and planning
+- **o1-mini**: More reliable for straightforward tasks
+- **Sonnet 3.5**: Better for tasks requiring longer outputs
 
 #### Prompting Strategy
 
@@ -163,7 +163,7 @@ For example, to use Claude 3.5 Sonnet, you'd pass the following command in your
 #### Response Time
 
 - Start with smaller contexts
-- Use O1-mini for faster responses
+- Use o1-mini for faster responses
 - Consider breaking complex tasks into stages
 
 #### Quality
@@ -178,5 +178,5 @@ For example, to use Claude 3.5 Sonnet, you'd pass the following command in your
 
 - 45k input token limit
 - 4k output token limit
-- No streaming responses for O1 series
+- No streaming responses for o1 series
 - Limited context window
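The o1 constraints documented in the patches above (45k input tokens, 4k output tokens) suggest a pre-flight budget check before sending a large prompt. A minimal sketch, assuming a rough 4-characters-per-token heuristic — not Cody's actual tokenizer — and hypothetical helper names:

```python
# Pre-flight check against the o1 limits stated in the FAQ above.
# The 4-chars-per-token estimate is a coarse assumption for English
# text, not the tokenizer Cody or OpenAI actually use.

O1_INPUT_TOKEN_LIMIT = 45_000
O1_OUTPUT_TOKEN_LIMIT = 4_000

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_o1_input(prompt: str, context_files: list[str]) -> bool:
    """True if the prompt plus attached file context stays under 45k tokens."""
    total = estimate_tokens(prompt) + sum(estimate_tokens(f) for f in context_files)
    return total <= O1_INPUT_TOKEN_LIMIT

# A focused prompt with modest context fits comfortably:
print(fits_o1_input("Refactor this function", ["def f(x):\n    return x\n"]))
```

This mirrors the FAQ's "start with smaller contexts" advice: trimming context files is the first lever when a request approaches the input limit.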