Add documentation for using Groq with EchoKit #105
Conversation
This document outlines the transition to using Groq as the LLM provider for EchoKit, highlighting its speed advantages and providing setup instructions.
Pull request overview
This PR adds documentation for Day 11 of the "30 Days with EchoKit" series, focusing on switching to Groq as the LLM provider to achieve faster inference speeds. The document provides setup instructions and highlights the performance benefits of using Groq's LPU hardware with EchoKit.
Key Changes
- New documentation file explaining Groq integration benefits and speed improvements
- Step-by-step configuration instructions for switching to Groq as the LLM provider
- Docker and local Rust server restart commands for applying the configuration changes
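The restart step referenced above can be sketched as follows. The compose service name `echokit` and the binary invocation are assumptions for illustration, not taken from the EchoKit repository:

```shell
# Apply the updated configuration by restarting the server.
# Names below are assumptions; substitute your actual service/paths.

# Docker (assumed compose service name "echokit"):
docker compose restart echokit

# Local Rust server (assumed to be run via cargo from the repo root,
# passing the edited config file):
cargo run --release -- config.toml
```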
```toml
model = "openai/gpt-oss-120b"
history = 5
```

Replace the LLM endpoint URL, API key, and model name. [The production models from Groq](https://console.groq.com/docs/models) are `llama-3.1-8b-instant`, `llama-3.3-70b-versatile`, `meta-llama/llama-guard-4-12b`, `openai/gpt-oss-120b`, and `openai/gpt-oss-20b`.
Copilot AI · Dec 4, 2025
The model name 'openai/gpt-oss-120b' is listed as a Groq production model on line 40, but this appears to be incorrect. Groq specializes in running open-source models like Llama, not OpenAI models. Verify this model name exists in Groq's API or use a valid Groq model like 'llama-3.1-8b-instant' or 'llama-3.3-70b-versatile' which are mentioned on line 40.
Suggested change:

```diff
- model = "openai/gpt-oss-120b"
+ model = "llama-3.1-8b-instant"
  history = 5
```
Replace the LLM endpoint URL, API key, and model name. The production models from Groq are `llama-3.1-8b-instant`, `llama-3.3-70b-versatile`, and `meta-llama/llama-guard-4-12b`.
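Putting those three replacements together, a Groq-backed LLM section might look like the sketch below. Only `model` and `history` appear in the PR excerpt; the section name and the endpoint/key field names are assumptions, and the URL is Groq's OpenAI-compatible chat completions endpoint:

```toml
# Hypothetical EchoKit config fragment -- field names other than
# `model` and `history` are assumptions for illustration.
[llm]
url = "https://api.groq.com/openai/v1/chat/completions"  # Groq's OpenAI-compatible endpoint
api_key = "gsk_..."                                      # your Groq API key
model = "llama-3.1-8b-instant"                           # a Groq production model
history = 5                                              # turns of chat history to keep
```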
Co-authored-by: Copilot <[email protected]>