Conversation

@alabulei1 (Member)

This document outlines the transition to using Groq as the LLM provider for EchoKit, highlighting its speed advantages and providing setup instructions.

Copilot AI (Contributor) left a comment:

Pull request overview

This PR adds documentation for Day 11 of the "30 Days with EchoKit" series, focusing on switching to Groq as the LLM provider to achieve faster inference speeds. The document provides setup instructions and highlights the performance benefits of using Groq's LPU hardware with EchoKit.

Key Changes

  • New documentation file explaining Groq integration benefits and speed improvements
  • Step-by-step configuration instructions for switching to Groq as the LLM provider
  • Docker and local Rust server restart commands for applying the configuration changes (a rough sketch follows this list)
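
The exact restart commands live in the Day 11 document itself; the following is only a rough sketch, assuming the server runs in a Docker container named `echokit-server` with the config file mounted in, or is built locally with Cargo:

```bash
# Hypothetical container name -- substitute whatever your EchoKit container is called.
# A plain restart only picks up the change if the config file is mounted into the container.
docker restart echokit-server

# Or, for a local build of the Rust server (standard Cargo workflow assumed):
cargo run --release
```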

Comment on lines +36 to +40:

```toml
model = "openai/gpt-oss-120b"
history = 5
```

Replace the LLM endpoint URL, API key and model name. [The production models from Groq](https://console.groq.com/docs/models) are `llama-3.1-8b-instant`, `llama-3.3-70b-versatile`, `meta-llama/llama-guard-4-12b`, `openai/gpt-oss-120b`, and `openai/gpt-oss-20b`.
Copilot AI commented on Dec 4, 2025:

The model name 'openai/gpt-oss-120b' is listed as a Groq production model on line 40, but this appears to be incorrect. Groq specializes in running open-source models like Llama, not OpenAI models. Verify this model name exists in Groq's API or use a valid Groq model like 'llama-3.1-8b-instant' or 'llama-3.3-70b-versatile' which are mentioned on line 40.

Suggested change (replacing the quoted lines above):

```toml
model = "llama-3.1-8b-instant"
history = 5
```

Replace the LLM endpoint URL, API key and model name. The production models from Groq are `llama-3.1-8b-instant`, `llama-3.3-70b-versatile`, `meta-llama/llama-guard-4-12b`.
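
For readers following along, the quoted hunk shows only the tail of the LLM block in EchoKit's configuration. Below is a minimal sketch of what a Groq-pointed section might look like, assuming a TOML table with hypothetical `url` and `api_key` key names (check the actual EchoKit `config.toml` for the exact section and key names):

```toml
[llm]
# Key names below are illustrative -- the real EchoKit config may use different ones.
url = "https://api.groq.com/openai/v1/chat/completions"  # Groq's OpenAI-compatible endpoint
api_key = "gsk_your_groq_api_key"                         # issued at console.groq.com
model = "llama-3.3-70b-versatile"                         # any model ID from Groq's production list
history = 5                                               # carried over from the quoted snippet
```

Whatever goes in `model` has to match a model ID listed on Groq's models page exactly.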

@alabulei1 merged commit da8ee5f into main on Dec 4, 2025.
1 of 2 checks passed.
@alabulei1 deleted the alabulei1-patch-8 branch on December 8, 2025 at 07:57.