Use GitHub Models in code #186
base: main
Conversation
👋 Thanks for contributing @flcdrg! We will review the pull request and get back to you soon.
Check Broken PathsWe have automatically detected the following broken relative paths in your files. Check the file paths and associated broken paths inside them.
Check Broken URLsWe have automatically detected the following broken URLs in your files. Review and fix the paths to resolve this issue. Check the file paths and associated broken URLs inside them.
Pull Request Overview
This PR replaces the Ollama-based embedding generator with the GitHub Models inference API via the Azure AI Inference client and updates documentation to match.
- Swap out the Microsoft.Extensions.AI.Ollama package for AzureAIInference
- Update `Program.cs` to use `EmbeddingsClient` with a GitHub token
- Refresh the markdown guide to show the new setup
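For reference, the change described above amounts to replacing the `OllamaEmbeddingGenerator` with an `EmbeddingsClient` pointed at the GitHub Models endpoint. A minimal sketch follows — the endpoint URL and model name here are assumptions for illustration, not taken from this diff:

```csharp
using Azure;
using Azure.AI.Inference;

// Read the token GitHub Models authenticates with; fail fast if it is missing.
var githubToken = Environment.GetEnvironmentVariable("GITHUB_TOKEN")
    ?? throw new InvalidOperationException("GITHUB_TOKEN environment variable is not set.");

// Assumed endpoint for GitHub Models' Azure AI Inference-compatible API.
var client = new EmbeddingsClient(
    new Uri("https://models.inference.ai.azure.com"),
    new AzureKeyCredential(githubToken));

// Hypothetical usage: embed two strings with an assumed embedding model name.
var options = new EmbeddingsOptions(new[] { "hello", "world" })
{
    Model = "text-embedding-3-small"
};
var response = client.Embed(options);
```

Running this requires a `GITHUB_TOKEN` with access to GitHub Models; the rest of the PR then feeds the resulting embeddings into the in-memory vector store as before.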
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
File | Description |
---|---|
03-CoreGenerativeAITechniques/src/RAGSimple-02MEAIVectorsMemory/RAGSimple-02MEAIVectorsMemory.csproj | Change package reference from Ollama to AzureAIInference |
03-CoreGenerativeAITechniques/src/RAGSimple-02MEAIVectorsMemory/Program.cs | Use EmbeddingsClient and AzureKeyCredential instead of OllamaEmbeddingGenerator |
03-CoreGenerativeAITechniques/02-retrieval-augmented-generation.md | Update example to match new AzureAIInference code |
Comments suppressed due to low confidence (2)
03-CoreGenerativeAITechniques/02-retrieval-augmented-generation.md:88

- [nitpick] It would be helpful to add a brief note before this snippet instructing readers to set the `GITHUB_TOKEN` environment variable and include the `using Azure.Core;` and `using Azure.AI.Inference;` directives in their code.

```csharp
var githubToken = Environment.GetEnvironmentVariable("GITHUB_TOKEN") ?? throw new InvalidOperationException("GITHUB_TOKEN environment variable is not set.");
```
03-CoreGenerativeAITechniques/src/RAGSimple-02MEAIVectorsMemory/Program.cs:1

- The code uses `AzureKeyCredential` but only imports `Azure`. You should add `using Azure.Core;` (or replace `using Azure;` with `using Azure.Core;`) so the compiler can resolve `AzureKeyCredential`.

```csharp
using Azure;
```
The documentation suggests that the code is using GitHub Models, but the current implementation appears to rely on Ollama being present. This change updates the code to use GitHub Models.
Without this change, the existing code doesn't work on the regular "C# (.NET)" devcontainer, and you need to use the "C# (.NET) - Ollama" devcontainer as a workaround (which takes longer to load).
This is also a partial fix for #122