Replies: 7 comments
-
hi @geffzhang I think the solution is already provider agnostic, isn't it? SearchClient depends on …
-
Thanks. I see the repo only supports OpenAI/Azure OpenAI. I tried adding LLamaSharp support over the last two days, and the design is indeed service agnostic. But how do I register a custom service in SemanticMemoryConfig, e.g. via config.DataIngestion.EmbeddingGeneratorTypes? Also, QdrantConfig should have a vector size setting; 1536 is the OpenAI embedding vector size.
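For illustration, this is the kind of configuration I mean — property names here are my guesses, not the actual SemanticMemoryConfig/QdrantConfig API:

```csharp
// Illustrative only: a hypothetical way to register a non-OpenAI embedding
// generator and make the vector size configurable. Property names and values
// are assumptions, not the real API.
var config = new SemanticMemoryConfig();
config.DataIngestion.EmbeddingGeneratorTypes.Add("LlamaSharp"); // hypothetical value

var qdrantConfig = new QdrantConfig
{
    VectorSize = 1536 // hypothetical property; 1536 is the OpenAI ada-002 embedding size
};
```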
-
Can we extract the ITextEmbeddingGeneration and ITextGeneration interfaces into a separate NuGet package so that AI providers can implement them on their own?
-
@geffzhang the integration with llamasharp should look something like this:

```csharp
using Microsoft.SemanticMemory;
using Microsoft.SemanticMemory.AI;
using Microsoft.SemanticMemory.MemoryStorage.Qdrant;

public class Program
{
    public static void Main()
    {
        var llamaConfig = new LlamaConfig
        {
            // ...
        };

        var openAIConfig = new OpenAIConfig
        {
            EmbeddingModel = "text-embedding-ada-002",
            APIKey = Env.Var("OPENAI_API_KEY")
        };

        var memory = new MemoryClientBuilder()
            .WithCustomTextGeneration(new LlamaTextGeneration(llamaConfig))
            .WithOpenAITextEmbedding(openAIConfig)
            .WithQdrant(new QdrantConfig { /* ... */ });

        // ...
    }
}

public class LlamaConfig
{
    // ...
}

public class LlamaTextGeneration : ITextGeneration
{
    private readonly LlamaConfig _config;

    public LlamaTextGeneration(LlamaConfig config)
    {
        this._config = config;
    }

    public IAsyncEnumerable<string> GenerateTextAsync(
        string prompt,
        TextGenerationOptions options,
        CancellationToken cancellationToken = default)
    {
        // ...
        throw new NotImplementedException();
    }
}
```

The vector size is handled automatically, as long as it is consistent across executions. The code above is using: …
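For completeness, here's a hedged sketch of how the client above could then be used. `Build()` and the client methods are assumptions based on the serverless client; the exact names may differ in your version:

```csharp
// Hypothetical usage sketch: assumes the builder exposes a Build()-style step
// returning a client with ImportDocumentAsync/AskAsync, as in the
// Microsoft.SemanticMemory serverless client.
var client = memory.Build();
await client.ImportDocumentAsync("sample-document.pdf");
var answer = await client.AskAsync("What is this document about?");
Console.WriteLine(answer.Result);
```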
@xbotter it should be possible to use custom logic with the existing NuGet package.
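As a sketch of what that custom logic could look like for embeddings — assuming a builder hook analogous to `WithCustomTextGeneration` exists for embedding generators, and that the interface exposes a batch `GenerateEmbeddingsAsync` method (both assumptions; the exact signature depends on the package version):

```csharp
// Hypothetical sketch of a LLamaSharp-backed embedding generator. The method
// signature shown here is an assumption; check the actual
// ITextEmbeddingGeneration definition in the package you reference.
public class LlamaTextEmbeddingGeneration : ITextEmbeddingGeneration
{
    private readonly LlamaConfig _config;

    public LlamaTextEmbeddingGeneration(LlamaConfig config)
    {
        this._config = config;
    }

    public Task<IList<ReadOnlyMemory<float>>> GenerateEmbeddingsAsync(
        IList<string> data,
        CancellationToken cancellationToken = default)
    {
        // Call into LLamaSharp here, returning one embedding per input string.
        throw new NotImplementedException();
    }
}
```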
-
Thanks to the great work from @xbotter, LLamaSharp is about to have an integration for kernel-memory. I'd appreciate it if one of the KM developers could help review this PR.
-
done 👍 see v0.18 / PR #189 |
-
@geffzhang I think this is now solved. The solution lets you customize text generation, embedding generation, tokenization, and RAG parameters such as how many tokens can be used. Summarization also takes the model characteristics into account. As far as possible, the code will also log errors or throw exceptions if some value is incorrect, e.g. trying to run an 8000-token prompt with a model that supports only 4096 tokens, or trying to generate embeddings for a string 4000 tokens long with a model that supports only 2000 tokens, etc.
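As a minimal sketch of the kind of guard described (not the actual KM code; `tokenizer` and `maxInputTokens` are placeholders):

```csharp
// Minimal sketch of the validation described above; 'tokenizer' and
// 'maxInputTokens' are hypothetical placeholders, not KM's actual names.
int tokenCount = tokenizer.CountTokens(prompt);
if (tokenCount > maxInputTokens)
{
    throw new ArgumentOutOfRangeException(nameof(prompt),
        $"The prompt is {tokenCount} tokens, but the model supports only {maxInputTokens} tokens.");
}
```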
-
Semantic Kernel 1.0 is AI service agnostic, but Semantic Memory currently only supports Azure OpenAI/OpenAI. We should make Semantic Memory AI-provider agnostic as well.