I am trying to implement a very simple use case where the conversational AI has access to a set of custom "skills" that I've imported into the kernel. At an API level, this would be done by passing the function definitions along with the chat request. However, after running the code below and intercepting the HTTP messages, it seems like the library isn't sending the functions to the OpenAI server:

```csharp
// httpClient, GetUserPrompt and ProcessAssistantResponse are defined elsewhere in my code.
// Usings assume the pre-1.0 Semantic Kernel package layout; adjust for your version.
using System;
using System.ComponentModel;
using System.Text;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.AI.ChatCompletion;
using Microsoft.SemanticKernel.SkillDefinition;

var kernel = Kernel.Builder
    .WithOpenAIChatCompletionService("gpt-4", Environment.GetEnvironmentVariable("OPENAI_KEY")!, httpClient: httpClient)
    .Build();

kernel.ImportSkill(new TestSkill(), "user");

var chatCompletion = kernel.GetService<IChatCompletion>();
var chat = chatCompletion.CreateNewChat();
chat.AddSystemMessage("You are a useful assistant.");

var chatSettings = new ChatRequestSettings { MaxTokens = 1024 };

while (true)
{
    string userMessage = GetUserPrompt();
    chat.AddUserMessage(userMessage);

    var replyStream = chatCompletion.GenerateMessageStreamAsync(chat, chatSettings);
    var answerBuilder = new StringBuilder();
    await foreach (var streamingResult in replyStream)
        ProcessAssistantResponse(streamingResult);
}

public class TestSkill
{
    [SKFunction, Description("Retrieves the current username")]
    public string GetUserName() => Environment.UserName;
}
```

I have seen samples in this repo where a prompt is invoked against a specific skill in the kernel, but I want GPT to intelligently choose which skill to use based on its description and my prompt. I know this is possible with the API, because in the past I've used a different OpenAI wrapper to achieve exactly this, except I had to manually write the skill-parsing engine myself, using reflection to extract all the skill methods and their descriptions from a custom attribute (similar to …). Any help is appreciated.
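For context, this is roughly what my manual approach looked like. It's a minimal sketch rather than my actual code: it just reflects over a skill object's public methods and pairs each one with the text of its `[Description]` attribute, which is the raw data an OpenAI function list is built from. `SkillReflector` and the commented usage are only illustrative names.

```csharp
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Reflection;

// Sketch only: collect each public method on a skill object together with the
// text of its [Description] attribute. TestSkill is the class from the snippet above.
public static class SkillReflector
{
    public static IEnumerable<(string Name, string Description)> DescribeMethods(object skill) =>
        skill.GetType()
             .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
             .Select(m => (m.Name,
                           m.GetCustomAttribute<DescriptionAttribute>()?.Description ?? string.Empty));
}

// Usage:
// foreach (var (name, description) in SkillReflector.DescribeMethods(new TestSkill()))
//     Console.WriteLine($"{name}: {description}");
```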
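And this is roughly the request shape I expected to see when intercepting the traffic. The `functions` and `function_call` fields come from OpenAI's public chat-completions API; the class name, hard-coded function entry, and message contents below are just a sketch, not what any library actually emits.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Sketch only: send a chat-completions request that advertises one function,
// letting the model decide whether to call it based on its description.
public static class RawFunctionCallDemo
{
    public static async Task SendAsync(string apiKey)
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", apiKey);

        var payload = new
        {
            model = "gpt-4",
            messages = new object[]
            {
                new { role = "system", content = "You are a useful assistant." },
                new { role = "user", content = "What is my username?" }
            },
            functions = new object[]
            {
                new
                {
                    name = "user_GetUserName",
                    description = "Retrieves the current username",
                    parameters = new { type = "object", properties = new { } }
                }
            },
            function_call = "auto"
        };

        var response = await http.PostAsJsonAsync(
            "https://api.openai.com/v1/chat/completions", payload);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```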
Replies: 1 comment
Tagging @markwallace-microsoft for visibility.