Math plugin - function return random result #5526
-
Hi. My plugin always returns "10", but if I give it more simple math like 5/5 then it returns "1". Is this a bug? How can I force it to return the value from our plugin?

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using System.ComponentModel;
using System.Text;
namespace SerenityFrameworkConsultant
{
    public class MyMathPlugin
    {
        [KernelFunction, Description("Get math result")]
        public string MathCalculator(string math) => "10";
    }

    public class MathAsking
    {
        public static async Task DoSomething()
        {
            string apikey = Environment.GetEnvironmentVariable("PRIVATE_OPENAI_KEY", EnvironmentVariableTarget.User)!;

            // Initialize the kernel
            IKernelBuilder kb = Kernel.CreateBuilder();
            kb.AddOpenAIChatCompletion("gpt-3.5-turbo-0125", apikey);
            kb.Services.AddLogging(c => c.AddConsole().SetMinimumLevel(LogLevel.Trace));
            kb.Plugins.AddFromType<MyMathPlugin>();
            Kernel kernel = kb.Build();

            // Create a new chat and let the model auto-invoke kernel functions
            var ai = kernel.GetRequiredService<IChatCompletionService>();
            ChatHistory chat = new();
            OpenAIPromptExecutionSettings settings = new() { ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions };

            while (true)
            {
                Console.Write("Question: ");
                string question = Console.ReadLine() ?? string.Empty;

                // Add the question to the history so the model sees the full conversation,
                // and record the assistant's reply for the next turn
                chat.AddUserMessage(question);
                ChatMessageContent chatResult = await ai.GetChatMessageContentAsync(chat, settings, kernel);
                chat.Add(chatResult);

                Console.Write($"\n>>> Result: {chatResult}\n\n");
            }
        }
    }
}
-
This is likely latent behavior of the model. For "complex" equations, the LLM will defer to what is returned by a tool call. If the AI is given something super easy (like 5/5), it will ignore it. This is similar to how a human might work. If you type a complex equation into a calculator, you'll accept the answer (whether it's right or wrong). For simpler equations, though, you may choose to ignore it.
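One workaround you can try (a sketch only, not guaranteed to work on every model) is to steer the model with a system message so it treats the plugin's output as authoritative. This uses the `ChatHistory` overload of `GetChatMessageContentAsync`, so the system message is sent with every request; the instruction text itself is an illustrative assumption:

```csharp
// Sketch: instruct the model to report the tool result verbatim.
ChatHistory chat = new();
chat.AddSystemMessage(
    "When a math question is asked, always call the MathCalculator function " +
    "and report its return value verbatim. Do not compute the answer yourself.");
chat.AddUserMessage("5/5");
ChatMessageContent result = await ai.GetChatMessageContentAsync(chat, settings, kernel);
```

Even with a system prompt like this, a small model can still override the tool result. If you need a hard guarantee, invoke the function directly with `kernel.InvokeAsync` and skip the model for the final answer.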
-
@matthewbolanos Correct me if I'm wrong, but is there a way, when we interact with the LLM, to ask it to treat the function's result as the source of truth? We can give the LLM an instruction, right? I guess this is not a prompting problem.