0xrushi commented Oct 11, 2025

What does this PR do?

This PR introduces the ability to force LLMs to select from a predefined list of options using log probability scores (logprobs). This feature is especially useful for classification or structured generation tasks where the output space is discrete and known in advance (e.g., "apple", "orange", "banana").

Resolves: #142
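
At a high level, the selection mechanism can be sketched as follows: score each candidate option by the sum of its token logprobs (the joint log-probability of its token sequence) and return the argmax. The snippet below is an illustrative sketch only; the function name, token splits, and numbers are hypothetical and not the PR's internals.

# Illustrative sketch of logprob-based option selection.
# All logprob values below are made up; this is not the PR's implementation.

def select_option(option_token_logprobs: dict[str, list[float]]) -> str:
    # An option's score is the sum of its per-token logprobs,
    # i.e. the log of the joint probability of its token sequence.
    # Pick the option with the highest score.
    return max(option_token_logprobs, key=lambda o: sum(option_token_logprobs[o]))

candidates = {
    "positive": [-0.11, -0.05],  # hypothetical tokens "pos", "itive"
    "negative": [-2.90, -1.40],
    "neutral":  [-1.70, -0.95],
}
print(select_option(candidates))  # -> "positive"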

Quick test:

from adalflow import Generator
from adalflow.components.model_client import OpenAIClient

def test_logprobs():
    generator = Generator(
        model_client=OpenAIClient(),
        model_kwargs={"model": "gpt-3.5-turbo", "temperature": 0.1},
        template="Classify the sentiment: {{input_str}}"
    )
    
    test_text = "This is absolutely amazing!"
    
    print(f"Input: '{test_text}'")
    print("Options: ['positive', 'negative', 'neutral']")
    
    result = generator.select_from_options(
        options=["positive", "negative", "neutral"],
        prompt_kwargs={"input_str": test_text}
    )
    print(f"Result: '{result}'")
    
    
    # Call call_with_logprobs directly with raw input (bypass convert_inputs_to_api_kwargs)
    completion, logprobs = generator.model_client.call_with_logprobs(
        input=test_text,  # Raw input string
        model_kwargs={"model": "gpt-3.5-turbo", "temperature": 0.1},
        model_type=generator.model_type,
    )
    
    for i, choice_logprobs in enumerate(logprobs):
        print(f"  Choice {i+1}: {len(choice_logprobs)} tokens")
        for j, token_logprob in enumerate(choice_logprobs[:3]):
            print(f"    Token {j+1}: '{token_logprob.token}' (logprob: {token_logprob.logprob:.3f})")


if __name__ == "__main__":
    test_logprobs()

Output:

Input: 'This is absolutely amazing!'
Options: ['positive', 'negative', 'neutral']
Result: 'positive'
  Choice 1: 12 tokens
    Token 1: 'Thank' (logprob: -0.553)
    Token 2: ' you' (logprob: 0.000)
    Token 3: ' so' (logprob: -0.664)
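
As a side note on reading these numbers: a logprob is the natural log of the token's probability, so exponentiating recovers the probability. A quick illustrative check (not part of the PR):

import math

# logprob -> probability: p = e^logprob
print(math.exp(-0.553))  # ~0.575, i.e. 'Thank' had roughly a 57.5% probability
print(math.exp(0.000))   # = 1.0, ' you' was effectively certain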

Checklist:

  • Was this discussed/agreed via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Did you have fun?
