[bug] Inconsistent call of get_[async]_llm_ask #1327

@CalebCourier

Description

Describe the bug
In Guard._exec, we avoid calling get_llm_ask when both llm_api and model are None. This avoids importing litellm, which has potentially undesired side effects, such as a GET call to fetch model pricing. The same logic DOES NOT exist in AsyncGuard, which means users currently cannot avoid the litellm import when running asynchronously. This should be fixable using the same logic we use in Guard.
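
A minimal sketch of the guard logic described above. The scaffolding below is hypothetical (a stand-in helper and a module-level flag simulating the heavy litellm import), not Guardrails' actual implementation; it only illustrates deferring the resolver call until llm_api or model is actually provided.

```python
HEAVY_IMPORT_DONE = False  # simulates the side effect of `import litellm`


def get_async_llm_ask(llm_api, model):
    """Stand-in for the real helper; the real one imports litellm."""
    global HEAVY_IMPORT_DONE
    HEAVY_IMPORT_DONE = True  # the real import may trigger a network GET
    return lambda *args, **kwargs: (llm_api, model)


def async_exec(llm_api=None, model=None):
    # Mirror Guard._exec: only resolve the LLM callable when one of
    # llm_api / model is provided, so validate-only flows never pay
    # for the litellm import.
    api = None
    if llm_api is not None or model is not None:
        api = get_async_llm_ask(llm_api, model)
    return api


# Validation-only call: no heavy import happens.
assert async_exec() is None
assert HEAVY_IMPORT_DONE is False

# LLM-backed call: the import is performed lazily.
async_exec(model="gpt-4o-mini")
assert HEAVY_IMPORT_DONE is True
```

With this shape, AsyncGuard's validate-only path would behave like Guard's and skip the import entirely.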

To Reproduce

import logging
logging.basicConfig(level=logging.INFO)

import asyncio
from guardrails import AsyncGuard

aguard = AsyncGuard()  # type: ignore

res = asyncio.run(
    aguard.validate("hello world!")
)

print(res)

Expected behavior
Unnecessary calls should be avoided. The root cause lies in LiteLLM's import chain making the call, but we should also avoid unused imports on our side.

Library version:
Version 0.6.x

NOTE
It appears that the latest version of LiteLLM 1.77.x does not perform this call on import.

Also See
BerriAI/litellm#10293
